Sep 4 17:17:13.304310 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 17:17:13.304357 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Sep 4 15:58:01 -00 2024
Sep 4 17:17:13.304382 kernel: KASLR disabled due to lack of seed
Sep 4 17:17:13.304399 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:17:13.304417 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Sep 4 17:17:13.304433 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:17:13.304451 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 17:17:13.304468 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 17:17:13.304485 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 17:17:13.304501 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 17:17:13.304522 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 17:17:13.304538 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 17:17:13.304555 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 17:17:13.304571 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 17:17:13.304591 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 17:17:13.304612 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 17:17:13.304630 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 17:17:13.304647 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 17:17:13.304665 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 17:17:13.304682 kernel: printk: bootconsole [uart0] enabled
Sep 4 17:17:13.304699 kernel: NUMA: Failed to initialise from firmware
Sep 4 17:17:13.304717 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:17:13.304735 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 17:17:13.304753 kernel: Zone ranges:
Sep 4 17:17:13.304770 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 17:17:13.304787 kernel: DMA32 empty
Sep 4 17:17:13.304808 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 17:17:13.304825 kernel: Movable zone start for each node
Sep 4 17:17:13.304842 kernel: Early memory node ranges
Sep 4 17:17:13.304858 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 17:17:13.304875 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 17:17:13.304892 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 17:17:13.304909 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 17:17:13.304926 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 17:17:13.304982 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 17:17:13.305005 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 17:17:13.305023 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 17:17:13.305041 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:17:13.305070 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 17:17:13.305089 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:17:13.305115 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 17:17:13.305134 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:17:13.305153 kernel: psci: Trusted OS migration not required
Sep 4 17:17:13.305178 kernel: psci: SMC Calling Convention v1.1
Sep 4 17:17:13.305197 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:17:13.305217 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:17:13.305237 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 17:17:13.305255 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:17:13.305275 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:17:13.305294 kernel: CPU features: detected: Spectre-v2
Sep 4 17:17:13.305313 kernel: CPU features: detected: Spectre-v3a
Sep 4 17:17:13.305333 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:17:13.305353 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 17:17:13.305372 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 17:17:13.305403 kernel: alternatives: applying boot alternatives
Sep 4 17:17:13.305425 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9
Sep 4 17:17:13.305445 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:17:13.305464 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:17:13.305483 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:17:13.305503 kernel: Fallback order for Node 0: 0
Sep 4 17:17:13.305522 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 17:17:13.305541 kernel: Policy zone: Normal
Sep 4 17:17:13.305560 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:17:13.305578 kernel: software IO TLB: area num 2.
Sep 4 17:17:13.305597 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 17:17:13.305630 kernel: Memory: 3820280K/4030464K available (10240K kernel code, 2184K rwdata, 8084K rodata, 39296K init, 897K bss, 210184K reserved, 0K cma-reserved)
Sep 4 17:17:13.305650 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:17:13.305668 kernel: trace event string verifier disabled
Sep 4 17:17:13.305687 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:17:13.305709 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:17:13.305732 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:17:13.305750 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:17:13.305769 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:17:13.305790 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:17:13.305809 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:17:13.305830 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:17:13.305864 kernel: GICv3: 96 SPIs implemented
Sep 4 17:17:13.305884 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:17:13.305902 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:17:13.305922 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 17:17:13.308144 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 17:17:13.308197 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 17:17:13.308219 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 17:17:13.308240 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 17:17:13.308261 kernel: GICv3: using LPI property table @0x00000004000e0000
Sep 4 17:17:13.308282 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 17:17:13.308303 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Sep 4 17:17:13.308323 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:17:13.308355 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 17:17:13.308374 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 17:17:13.308393 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 17:17:13.308412 kernel: Console: colour dummy device 80x25
Sep 4 17:17:13.308430 kernel: printk: console [tty1] enabled
Sep 4 17:17:13.308449 kernel: ACPI: Core revision 20230628
Sep 4 17:17:13.308467 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 17:17:13.308486 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:17:13.308504 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 17:17:13.308527 kernel: landlock: Up and running.
Sep 4 17:17:13.308546 kernel: SELinux: Initializing.
Sep 4 17:17:13.308564 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:17:13.308583 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:17:13.308601 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:17:13.308620 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:17:13.308638 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:17:13.308657 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:17:13.308676 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 17:17:13.308698 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 17:17:13.308717 kernel: Remapping and enabling EFI services.
Sep 4 17:17:13.308735 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:17:13.308753 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:17:13.308771 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 17:17:13.308790 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Sep 4 17:17:13.308808 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 17:17:13.308826 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:17:13.308845 kernel: SMP: Total of 2 processors activated.
Sep 4 17:17:13.308864 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:17:13.308888 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 17:17:13.308907 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:17:13.308987 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:17:13.309021 kernel: alternatives: applying system-wide alternatives
Sep 4 17:17:13.309041 kernel: devtmpfs: initialized
Sep 4 17:17:13.309061 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:17:13.309081 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:17:13.309101 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:17:13.309121 kernel: SMBIOS 3.0.0 present.
Sep 4 17:17:13.309146 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 17:17:13.309165 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:17:13.309185 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:17:13.309204 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:17:13.309224 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:17:13.309244 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:17:13.309264 kernel: audit: type=2000 audit(0.326:1): state=initialized audit_enabled=0 res=1
Sep 4 17:17:13.309288 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:17:13.309308 kernel: cpuidle: using governor menu
Sep 4 17:17:13.309327 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:17:13.309348 kernel: ASID allocator initialised with 65536 entries
Sep 4 17:17:13.309368 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:17:13.309388 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:17:13.309408 kernel: Modules: 17536 pages in range for non-PLT usage
Sep 4 17:17:13.309428 kernel: Modules: 509056 pages in range for PLT usage
Sep 4 17:17:13.309449 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:17:13.309482 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:17:13.309502 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:17:13.309522 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:17:13.309544 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:17:13.309564 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:17:13.309583 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:17:13.309605 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:17:13.309626 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:17:13.309646 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:17:13.309677 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:17:13.309697 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:17:13.309718 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:17:13.309738 kernel: ACPI: Interpreter enabled
Sep 4 17:17:13.309758 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:17:13.309778 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 17:17:13.309798 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 17:17:13.312269 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:17:13.312545 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 17:17:13.312794 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 17:17:13.313159 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 17:17:13.313493 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 17:17:13.313535 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 17:17:13.313558 kernel: acpiphp: Slot [1] registered
Sep 4 17:17:13.313580 kernel: acpiphp: Slot [2] registered
Sep 4 17:17:13.313601 kernel: acpiphp: Slot [3] registered
Sep 4 17:17:13.313636 kernel: acpiphp: Slot [4] registered
Sep 4 17:17:13.313658 kernel: acpiphp: Slot [5] registered
Sep 4 17:17:13.313679 kernel: acpiphp: Slot [6] registered
Sep 4 17:17:13.313700 kernel: acpiphp: Slot [7] registered
Sep 4 17:17:13.313720 kernel: acpiphp: Slot [8] registered
Sep 4 17:17:13.313741 kernel: acpiphp: Slot [9] registered
Sep 4 17:17:13.313761 kernel: acpiphp: Slot [10] registered
Sep 4 17:17:13.313781 kernel: acpiphp: Slot [11] registered
Sep 4 17:17:13.313801 kernel: acpiphp: Slot [12] registered
Sep 4 17:17:13.313820 kernel: acpiphp: Slot [13] registered
Sep 4 17:17:13.313846 kernel: acpiphp: Slot [14] registered
Sep 4 17:17:13.313867 kernel: acpiphp: Slot [15] registered
Sep 4 17:17:13.313887 kernel: acpiphp: Slot [16] registered
Sep 4 17:17:13.313907 kernel: acpiphp: Slot [17] registered
Sep 4 17:17:13.313926 kernel: acpiphp: Slot [18] registered
Sep 4 17:17:13.316045 kernel: acpiphp: Slot [19] registered
Sep 4 17:17:13.316084 kernel: acpiphp: Slot [20] registered
Sep 4 17:17:13.316105 kernel: acpiphp: Slot [21] registered
Sep 4 17:17:13.316126 kernel: acpiphp: Slot [22] registered
Sep 4 17:17:13.316159 kernel: acpiphp: Slot [23] registered
Sep 4 17:17:13.316180 kernel: acpiphp: Slot [24] registered
Sep 4 17:17:13.316200 kernel: acpiphp: Slot [25] registered
Sep 4 17:17:13.316220 kernel: acpiphp: Slot [26] registered
Sep 4 17:17:13.316240 kernel: acpiphp: Slot [27] registered
Sep 4 17:17:13.316259 kernel: acpiphp: Slot [28] registered
Sep 4 17:17:13.316279 kernel: acpiphp: Slot [29] registered
Sep 4 17:17:13.316298 kernel: acpiphp: Slot [30] registered
Sep 4 17:17:13.316318 kernel: acpiphp: Slot [31] registered
Sep 4 17:17:13.316337 kernel: PCI host bridge to bus 0000:00
Sep 4 17:17:13.316660 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 17:17:13.316894 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 17:17:13.319302 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:17:13.319546 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 17:17:13.319844 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 17:17:13.320698 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 17:17:13.321550 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 17:17:13.321797 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 17:17:13.324145 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 17:17:13.324407 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:17:13.324670 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 17:17:13.324904 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 17:17:13.325214 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 17:17:13.325458 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 17:17:13.325680 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:17:13.325896 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 17:17:13.327573 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 17:17:13.327846 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 17:17:13.328241 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 17:17:13.328482 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 17:17:13.328721 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 17:17:13.328929 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 17:17:13.329280 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:17:13.329313 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 17:17:13.329335 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 17:17:13.329357 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 17:17:13.329377 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 17:17:13.329397 kernel: iommu: Default domain type: Translated
Sep 4 17:17:13.329433 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:17:13.329453 kernel: efivars: Registered efivars operations
Sep 4 17:17:13.329473 kernel: vgaarb: loaded
Sep 4 17:17:13.329493 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:17:13.329512 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:17:13.329533 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:17:13.329552 kernel: pnp: PnP ACPI init
Sep 4 17:17:13.329789 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 17:17:13.329828 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 17:17:13.329848 kernel: NET: Registered PF_INET protocol family
Sep 4 17:17:13.329867 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:17:13.329887 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:17:13.329906 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:17:13.329925 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:17:13.330021 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:17:13.330043 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:17:13.330063 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:17:13.330089 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:17:13.330108 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:17:13.330127 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:17:13.330146 kernel: kvm [1]: HYP mode not available
Sep 4 17:17:13.330166 kernel: Initialise system trusted keyrings
Sep 4 17:17:13.330185 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:17:13.330204 kernel: Key type asymmetric registered
Sep 4 17:17:13.330223 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:17:13.330256 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:17:13.330287 kernel: io scheduler mq-deadline registered
Sep 4 17:17:13.330306 kernel: io scheduler kyber registered
Sep 4 17:17:13.330326 kernel: io scheduler bfq registered
Sep 4 17:17:13.330557 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 17:17:13.330585 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 17:17:13.330605 kernel: ACPI: button: Power Button [PWRB]
Sep 4 17:17:13.330624 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 17:17:13.330643 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 17:17:13.330668 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:17:13.330689 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 17:17:13.330900 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 17:17:13.330927 kernel: printk: console [ttyS0] disabled
Sep 4 17:17:13.330968 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 17:17:13.330990 kernel: printk: console [ttyS0] enabled
Sep 4 17:17:13.331009 kernel: printk: bootconsole [uart0] disabled
Sep 4 17:17:13.331029 kernel: thunder_xcv, ver 1.0
Sep 4 17:17:13.331048 kernel: thunder_bgx, ver 1.0
Sep 4 17:17:13.331073 kernel: nicpf, ver 1.0
Sep 4 17:17:13.331093 kernel: nicvf, ver 1.0
Sep 4 17:17:13.331372 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:17:13.331580 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:17:12 UTC (1725470232)
Sep 4 17:17:13.331607 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:17:13.331628 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 17:17:13.331649 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:17:13.331669 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:17:13.331695 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:17:13.331715 kernel: Segment Routing with IPv6
Sep 4 17:17:13.331736 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:17:13.331755 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:17:13.331774 kernel: Key type dns_resolver registered
Sep 4 17:17:13.331794 kernel: registered taskstats version 1
Sep 4 17:17:13.331813 kernel: Loading compiled-in X.509 certificates
Sep 4 17:17:13.331832 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 6782952639b29daf968f5d0c3e73fb25e5af1d5e'
Sep 4 17:17:13.331851 kernel: Key type .fscrypt registered
Sep 4 17:17:13.331870 kernel: Key type fscrypt-provisioning registered
Sep 4 17:17:13.331893 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:17:13.331913 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:17:13.331932 kernel: ima: No architecture policies found
Sep 4 17:17:13.332023 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:17:13.332044 kernel: clk: Disabling unused clocks
Sep 4 17:17:13.332063 kernel: Freeing unused kernel memory: 39296K
Sep 4 17:17:13.332083 kernel: Run /init as init process
Sep 4 17:17:13.332102 kernel: with arguments:
Sep 4 17:17:13.332121 kernel: /init
Sep 4 17:17:13.332149 kernel: with environment:
Sep 4 17:17:13.332168 kernel: HOME=/
Sep 4 17:17:13.332187 kernel: TERM=linux
Sep 4 17:17:13.332206 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:17:13.332230 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:17:13.332254 systemd[1]: Detected virtualization amazon.
Sep 4 17:17:13.332276 systemd[1]: Detected architecture arm64.
Sep 4 17:17:13.332302 systemd[1]: Running in initrd.
Sep 4 17:17:13.332323 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:17:13.332343 systemd[1]: Hostname set to .
Sep 4 17:17:13.332365 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:17:13.332386 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:17:13.332407 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:17:13.332429 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:17:13.332451 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:17:13.332478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:17:13.332499 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:17:13.332521 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:17:13.332545 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:17:13.332568 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:17:13.332591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:17:13.332613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:17:13.332641 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:17:13.332663 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:17:13.332686 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:17:13.332708 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:17:13.332731 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:17:13.332754 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:17:13.332776 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:17:13.332800 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:17:13.332824 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:17:13.332852 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:17:13.332874 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:17:13.332896 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:17:13.332918 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:17:13.332972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:17:13.333031 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:17:13.333055 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:17:13.333078 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:17:13.333110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:17:13.333133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:17:13.333155 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:17:13.333177 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:17:13.333199 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:17:13.333227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:17:13.333254 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:17:13.333339 systemd-journald[250]: Collecting audit messages is disabled.
Sep 4 17:17:13.333385 kernel: Bridge firewalling registered
Sep 4 17:17:13.333415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:17:13.333438 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:17:13.333460 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:17:13.333482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:17:13.333503 systemd-journald[250]: Journal started
Sep 4 17:17:13.333542 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2c8279b470b899e34e716e7273ab15) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:17:13.262926 systemd-modules-load[251]: Inserted module 'overlay'
Sep 4 17:17:13.296494 systemd-modules-load[251]: Inserted module 'br_netfilter'
Sep 4 17:17:13.341093 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:17:13.343350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:17:13.364455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:17:13.372365 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:17:13.423071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:17:13.425444 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:17:13.435565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:17:13.448469 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:17:13.461607 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:17:13.472474 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:17:13.533799 dracut-cmdline[292]: dracut-dracut-053
Sep 4 17:17:13.536247 systemd-resolved[290]: Positive Trust Anchors:
Sep 4 17:17:13.536277 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:17:13.536345 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:17:13.557934 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9
Sep 4 17:17:13.728979 kernel: SCSI subsystem initialized
Sep 4 17:17:13.737093 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:17:13.750164 kernel: iscsi: registered transport (tcp)
Sep 4 17:17:13.772067 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:17:13.772141 kernel: QLogic iSCSI HBA Driver
Sep 4 17:17:13.780137 kernel: random: crng init done
Sep 4 17:17:13.780445 systemd-resolved[290]: Defaulting to hostname 'linux'.
Sep 4 17:17:13.784632 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:17:13.787100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:17:13.860045 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:17:13.873256 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:17:13.915501 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:17:13.915575 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:17:13.917178 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:17:13.987048 kernel: raid6: neonx8 gen() 6613 MB/s
Sep 4 17:17:14.004005 kernel: raid6: neonx4 gen() 6436 MB/s
Sep 4 17:17:14.021006 kernel: raid6: neonx2 gen() 5411 MB/s
Sep 4 17:17:14.038007 kernel: raid6: neonx1 gen() 3915 MB/s
Sep 4 17:17:14.055005 kernel: raid6: int64x8 gen() 3713 MB/s
Sep 4 17:17:14.072001 kernel: raid6: int64x4 gen() 3672 MB/s
Sep 4 17:17:14.089087 kernel: raid6: int64x2 gen() 3522 MB/s
Sep 4 17:17:14.107024 kernel: raid6: int64x1 gen() 2723 MB/s
Sep 4 17:17:14.107122 kernel: raid6: using algorithm neonx8 gen() 6613 MB/s
Sep 4 17:17:14.125818 kernel: raid6: .... xor() 4831 MB/s, rmw enabled
Sep 4 17:17:14.125922 kernel: raid6: using neon recovery algorithm
Sep 4 17:17:14.135002 kernel: xor: measuring software checksum speed
Sep 4 17:17:14.136993 kernel: 8regs : 11099 MB/sec
Sep 4 17:17:14.137996 kernel: 32regs : 11961 MB/sec
Sep 4 17:17:14.140868 kernel: arm64_neon : 9521 MB/sec
Sep 4 17:17:14.140967 kernel: xor: using function: 32regs (11961 MB/sec)
Sep 4 17:17:14.232036 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:17:14.250648 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:17:14.260299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:17:14.304749 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 4 17:17:14.315759 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:17:14.328811 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:17:14.373849 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Sep 4 17:17:14.445140 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:17:14.454263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:17:14.612308 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:17:14.627275 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:17:14.674675 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:17:14.678716 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:17:14.688839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:17:14.691429 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:17:14.710535 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:17:14.744783 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:17:14.845992 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 17:17:14.846067 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 4 17:17:14.855132 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:17:14.855780 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:17:14.859682 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:17:14.859822 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:17:14.868921 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:17:14.871214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:17:14.871387 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:17:14.906456 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:75:a7:80:ca:71
Sep 4 17:17:14.875865 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:17:14.884913 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:17:14.924451 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 4 17:17:14.924497 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:17:14.916374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:17:14.936993 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:17:14.946433 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:17:14.946623 kernel: GPT:9289727 != 16777215
Sep 4 17:17:14.946659 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:17:14.947930 kernel: GPT:9289727 != 16777215
Sep 4 17:17:14.948018 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:17:14.949696 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:17:14.975667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:17:14.995098 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:17:15.059001 kernel: BTRFS: device fsid 3e706a0f-a579-4862-bc52-e66e95e66d87 devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (549)
Sep 4 17:17:15.066895 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:17:15.113008 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (524)
Sep 4 17:17:15.180493 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:17:15.240411 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 17:17:15.242885 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:17:15.260359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:17:15.276114 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:17:15.294336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:17:15.305031 disk-uuid[661]: Primary Header is updated.
Sep 4 17:17:15.305031 disk-uuid[661]: Secondary Entries is updated.
Sep 4 17:17:15.305031 disk-uuid[661]: Secondary Header is updated.
Sep 4 17:17:15.318991 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:17:15.326989 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:17:15.337980 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:17:16.333971 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:17:16.337626 disk-uuid[662]: The operation has completed successfully.
Sep 4 17:17:16.564816 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:17:16.566261 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:17:16.624327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:17:16.642485 sh[1005]: Success
Sep 4 17:17:16.672157 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:17:16.800782 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:17:16.825344 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:17:16.838111 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:17:16.867317 kernel: BTRFS info (device dm-0): first mount of filesystem 3e706a0f-a579-4862-bc52-e66e95e66d87
Sep 4 17:17:16.867441 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:17:16.867474 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:17:16.870261 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:17:16.870356 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:17:16.892001 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:17:16.904416 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:17:16.909313 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:17:16.921702 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:17:16.941734 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:17:16.964511 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:17:16.964589 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:17:16.964617 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:17:16.973734 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:17:16.991137 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:17:16.994039 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:17:17.003812 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:17:17.012317 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:17:17.189388 ignition[1100]: Ignition 2.19.0
Sep 4 17:17:17.191108 ignition[1100]: Stage: fetch-offline
Sep 4 17:17:17.191756 ignition[1100]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:17.191783 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:17:17.192382 ignition[1100]: Ignition finished successfully
Sep 4 17:17:17.199774 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:17:17.218080 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:17:17.230292 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:17:17.302604 systemd-networkd[1207]: lo: Link UP
Sep 4 17:17:17.302621 systemd-networkd[1207]: lo: Gained carrier
Sep 4 17:17:17.307415 systemd-networkd[1207]: Enumeration completed
Sep 4 17:17:17.308366 systemd-networkd[1207]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:17:17.308374 systemd-networkd[1207]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:17:17.310149 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:17:17.316758 systemd[1]: Reached target network.target - Network.
Sep 4 17:17:17.333509 systemd-networkd[1207]: eth0: Link UP
Sep 4 17:17:17.333524 systemd-networkd[1207]: eth0: Gained carrier
Sep 4 17:17:17.333542 systemd-networkd[1207]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:17:17.342041 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:17:17.366097 systemd-networkd[1207]: eth0: DHCPv4 address 172.31.22.59/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:17:17.376147 ignition[1209]: Ignition 2.19.0
Sep 4 17:17:17.376176 ignition[1209]: Stage: fetch
Sep 4 17:17:17.376850 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:17.376878 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:17:17.377068 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:17:17.377542 ignition[1209]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 4 17:17:17.577842 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #2
Sep 4 17:17:17.587744 ignition[1209]: PUT result: OK
Sep 4 17:17:17.592375 ignition[1209]: parsed url from cmdline: ""
Sep 4 17:17:17.592601 ignition[1209]: no config URL provided
Sep 4 17:17:17.594134 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:17:17.594180 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:17:17.594299 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:17:17.598459 ignition[1209]: PUT result: OK
Sep 4 17:17:17.600610 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:17:17.604768 ignition[1209]: GET result: OK
Sep 4 17:17:17.605733 ignition[1209]: parsing config with SHA512: 2d68e111cc6b39245eb483c81545810a473e6bad0977dd7b9e31e0a6fc79911227bd336c675646c1f13d29d7ed50176186e22b844a7a03114ded6301b664e340
Sep 4 17:17:17.614005 unknown[1209]: fetched base config from "system"
Sep 4 17:17:17.614034 unknown[1209]: fetched base config from "system"
Sep 4 17:17:17.614048 unknown[1209]: fetched user config from "aws"
Sep 4 17:17:17.616753 ignition[1209]: fetch: fetch complete
Sep 4 17:17:17.617551 ignition[1209]: fetch: fetch passed
Sep 4 17:17:17.617835 ignition[1209]: Ignition finished successfully
Sep 4 17:17:17.624280 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:17:17.643494 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:17:17.672463 ignition[1217]: Ignition 2.19.0
Sep 4 17:17:17.672500 ignition[1217]: Stage: kargs
Sep 4 17:17:17.673636 ignition[1217]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:17.673680 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:17:17.673910 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:17:17.677971 ignition[1217]: PUT result: OK
Sep 4 17:17:17.686528 ignition[1217]: kargs: kargs passed
Sep 4 17:17:17.686715 ignition[1217]: Ignition finished successfully
Sep 4 17:17:17.692680 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:17:17.715416 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:17:17.745063 ignition[1223]: Ignition 2.19.0
Sep 4 17:17:17.745089 ignition[1223]: Stage: disks
Sep 4 17:17:17.745784 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:17.745811 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:17:17.746598 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:17:17.749803 ignition[1223]: PUT result: OK
Sep 4 17:17:17.759618 ignition[1223]: disks: disks passed
Sep 4 17:17:17.760071 ignition[1223]: Ignition finished successfully
Sep 4 17:17:17.765026 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:17:17.767571 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:17:17.771388 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:17:17.775202 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:17:17.777124 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:17:17.779051 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:17:17.795424 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:17:17.851551 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:17:17.858078 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:17:17.876243 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:17:17.988998 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 901d46b0-2319-4536-8a6d-46889db73e8c r/w with ordered data mode. Quota mode: none.
Sep 4 17:17:17.991261 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:17:17.995462 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:17:18.018167 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:17:18.032406 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:17:18.036591 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:17:18.040175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:17:18.042288 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:17:18.051485 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:17:18.058872 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:17:18.079016 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250)
Sep 4 17:17:18.084488 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:17:18.084715 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:17:18.084753 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:17:18.102007 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:17:18.106545 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:17:18.190920 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:17:18.201633 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:17:18.212020 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:17:18.223265 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:17:18.424184 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:17:18.440376 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:17:18.461141 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:17:18.471126 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:17:18.475900 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:17:18.537998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:17:18.540719 ignition[1362]: INFO : Ignition 2.19.0
Sep 4 17:17:18.543364 ignition[1362]: INFO : Stage: mount
Sep 4 17:17:18.545363 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:18.547393 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:17:18.547393 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:17:18.552479 ignition[1362]: INFO : PUT result: OK
Sep 4 17:17:18.557508 ignition[1362]: INFO : mount: mount passed
Sep 4 17:17:18.561014 ignition[1362]: INFO : Ignition finished successfully
Sep 4 17:17:18.563063 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:17:18.573170 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:17:19.004791 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:17:19.028167 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1374)
Sep 4 17:17:19.028231 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:17:19.031452 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:17:19.031519 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:17:19.037984 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:17:19.042858 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:17:19.086558 ignition[1390]: INFO : Ignition 2.19.0
Sep 4 17:17:19.090164 ignition[1390]: INFO : Stage: files
Sep 4 17:17:19.090164 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:19.090164 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:17:19.096682 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:17:19.096682 ignition[1390]: INFO : PUT result: OK
Sep 4 17:17:19.103503 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:17:19.106376 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:17:19.106376 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:17:19.114800 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:17:19.117634 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:17:19.120578 unknown[1390]: wrote ssh authorized keys file for user: core
Sep 4 17:17:19.123006 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:17:19.127500 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:17:19.131520 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:17:19.182816 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:17:19.261209 systemd-networkd[1207]: eth0: Gained IPv6LL
Sep 4 17:17:19.271010 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:17:19.271010 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:17:19.271010 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:17:19.281425 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Sep 4 17:17:19.645004 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:17:20.076218 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:17:20.076218 ignition[1390]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:17:20.084208 ignition[1390]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:17:20.084208 ignition[1390]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:17:20.084208 ignition[1390]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:17:20.084208 ignition[1390]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:17:20.084208 ignition[1390]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:17:20.084208 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:17:20.084208 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:17:20.084208 ignition[1390]: INFO : files: files passed
Sep 4 17:17:20.084208 ignition[1390]: INFO : Ignition finished successfully
Sep 4 17:17:20.088145 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:17:20.114605 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:17:20.141621 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:17:20.147821 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:17:20.151407 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:17:20.179639 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:17:20.179639 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:17:20.194494 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:17:20.199284 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:17:20.207215 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:17:20.222847 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:17:20.280104 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:17:20.280589 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:17:20.288447 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:17:20.290496 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:17:20.292606 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:17:20.311380 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:17:20.343033 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:17:20.356439 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:17:20.390695 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:17:20.395169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:17:20.399660 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:17:20.402933 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:17:20.403726 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:17:20.410032 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:17:20.413289 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:17:20.416593 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:17:20.421610 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:17:20.429363 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:17:20.432544 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:17:20.436658 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:17:20.441493 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:17:20.444788 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:17:20.448713 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:17:20.451260 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:17:20.451535 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:17:20.459266 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:17:20.469203 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:17:20.473817 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:17:20.477206 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:17:20.481080 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:17:20.481416 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:17:20.489857 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:17:20.490715 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:17:20.500349 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:17:20.500834 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:17:20.517180 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:17:20.530389 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:17:20.532495 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:17:20.538289 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:17:20.546760 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:17:20.551261 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:17:20.568140 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:17:20.572176 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:17:20.597117 ignition[1443]: INFO : Ignition 2.19.0
Sep 4 17:17:20.597117 ignition[1443]: INFO : Stage: umount
Sep 4 17:17:20.597117 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:20.597117 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:17:20.620653 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:17:20.623867 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:17:20.626889 ignition[1443]: INFO : PUT result: OK
Sep 4 17:17:20.632691 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:17:20.633027 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:17:20.643230 ignition[1443]: INFO : umount: umount passed
Sep 4 17:17:20.643230 ignition[1443]: INFO : Ignition finished successfully
Sep 4 17:17:20.640577 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:17:20.641417 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:17:20.650286 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:17:20.650489 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:17:20.653277 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:17:20.653448 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:17:20.659548 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:17:20.659693 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:17:20.661896 systemd[1]: Stopped target network.target - Network.
Sep 4 17:17:20.663852 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:17:20.664721 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:17:20.683601 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:17:20.685729 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:17:20.687790 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:17:20.690409 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:17:20.692117 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:17:20.693993 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:17:20.694115 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:17:20.696063 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:17:20.696149 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:17:20.698674 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:17:20.698781 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:17:20.711415 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:17:20.711510 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:17:20.713488 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:17:20.713566 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:17:20.715781 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:17:20.718229 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:17:20.730211 systemd-networkd[1207]: eth0: DHCPv6 lease lost
Sep 4 17:17:20.737808 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:17:20.738581 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:17:20.757130 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:17:20.757749 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:17:20.766671 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:17:20.766776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:17:20.791267 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:17:20.794678 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:17:20.794807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:17:20.798672 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:17:20.798784 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:17:20.802473 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:17:20.802654 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:17:20.802855 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:17:20.802965 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:17:20.805314 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:17:20.848415 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:17:20.850354 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:17:20.856566 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:17:20.856674 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:17:20.858823 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:17:20.858911 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:17:20.863894 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:17:20.864063 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:17:20.874515 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:17:20.874712 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:17:20.877421 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:17:20.877589 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:17:20.894401 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:17:20.901482 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:17:20.901635 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:17:20.904117 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:17:20.904211 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:17:20.907133 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:17:20.907221 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:17:20.909885 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:17:20.910004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:17:20.913328 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:17:20.913521 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:17:20.964788 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:17:20.966622 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:17:20.970867 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:17:20.991492 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:17:21.011380 systemd[1]: Switching root.
Sep 4 17:17:21.056150 systemd-journald[250]: Journal stopped
Sep 4 17:17:23.416717 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:17:23.416867 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:17:23.416914 kernel: SELinux: policy capability open_perms=1
Sep 4 17:17:23.416968 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:17:23.417004 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:17:23.417039 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:17:23.417077 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:17:23.417110 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:17:23.417142 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:17:23.417177 kernel: audit: type=1403 audit(1725470241.484:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:17:23.417221 systemd[1]: Successfully loaded SELinux policy in 58.229ms.
Sep 4 17:17:23.417278 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.679ms.
Sep 4 17:17:23.417316 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:17:23.417350 systemd[1]: Detected virtualization amazon.
Sep 4 17:17:23.417383 systemd[1]: Detected architecture arm64.
Sep 4 17:17:23.417413 systemd[1]: Detected first boot.
Sep 4 17:17:23.417448 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:17:23.417483 zram_generator::config[1484]: No configuration found.
Sep 4 17:17:23.417535 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:17:23.417579 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:17:23.417612 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:17:23.417669 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:17:23.417706 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:17:23.417743 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:17:23.417773 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:17:23.417805 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:17:23.417845 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:17:23.417880 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:17:23.417912 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:17:23.424133 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:17:23.424204 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:17:23.424242 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:17:23.424274 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:17:23.424307 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:17:23.424338 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:17:23.424381 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:17:23.424416 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:17:23.424446 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:17:23.427054 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:17:23.427093 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:17:23.427124 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:17:23.427155 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:17:23.427193 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:17:23.427227 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:17:23.427259 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:17:23.427291 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:17:23.427321 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:17:23.427350 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:17:23.427382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:17:23.427412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:17:23.427442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:17:23.427474 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:17:23.427510 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:17:23.427545 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:17:23.427577 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:17:23.427609 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:17:23.427663 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:17:23.427700 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:17:23.427737 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:17:23.427779 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:17:23.427811 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:17:23.427852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:17:23.427885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:17:23.427919 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:17:23.427988 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:17:23.428026 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:17:23.428056 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:17:23.428094 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:17:23.428124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:17:23.428165 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:17:23.428197 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:17:23.428228 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:17:23.428258 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:17:23.428293 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:17:23.428344 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:17:23.428375 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:17:23.428406 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:17:23.428437 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:17:23.428472 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:17:23.428505 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:17:23.428539 systemd[1]: Stopped verity-setup.service.
Sep 4 17:17:23.428571 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:17:23.428601 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:17:23.428631 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:17:23.428665 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:17:23.428695 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:17:23.428725 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:17:23.428755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:17:23.428785 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:17:23.428815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:17:23.428844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:17:23.428878 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:17:23.428909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:17:23.434127 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:17:23.434230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:17:23.434266 kernel: ACPI: bus type drm_connector registered
Sep 4 17:17:23.434296 kernel: fuse: init (API version 7.39)
Sep 4 17:17:23.434329 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:17:23.436746 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:17:23.436813 kernel: loop: module loaded
Sep 4 17:17:23.436856 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:17:23.436893 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:17:23.436927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:17:23.436982 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:17:23.437015 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:17:23.437046 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:17:23.437084 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:17:23.437115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:17:23.437146 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:17:23.437176 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:17:23.437209 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:17:23.437240 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:17:23.437273 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:17:23.437306 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:17:23.437341 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:17:23.437372 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:17:23.437406 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:17:23.437436 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:17:23.437467 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:17:23.437502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:17:23.437577 systemd-journald[1568]: Collecting audit messages is disabled.
Sep 4 17:17:23.437629 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:17:23.437662 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:17:23.437693 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:17:23.437726 systemd-journald[1568]: Journal started
Sep 4 17:17:23.437780 systemd-journald[1568]: Runtime Journal (/run/log/journal/ec2c8279b470b899e34e716e7273ab15) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:17:22.600548 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:17:22.630410 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 17:17:22.631774 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:17:23.461180 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:17:23.479077 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:17:23.468622 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:17:23.495040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:17:23.497890 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:17:23.526535 systemd-tmpfiles[1589]: ACLs are not supported, ignoring.
Sep 4 17:17:23.526573 systemd-tmpfiles[1589]: ACLs are not supported, ignoring.
Sep 4 17:17:23.550973 kernel: loop0: detected capacity change from 0 to 193208
Sep 4 17:17:23.559636 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:17:23.564868 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:17:23.576357 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:17:23.590494 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:17:23.606878 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:17:23.625442 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:17:23.665887 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:17:23.680206 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:17:23.686348 systemd-journald[1568]: Time spent on flushing to /var/log/journal/ec2c8279b470b899e34e716e7273ab15 is 97.213ms for 922 entries.
Sep 4 17:17:23.686348 systemd-journald[1568]: System Journal (/var/log/journal/ec2c8279b470b899e34e716e7273ab15) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:17:23.814242 systemd-journald[1568]: Received client request to flush runtime journal.
Sep 4 17:17:23.814684 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:17:23.814773 kernel: loop1: detected capacity change from 0 to 52536
Sep 4 17:17:23.763203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:17:23.779576 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:17:23.825092 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:17:23.829394 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:17:23.839984 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:17:23.865518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:17:23.887172 kernel: loop2: detected capacity change from 0 to 114288
Sep 4 17:17:23.973037 kernel: loop3: detected capacity change from 0 to 65520
Sep 4 17:17:23.978418 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Sep 4 17:17:23.978457 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Sep 4 17:17:23.996410 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:17:24.053168 kernel: loop4: detected capacity change from 0 to 193208
Sep 4 17:17:24.097015 kernel: loop5: detected capacity change from 0 to 52536
Sep 4 17:17:24.128016 kernel: loop6: detected capacity change from 0 to 114288
Sep 4 17:17:24.157017 kernel: loop7: detected capacity change from 0 to 65520
Sep 4 17:17:24.184015 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 17:17:24.185054 (sd-merge)[1641]: Merged extensions into '/usr'.
Sep 4 17:17:24.202628 systemd[1]: Reloading requested from client PID 1597 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:17:24.202670 systemd[1]: Reloading...
Sep 4 17:17:24.499317 zram_generator::config[1671]: No configuration found.
Sep 4 17:17:24.646213 ldconfig[1594]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:17:24.837995 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:17:24.966500 systemd[1]: Reloading finished in 762 ms.
Sep 4 17:17:25.013603 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:17:25.018241 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:17:25.041479 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:17:25.065785 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:17:25.106194 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:17:25.111445 systemd[1]: Reloading requested from client PID 1717 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:17:25.111726 systemd[1]: Reloading...
Sep 4 17:17:25.129588 systemd-tmpfiles[1718]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:17:25.130667 systemd-tmpfiles[1718]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:17:25.134494 systemd-tmpfiles[1718]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:17:25.135231 systemd-tmpfiles[1718]: ACLs are not supported, ignoring.
Sep 4 17:17:25.135401 systemd-tmpfiles[1718]: ACLs are not supported, ignoring.
Sep 4 17:17:25.146643 systemd-tmpfiles[1718]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:17:25.146688 systemd-tmpfiles[1718]: Skipping /boot
Sep 4 17:17:25.171721 systemd-tmpfiles[1718]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:17:25.171756 systemd-tmpfiles[1718]: Skipping /boot
Sep 4 17:17:25.295998 zram_generator::config[1746]: No configuration found.
Sep 4 17:17:25.540984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:17:25.669191 systemd[1]: Reloading finished in 556 ms.
Sep 4 17:17:25.706996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:17:25.728755 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:17:25.738365 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:17:25.754322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:17:25.761558 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:17:25.772316 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:17:25.776510 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:17:25.790589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:17:25.804662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:17:25.820649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:17:25.827890 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:17:25.830295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:17:25.849428 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:17:25.856604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:17:25.858232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:17:25.873583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:17:25.881147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:17:25.883425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:17:25.883827 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:17:25.892034 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:17:25.906351 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:17:25.916312 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:17:25.919733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:17:25.920314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:17:25.983327 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:17:25.988695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:17:25.990126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:17:25.994650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:17:26.004648 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:17:26.006124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:17:26.008827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:17:26.027624 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:17:26.040097 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:17:26.041126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:17:26.056132 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:17:26.060158 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:17:26.105374 systemd-udevd[1801]: Using default interface naming scheme 'v255'.
Sep 4 17:17:26.117745 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:17:26.140071 augenrules[1836]: No rules
Sep 4 17:17:26.142491 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:17:26.180104 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:17:26.193216 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:17:26.354327 systemd-networkd[1847]: lo: Link UP
Sep 4 17:17:26.355446 systemd-networkd[1847]: lo: Gained carrier
Sep 4 17:17:26.361782 systemd-resolved[1800]: Positive Trust Anchors:
Sep 4 17:17:26.361837 systemd-resolved[1800]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:17:26.361906 systemd-resolved[1800]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:17:26.363533 systemd-networkd[1847]: Enumeration completed
Sep 4 17:17:26.363840 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:17:26.375709 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:17:26.381661 systemd-resolved[1800]: Defaulting to hostname 'linux'.
Sep 4 17:17:26.389692 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:17:26.392187 systemd[1]: Reached target network.target - Network.
Sep 4 17:17:26.393920 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:17:26.440295 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:17:26.449106 (udev-worker)[1849]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:17:26.472021 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1848)
Sep 4 17:17:26.481997 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1848)
Sep 4 17:17:26.575736 systemd-networkd[1847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:17:26.576526 systemd-networkd[1847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:17:26.581495 systemd-networkd[1847]: eth0: Link UP
Sep 4 17:17:26.582553 systemd-networkd[1847]: eth0: Gained carrier
Sep 4 17:17:26.583583 systemd-networkd[1847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:17:26.595398 systemd-networkd[1847]: eth0: DHCPv4 address 172.31.22.59/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:17:26.636111 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1865)
Sep 4 17:17:26.911507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:17:26.953072 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:17:26.966170 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:17:26.978506 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:17:26.997221 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:17:27.027155 lvm[1962]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:17:27.062461 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:17:27.072197 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:17:27.073816 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:17:27.089562 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:17:27.099732 lvm[1969]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:17:27.120565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:17:27.124328 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:17:27.126739 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:17:27.129300 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:17:27.132378 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:17:27.140077 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:17:27.144586 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:17:27.147497 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:17:27.147582 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:17:27.149884 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:17:27.153481 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:17:27.160394 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:17:27.176634 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:17:27.180491 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:17:27.183706 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:17:27.188127 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:17:27.190724 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:17:27.193287 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:17:27.193372 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Sep 4 17:17:27.201365 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:17:27.215035 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:17:27.222705 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:17:27.230339 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:17:27.251602 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:17:27.254586 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:17:27.267185 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:17:27.279462 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:17:27.289228 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:17:27.297056 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:17:27.304588 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:17:27.315777 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:17:27.337542 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:17:27.349496 jq[1978]: false Sep 4 17:17:27.340773 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:17:27.344052 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:17:27.354563 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:17:27.362411 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:17:27.371806 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 4 17:17:27.373723 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:17:27.416073 extend-filesystems[1979]: Found loop4 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found loop5 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found loop6 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found loop7 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found nvme0n1 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found nvme0n1p1 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found nvme0n1p2 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found nvme0n1p3 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found usr Sep 4 17:17:27.416073 extend-filesystems[1979]: Found nvme0n1p4 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found nvme0n1p6 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found nvme0n1p7 Sep 4 17:17:27.416073 extend-filesystems[1979]: Found nvme0n1p9 Sep 4 17:17:27.416073 extend-filesystems[1979]: Checking size of /dev/nvme0n1p9 Sep 4 17:17:27.429808 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:18:26 UTC 2024 (1): Starting Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: ---------------------------------------------------- Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: corporation. 
Support and training for ntp-4 are Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: available at https://www.nwtime.org/support Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: ---------------------------------------------------- Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: proto: precision = 0.096 usec (-23) Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: basedate set to 2024-08-23 Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: gps base set to 2024-08-25 (week 2329) Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: Listen normally on 3 eth0 172.31.22.59:123 Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: Listen normally on 4 lo [::1]:123 Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: bind(21) AF_INET6 fe80::475:a7ff:fe80:ca71%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: unable to create socket on eth0 (5) for fe80::475:a7ff:fe80:ca71%2#123 Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: failed to init interface for address fe80::475:a7ff:fe80:ca71%2 Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: Listening on routing socket on fd #21 for interface updates Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:17:27.525704 ntpd[1981]: 4 Sep 17:17:27 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:17:27.451623 ntpd[1981]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:18:26 UTC 2024 (1): Starting Sep 4 17:17:27.541814 systemd[1]: ssh-key-proc-cmdline.service: Deactivated 
successfully. Sep 4 17:17:27.451693 ntpd[1981]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:17:27.563719 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:17:27.451725 ntpd[1981]: ---------------------------------------------------- Sep 4 17:17:27.451747 ntpd[1981]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:17:27.451784 ntpd[1981]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:17:27.451813 ntpd[1981]: corporation. Support and training for ntp-4 are Sep 4 17:17:27.451834 ntpd[1981]: available at https://www.nwtime.org/support Sep 4 17:17:27.451854 ntpd[1981]: ---------------------------------------------------- Sep 4 17:17:27.461186 ntpd[1981]: proto: precision = 0.096 usec (-23) Sep 4 17:17:27.461806 ntpd[1981]: basedate set to 2024-08-23 Sep 4 17:17:27.461838 ntpd[1981]: gps base set to 2024-08-25 (week 2329) Sep 4 17:17:27.473311 ntpd[1981]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:17:27.473421 ntpd[1981]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:17:27.476234 ntpd[1981]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:17:27.476315 ntpd[1981]: Listen normally on 3 eth0 172.31.22.59:123 Sep 4 17:17:27.476385 ntpd[1981]: Listen normally on 4 lo [::1]:123 Sep 4 17:17:27.476469 ntpd[1981]: bind(21) AF_INET6 fe80::475:a7ff:fe80:ca71%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:17:27.476510 ntpd[1981]: unable to create socket on eth0 (5) for fe80::475:a7ff:fe80:ca71%2#123 Sep 4 17:17:27.476545 ntpd[1981]: failed to init interface for address fe80::475:a7ff:fe80:ca71%2 Sep 4 17:17:27.476607 ntpd[1981]: Listening on routing socket on fd #21 for interface updates Sep 4 17:17:27.488011 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:17:27.488066 ntpd[1981]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:17:27.610016 extend-filesystems[1979]: Resized partition /dev/nvme0n1p9 Sep 4 17:17:27.615514 
dbus-daemon[1977]: [system] SELinux support is enabled Sep 4 17:17:27.646432 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 17:17:27.646750 jq[1991]: true Sep 4 17:17:27.623339 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:17:27.647691 extend-filesystems[2016]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:17:27.648463 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:17:27.649015 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:17:27.681189 dbus-daemon[1977]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1847 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:17:27.687051 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:17:27.686818 dbus-daemon[1977]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:17:27.687197 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:17:27.690196 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:17:27.690265 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:17:27.711449 jq[2020]: true Sep 4 17:17:27.700527 (ntainerd)[2021]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:17:27.738349 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Sep 4 17:17:27.751236 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 17:17:27.751337 tar[1999]: linux-arm64/helm Sep 4 17:17:27.782621 update_engine[1990]: I0904 17:17:27.775017 1990 main.cc:92] Flatcar Update Engine starting Sep 4 17:17:27.779903 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 17:17:27.787706 extend-filesystems[2016]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 17:17:27.787706 extend-filesystems[2016]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:17:27.787706 extend-filesystems[2016]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 17:17:27.812989 extend-filesystems[1979]: Resized filesystem in /dev/nvme0n1p9 Sep 4 17:17:27.808907 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:17:27.811305 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:17:27.821232 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:17:27.826114 update_engine[1990]: I0904 17:17:27.825533 1990 update_check_scheduler.cc:74] Next update check in 11m31s Sep 4 17:17:27.834519 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:17:27.896879 systemd-logind[1989]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:17:27.896983 systemd-logind[1989]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 17:17:27.897459 systemd-logind[1989]: New seat seat0. Sep 4 17:17:27.938546 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 4 17:17:27.964746 coreos-metadata[1976]: Sep 04 17:17:27.961 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:17:27.974409 coreos-metadata[1976]: Sep 04 17:17:27.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 17:17:27.977127 coreos-metadata[1976]: Sep 04 17:17:27.976 INFO Fetch successful Sep 4 17:17:27.977127 coreos-metadata[1976]: Sep 04 17:17:27.976 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 17:17:27.983847 coreos-metadata[1976]: Sep 04 17:17:27.981 INFO Fetch successful Sep 4 17:17:27.983847 coreos-metadata[1976]: Sep 04 17:17:27.982 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 17:17:27.997042 coreos-metadata[1976]: Sep 04 17:17:27.990 INFO Fetch successful Sep 4 17:17:27.997042 coreos-metadata[1976]: Sep 04 17:17:27.990 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 17:17:27.998216 coreos-metadata[1976]: Sep 04 17:17:27.997 INFO Fetch successful Sep 4 17:17:28.000603 coreos-metadata[1976]: Sep 04 17:17:28.000 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 17:17:28.005025 coreos-metadata[1976]: Sep 04 17:17:28.003 INFO Fetch failed with 404: resource not found Sep 4 17:17:28.005025 coreos-metadata[1976]: Sep 04 17:17:28.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 17:17:28.009228 coreos-metadata[1976]: Sep 04 17:17:28.008 INFO Fetch successful Sep 4 17:17:28.009228 coreos-metadata[1976]: Sep 04 17:17:28.008 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 17:17:28.011584 coreos-metadata[1976]: Sep 04 17:17:28.010 INFO Fetch successful Sep 4 17:17:28.011584 coreos-metadata[1976]: Sep 04 17:17:28.010 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 17:17:28.014343 
coreos-metadata[1976]: Sep 04 17:17:28.013 INFO Fetch successful Sep 4 17:17:28.014343 coreos-metadata[1976]: Sep 04 17:17:28.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 17:17:28.019273 coreos-metadata[1976]: Sep 04 17:17:28.016 INFO Fetch successful Sep 4 17:17:28.019273 coreos-metadata[1976]: Sep 04 17:17:28.017 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 17:17:28.025047 coreos-metadata[1976]: Sep 04 17:17:28.020 INFO Fetch successful Sep 4 17:17:28.054794 bash[2059]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:17:28.063692 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:17:28.092625 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1867) Sep 4 17:17:28.092653 systemd[1]: Starting sshkeys.service... Sep 4 17:17:28.218987 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 17:17:28.268223 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 17:17:28.297542 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:17:28.301218 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 4 17:17:28.453042 ntpd[1981]: bind(24) AF_INET6 fe80::475:a7ff:fe80:ca71%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:17:28.455540 ntpd[1981]: 4 Sep 17:17:28 ntpd[1981]: bind(24) AF_INET6 fe80::475:a7ff:fe80:ca71%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:17:28.455540 ntpd[1981]: 4 Sep 17:17:28 ntpd[1981]: unable to create socket on eth0 (6) for fe80::475:a7ff:fe80:ca71%2#123 Sep 4 17:17:28.455540 ntpd[1981]: 4 Sep 17:17:28 ntpd[1981]: failed to init interface for address fe80::475:a7ff:fe80:ca71%2 Sep 4 17:17:28.453134 ntpd[1981]: unable to create socket on eth0 (6) for fe80::475:a7ff:fe80:ca71%2#123 Sep 4 17:17:28.453173 ntpd[1981]: failed to init interface for address fe80::475:a7ff:fe80:ca71%2 Sep 4 17:17:28.454393 dbus-daemon[1977]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:17:28.460158 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 17:17:28.464803 dbus-daemon[1977]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2031 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:17:28.495226 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 17:17:28.555667 polkitd[2115]: Started polkitd version 121 Sep 4 17:17:28.588362 polkitd[2115]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:17:28.588519 polkitd[2115]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:17:28.596783 polkitd[2115]: Finished loading, compiling and executing 2 rules Sep 4 17:17:28.605335 systemd-networkd[1847]: eth0: Gained IPv6LL Sep 4 17:17:28.616658 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 4 17:17:28.614091 dbus-daemon[1977]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:17:28.619125 polkitd[2115]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:17:28.624064 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:17:28.632230 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:17:28.671124 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 17:17:28.683804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:28.699323 containerd[2021]: time="2024-09-04T17:17:28.697494504Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:17:28.703169 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:17:28.735474 coreos-metadata[2078]: Sep 04 17:17:28.734 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:17:28.735474 coreos-metadata[2078]: Sep 04 17:17:28.735 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 17:17:28.748199 coreos-metadata[2078]: Sep 04 17:17:28.736 INFO Fetch successful Sep 4 17:17:28.748199 coreos-metadata[2078]: Sep 04 17:17:28.736 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 17:17:28.748199 coreos-metadata[2078]: Sep 04 17:17:28.737 INFO Fetch successful Sep 4 17:17:28.749750 unknown[2078]: wrote ssh authorized keys file for user: core Sep 4 17:17:28.800152 systemd-hostnamed[2031]: Hostname set to (transient) Sep 4 17:17:28.801823 systemd-resolved[1800]: System hostname changed to 'ip-172-31-22-59'. 
Sep 4 17:17:28.866284 locksmithd[2035]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:17:28.945025 amazon-ssm-agent[2141]: Initializing new seelog logger Sep 4 17:17:28.939131 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:17:28.952393 amazon-ssm-agent[2141]: New Seelog Logger Creation Complete Sep 4 17:17:28.952393 amazon-ssm-agent[2141]: 2024/09/04 17:17:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:17:28.952393 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:17:28.952393 amazon-ssm-agent[2141]: 2024/09/04 17:17:28 processing appconfig overrides Sep 4 17:17:28.956672 amazon-ssm-agent[2141]: 2024/09/04 17:17:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:17:28.956672 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:17:28.956672 amazon-ssm-agent[2141]: 2024/09/04 17:17:28 processing appconfig overrides Sep 4 17:17:28.956672 amazon-ssm-agent[2141]: 2024/09/04 17:17:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:17:28.956672 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:17:28.956672 amazon-ssm-agent[2141]: 2024/09/04 17:17:28 processing appconfig overrides Sep 4 17:17:28.964694 amazon-ssm-agent[2141]: 2024-09-04 17:17:28 INFO Proxy environment variables: Sep 4 17:17:28.976897 amazon-ssm-agent[2141]: 2024/09/04 17:17:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:17:28.976897 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:17:28.976897 amazon-ssm-agent[2141]: 2024/09/04 17:17:28 processing appconfig overrides Sep 4 17:17:29.006059 containerd[2021]: time="2024-09-04T17:17:29.003407529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:17:29.014816 containerd[2021]: time="2024-09-04T17:17:29.014740966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:17:29.015793 containerd[2021]: time="2024-09-04T17:17:29.014970526Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:17:29.015793 containerd[2021]: time="2024-09-04T17:17:29.015013738Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:17:29.016998 containerd[2021]: time="2024-09-04T17:17:29.016906282Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:17:29.017203 containerd[2021]: time="2024-09-04T17:17:29.017172262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:29.019144 containerd[2021]: time="2024-09-04T17:17:29.019063846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:17:29.019345 containerd[2021]: time="2024-09-04T17:17:29.019314046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:29.026112 containerd[2021]: time="2024-09-04T17:17:29.024525094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:17:29.026112 containerd[2021]: time="2024-09-04T17:17:29.025073014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:29.026112 containerd[2021]: time="2024-09-04T17:17:29.025145650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:17:29.026112 containerd[2021]: time="2024-09-04T17:17:29.025175794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:29.026112 containerd[2021]: time="2024-09-04T17:17:29.025422382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:29.026112 containerd[2021]: time="2024-09-04T17:17:29.026043286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:29.031995 containerd[2021]: time="2024-09-04T17:17:29.028216102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:17:29.031995 containerd[2021]: time="2024-09-04T17:17:29.028301554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:17:29.037534 containerd[2021]: time="2024-09-04T17:17:29.036562006Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 4 17:17:29.037534 containerd[2021]: time="2024-09-04T17:17:29.036722614Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:17:29.055040 update-ssh-keys[2152]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:17:29.064126 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.067114594Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.067204846Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.067240246Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.067274986Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.067311790Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.067570606Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.067992922Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.068232034Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.068265814Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.068296438Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.068327602Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.068357206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.068391622Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:17:29.070927 containerd[2021]: time="2024-09-04T17:17:29.068424310Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068461486Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068496250Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068525314Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068552662Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068592610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068625262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068656318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068688574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068719006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068756854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068786218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068827990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068859754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.071652 containerd[2021]: time="2024-09-04T17:17:29.068895178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.078206 amazon-ssm-agent[2141]: 2024-09-04 17:17:28 INFO https_proxy: Sep 4 17:17:29.075022 systemd[1]: Finished sshkeys.service. Sep 4 17:17:29.079104 containerd[2021]: time="2024-09-04T17:17:29.068924038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 4 17:17:29.079438 containerd[2021]: time="2024-09-04T17:17:29.079304782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.083893 containerd[2021]: time="2024-09-04T17:17:29.079595206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.083893 containerd[2021]: time="2024-09-04T17:17:29.082015150Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:17:29.083893 containerd[2021]: time="2024-09-04T17:17:29.082110478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.083893 containerd[2021]: time="2024-09-04T17:17:29.082203670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.083893 containerd[2021]: time="2024-09-04T17:17:29.082259698Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:17:29.083893 containerd[2021]: time="2024-09-04T17:17:29.082452394Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:17:29.084305 containerd[2021]: time="2024-09-04T17:17:29.083838466Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:17:29.087969 containerd[2021]: time="2024-09-04T17:17:29.084415858Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:17:29.087969 containerd[2021]: time="2024-09-04T17:17:29.087012130Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:17:29.087969 containerd[2021]: time="2024-09-04T17:17:29.087058282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.087969 containerd[2021]: time="2024-09-04T17:17:29.087120538Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:17:29.087969 containerd[2021]: time="2024-09-04T17:17:29.087146590Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:17:29.087969 containerd[2021]: time="2024-09-04T17:17:29.087203362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:17:29.091041 containerd[2021]: time="2024-09-04T17:17:29.090836878Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:17:29.091395 containerd[2021]: time="2024-09-04T17:17:29.091353622Z" level=info msg="Connect containerd service" Sep 4 17:17:29.094345 containerd[2021]: time="2024-09-04T17:17:29.091556398Z" level=info msg="using legacy CRI server" Sep 4 17:17:29.094345 containerd[2021]: time="2024-09-04T17:17:29.091583362Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:17:29.094345 containerd[2021]: time="2024-09-04T17:17:29.094223422Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:17:29.104756 containerd[2021]: 
time="2024-09-04T17:17:29.104700370Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:17:29.108818 containerd[2021]: time="2024-09-04T17:17:29.108119242Z" level=info msg="Start subscribing containerd event" Sep 4 17:17:29.108818 containerd[2021]: time="2024-09-04T17:17:29.108311866Z" level=info msg="Start recovering state" Sep 4 17:17:29.108818 containerd[2021]: time="2024-09-04T17:17:29.108495718Z" level=info msg="Start event monitor" Sep 4 17:17:29.108818 containerd[2021]: time="2024-09-04T17:17:29.108529582Z" level=info msg="Start snapshots syncer" Sep 4 17:17:29.108818 containerd[2021]: time="2024-09-04T17:17:29.108557698Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:17:29.108818 containerd[2021]: time="2024-09-04T17:17:29.108578302Z" level=info msg="Start streaming server" Sep 4 17:17:29.120730 containerd[2021]: time="2024-09-04T17:17:29.116533030Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:17:29.120730 containerd[2021]: time="2024-09-04T17:17:29.116737366Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:17:29.120730 containerd[2021]: time="2024-09-04T17:17:29.116884186Z" level=info msg="containerd successfully booted in 0.452221s" Sep 4 17:17:29.117071 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 17:17:29.194043 amazon-ssm-agent[2141]: 2024-09-04 17:17:28 INFO http_proxy: Sep 4 17:17:29.293320 amazon-ssm-agent[2141]: 2024-09-04 17:17:28 INFO no_proxy: Sep 4 17:17:29.394533 amazon-ssm-agent[2141]: 2024-09-04 17:17:28 INFO Checking if agent identity type OnPrem can be assumed Sep 4 17:17:29.493113 amazon-ssm-agent[2141]: 2024-09-04 17:17:28 INFO Checking if agent identity type EC2 can be assumed Sep 4 17:17:29.594165 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO Agent will take identity from EC2 Sep 4 17:17:29.694008 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:17:29.792331 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:17:29.891114 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:17:29.977139 tar[1999]: linux-arm64/LICENSE Sep 4 17:17:29.977721 tar[1999]: linux-arm64/README.md Sep 4 17:17:29.990970 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 4 17:17:30.036237 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:17:30.090633 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 4 17:17:30.192987 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 17:17:30.193155 sshd_keygen[2005]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:17:30.291117 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 4 17:17:30.298395 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:17:30.321752 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:17:30.336704 systemd[1]: Started sshd@0-172.31.22.59:22-139.178.89.65:60922.service - OpenSSH per-connection server daemon (139.178.89.65:60922). 
Sep 4 17:17:30.378778 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:17:30.382376 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:17:30.396089 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO [Registrar] Starting registrar module Sep 4 17:17:30.402103 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:17:30.458349 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:17:30.473656 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:17:30.486634 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:17:30.491034 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:17:30.497277 amazon-ssm-agent[2141]: 2024-09-04 17:17:29 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 4 17:17:30.605506 sshd[2214]: Accepted publickey for core from 139.178.89.65 port 60922 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:30.609435 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:30.639359 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:17:30.649426 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:17:30.663354 systemd-logind[1989]: New session 1 of user core. Sep 4 17:17:30.702325 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:17:30.722519 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:17:30.748161 (systemd)[2225]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:17:30.795650 amazon-ssm-agent[2141]: 2024-09-04 17:17:30 INFO [EC2Identity] EC2 registration was successful. 
Sep 4 17:17:30.858149 amazon-ssm-agent[2141]: 2024-09-04 17:17:30 INFO [CredentialRefresher] credentialRefresher has started Sep 4 17:17:30.858763 amazon-ssm-agent[2141]: 2024-09-04 17:17:30 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 17:17:30.858763 amazon-ssm-agent[2141]: 2024-09-04 17:17:30 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 17:17:30.896448 amazon-ssm-agent[2141]: 2024-09-04 17:17:30 INFO [CredentialRefresher] Next credential rotation will be in 30.6249240623 minutes Sep 4 17:17:30.996460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:31.003332 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:17:31.014814 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:17:31.050582 systemd[2225]: Queued start job for default target default.target. Sep 4 17:17:31.060382 systemd[2225]: Created slice app.slice - User Application Slice. Sep 4 17:17:31.060543 systemd[2225]: Reached target paths.target - Paths. Sep 4 17:17:31.060591 systemd[2225]: Reached target timers.target - Timers. Sep 4 17:17:31.087325 systemd[2225]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:17:31.134094 systemd[2225]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:17:31.134477 systemd[2225]: Reached target sockets.target - Sockets. Sep 4 17:17:31.134514 systemd[2225]: Reached target basic.target - Basic System. Sep 4 17:17:31.134607 systemd[2225]: Reached target default.target - Main User Target. Sep 4 17:17:31.134675 systemd[2225]: Startup finished in 370ms. Sep 4 17:17:31.135310 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:17:31.147962 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 4 17:17:31.150645 systemd[1]: Startup finished in 1.335s (kernel) + 8.672s (initrd) + 9.720s (userspace) = 19.728s. Sep 4 17:17:31.327452 systemd[1]: Started sshd@1-172.31.22.59:22-139.178.89.65:35060.service - OpenSSH per-connection server daemon (139.178.89.65:35060). Sep 4 17:17:31.452747 ntpd[1981]: Listen normally on 7 eth0 [fe80::475:a7ff:fe80:ca71%2]:123 Sep 4 17:17:31.453509 ntpd[1981]: 4 Sep 17:17:31 ntpd[1981]: Listen normally on 7 eth0 [fe80::475:a7ff:fe80:ca71%2]:123 Sep 4 17:17:31.532196 sshd[2250]: Accepted publickey for core from 139.178.89.65 port 35060 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:31.536753 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:31.549892 systemd-logind[1989]: New session 2 of user core. Sep 4 17:17:31.558441 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:17:31.703374 sshd[2250]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:31.709550 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:17:31.714136 systemd[1]: sshd@1-172.31.22.59:22-139.178.89.65:35060.service: Deactivated successfully. Sep 4 17:17:31.720559 systemd-logind[1989]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:17:31.749618 systemd[1]: Started sshd@2-172.31.22.59:22-139.178.89.65:35062.service - OpenSSH per-connection server daemon (139.178.89.65:35062). Sep 4 17:17:31.752877 systemd-logind[1989]: Removed session 2. 
Sep 4 17:17:31.903124 amazon-ssm-agent[2141]: 2024-09-04 17:17:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 4 17:17:31.929259 sshd[2257]: Accepted publickey for core from 139.178.89.65 port 35062 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:31.932907 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:31.948299 systemd-logind[1989]: New session 3 of user core. Sep 4 17:17:31.951258 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:17:32.004306 amazon-ssm-agent[2141]: 2024-09-04 17:17:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2262) started Sep 4 17:17:32.076879 sshd[2257]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:32.088232 systemd[1]: sshd@2-172.31.22.59:22-139.178.89.65:35062.service: Deactivated successfully. Sep 4 17:17:32.095382 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:17:32.100723 systemd-logind[1989]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:17:32.105674 amazon-ssm-agent[2141]: 2024-09-04 17:17:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 4 17:17:32.121377 systemd[1]: Started sshd@3-172.31.22.59:22-139.178.89.65:35072.service - OpenSSH per-connection server daemon (139.178.89.65:35072). Sep 4 17:17:32.125112 systemd-logind[1989]: Removed session 3. 
Sep 4 17:17:32.278002 kubelet[2236]: E0904 17:17:32.277739 2236 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:17:32.285436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:17:32.286387 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:17:32.287766 systemd[1]: kubelet.service: Consumed 1.501s CPU time. Sep 4 17:17:32.325758 sshd[2273]: Accepted publickey for core from 139.178.89.65 port 35072 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:32.329030 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:32.339011 systemd-logind[1989]: New session 4 of user core. Sep 4 17:17:32.348300 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:17:32.480230 sshd[2273]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:32.491212 systemd-logind[1989]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:17:32.492124 systemd[1]: sshd@3-172.31.22.59:22-139.178.89.65:35072.service: Deactivated successfully. Sep 4 17:17:32.497959 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:17:32.517069 systemd-logind[1989]: Removed session 4. Sep 4 17:17:32.525746 systemd[1]: Started sshd@4-172.31.22.59:22-139.178.89.65:35078.service - OpenSSH per-connection server daemon (139.178.89.65:35078). Sep 4 17:17:32.713120 sshd[2284]: Accepted publickey for core from 139.178.89.65 port 35078 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:32.716663 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:32.728563 systemd-logind[1989]: New session 5 of user core. 
Sep 4 17:17:32.739531 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:17:32.865108 sudo[2287]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:17:32.865915 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:17:32.886795 sudo[2287]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:32.911609 sshd[2284]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:32.923025 systemd-logind[1989]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:17:32.924223 systemd[1]: sshd@4-172.31.22.59:22-139.178.89.65:35078.service: Deactivated successfully. Sep 4 17:17:32.930268 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:17:32.932457 systemd-logind[1989]: Removed session 5. Sep 4 17:17:32.952452 systemd[1]: Started sshd@5-172.31.22.59:22-139.178.89.65:35082.service - OpenSSH per-connection server daemon (139.178.89.65:35082). Sep 4 17:17:33.138778 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 35082 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:33.142320 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:33.153856 systemd-logind[1989]: New session 6 of user core. Sep 4 17:17:33.164469 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 17:17:33.278643 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:17:33.279426 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:17:33.288529 sudo[2296]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:33.301392 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:17:33.302249 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:17:33.327709 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:17:33.335153 auditctl[2299]: No rules Sep 4 17:17:33.336096 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:17:33.336550 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:17:33.349706 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:17:33.427691 augenrules[2317]: No rules Sep 4 17:17:33.432094 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:17:33.435518 sudo[2295]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:33.460666 sshd[2292]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:33.468343 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:17:33.471658 systemd[1]: sshd@5-172.31.22.59:22-139.178.89.65:35082.service: Deactivated successfully. Sep 4 17:17:33.482856 systemd-logind[1989]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:17:33.506023 systemd[1]: Started sshd@6-172.31.22.59:22-139.178.89.65:35090.service - OpenSSH per-connection server daemon (139.178.89.65:35090). Sep 4 17:17:33.508403 systemd-logind[1989]: Removed session 6. 
Sep 4 17:17:33.704371 sshd[2325]: Accepted publickey for core from 139.178.89.65 port 35090 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:33.707106 sshd[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:33.716141 systemd-logind[1989]: New session 7 of user core. Sep 4 17:17:33.726326 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:17:33.843244 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:17:33.844161 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:17:34.073431 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:17:34.074443 (dockerd)[2337]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:17:34.496010 dockerd[2337]: time="2024-09-04T17:17:34.495749021Z" level=info msg="Starting up" Sep 4 17:17:34.715800 dockerd[2337]: time="2024-09-04T17:17:34.714984594Z" level=info msg="Loading containers: start." Sep 4 17:17:34.897011 kernel: Initializing XFRM netlink socket Sep 4 17:17:34.928565 (udev-worker)[2359]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:17:35.013909 systemd-networkd[1847]: docker0: Link UP Sep 4 17:17:35.040219 dockerd[2337]: time="2024-09-04T17:17:35.040097762Z" level=info msg="Loading containers: done." Sep 4 17:17:35.078100 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3193357379-merged.mount: Deactivated successfully. 
Sep 4 17:17:35.081449 dockerd[2337]: time="2024-09-04T17:17:35.081259023Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:17:35.082625 dockerd[2337]: time="2024-09-04T17:17:35.081835298Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:17:35.082625 dockerd[2337]: time="2024-09-04T17:17:35.082190873Z" level=info msg="Daemon has completed initialization" Sep 4 17:17:35.138707 dockerd[2337]: time="2024-09-04T17:17:35.138564724Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:17:35.140122 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:17:36.255481 containerd[2021]: time="2024-09-04T17:17:36.254757467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:17:36.944190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213479942.mount: Deactivated successfully. 
Sep 4 17:17:38.774655 containerd[2021]: time="2024-09-04T17:17:38.774560131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:38.777486 containerd[2021]: time="2024-09-04T17:17:38.777424479Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=31599022" Sep 4 17:17:38.779604 containerd[2021]: time="2024-09-04T17:17:38.779536478Z" level=info msg="ImageCreate event name:\"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:38.790971 containerd[2021]: time="2024-09-04T17:17:38.789693729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:38.796666 containerd[2021]: time="2024-09-04T17:17:38.796595232Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"31595822\" in 2.541726892s" Sep 4 17:17:38.796834 containerd[2021]: time="2024-09-04T17:17:38.796671442Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\"" Sep 4 17:17:38.836104 containerd[2021]: time="2024-09-04T17:17:38.835932098Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 17:17:40.765766 containerd[2021]: time="2024-09-04T17:17:40.765698122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:40.770691 containerd[2021]: time="2024-09-04T17:17:40.770585233Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=29019496" Sep 4 17:17:40.771536 containerd[2021]: time="2024-09-04T17:17:40.771470727Z" level=info msg="ImageCreate event name:\"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:40.780979 containerd[2021]: time="2024-09-04T17:17:40.780801855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:40.785018 containerd[2021]: time="2024-09-04T17:17:40.784914129Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"30506763\" in 1.948889318s" Sep 4 17:17:40.785416 containerd[2021]: time="2024-09-04T17:17:40.785206387Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\"" Sep 4 17:17:40.836680 containerd[2021]: time="2024-09-04T17:17:40.836315772Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 17:17:42.076032 containerd[2021]: time="2024-09-04T17:17:42.075503088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:42.078851 containerd[2021]: time="2024-09-04T17:17:42.078772161Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=15533681" Sep 4 17:17:42.084211 containerd[2021]: time="2024-09-04T17:17:42.084089113Z" level=info msg="ImageCreate event name:\"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:42.092200 containerd[2021]: time="2024-09-04T17:17:42.092058118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:42.094824 containerd[2021]: time="2024-09-04T17:17:42.094758040Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"17020966\" in 1.258358861s" Sep 4 17:17:42.095217 containerd[2021]: time="2024-09-04T17:17:42.095029548Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\"" Sep 4 17:17:42.140862 containerd[2021]: time="2024-09-04T17:17:42.140431888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 17:17:42.470237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:17:42.481428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:42.909058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:17:42.919572 (kubelet)[2566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:17:43.030221 kubelet[2566]: E0904 17:17:43.029894 2566 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:17:43.039524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:17:43.040221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:17:43.669571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664701617.mount: Deactivated successfully. Sep 4 17:17:44.415651 containerd[2021]: time="2024-09-04T17:17:44.415544333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:44.418215 containerd[2021]: time="2024-09-04T17:17:44.418110138Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=24977930" Sep 4 17:17:44.419272 containerd[2021]: time="2024-09-04T17:17:44.419179678Z" level=info msg="ImageCreate event name:\"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:44.423433 containerd[2021]: time="2024-09-04T17:17:44.423353255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:44.425535 containerd[2021]: time="2024-09-04T17:17:44.425097794Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id 
\"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"24976949\" in 2.284586326s" Sep 4 17:17:44.425535 containerd[2021]: time="2024-09-04T17:17:44.425164900Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\"" Sep 4 17:17:44.467240 containerd[2021]: time="2024-09-04T17:17:44.467100242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:17:45.012663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728210570.mount: Deactivated successfully. Sep 4 17:17:45.020977 containerd[2021]: time="2024-09-04T17:17:45.019522187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:45.021485 containerd[2021]: time="2024-09-04T17:17:45.021436213Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Sep 4 17:17:45.024018 containerd[2021]: time="2024-09-04T17:17:45.023954390Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:45.028391 containerd[2021]: time="2024-09-04T17:17:45.028320158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:45.030385 containerd[2021]: time="2024-09-04T17:17:45.030331911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 563.163147ms" Sep 4 17:17:45.030545 containerd[2021]: time="2024-09-04T17:17:45.030514208Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:17:45.069821 containerd[2021]: time="2024-09-04T17:17:45.069753659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:17:45.691290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194222011.mount: Deactivated successfully. Sep 4 17:17:48.142232 containerd[2021]: time="2024-09-04T17:17:48.142076858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:48.162362 containerd[2021]: time="2024-09-04T17:17:48.162199995Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Sep 4 17:17:48.178000 containerd[2021]: time="2024-09-04T17:17:48.176195086Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:48.197705 containerd[2021]: time="2024-09-04T17:17:48.197530493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:48.203476 containerd[2021]: time="2024-09-04T17:17:48.203306780Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.133237991s" Sep 4 17:17:48.204569 
containerd[2021]: time="2024-09-04T17:17:48.203474672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Sep 4 17:17:48.249298 containerd[2021]: time="2024-09-04T17:17:48.248902858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep 4 17:17:48.843798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730174870.mount: Deactivated successfully.
Sep 4 17:17:49.390051 containerd[2021]: time="2024-09-04T17:17:49.389054649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:17:49.391191 containerd[2021]: time="2024-09-04T17:17:49.391131914Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462"
Sep 4 17:17:49.392811 containerd[2021]: time="2024-09-04T17:17:49.392753035Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:17:49.398339 containerd[2021]: time="2024-09-04T17:17:49.398260703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:17:49.402797 containerd[2021]: time="2024-09-04T17:17:49.402665416Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.153675086s"
Sep 4 17:17:49.402797 containerd[2021]: time="2024-09-04T17:17:49.402782250Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Sep 4 17:17:53.220399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:17:53.232680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:17:53.668503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:17:53.682745 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:17:53.813016 kubelet[2718]: E0904 17:17:53.810697 2718 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:17:53.817629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:17:53.818133 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:17:57.307376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:17:57.318623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:17:57.368565 systemd[1]: Reloading requested from client PID 2732 ('systemctl') (unit session-7.scope)...
Sep 4 17:17:57.368881 systemd[1]: Reloading...
Sep 4 17:17:57.711987 zram_generator::config[2774]: No configuration found.
Sep 4 17:17:58.021576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:17:58.226044 systemd[1]: Reloading finished in 856 ms.
Sep 4 17:17:58.343437 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 17:17:58.343689 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 17:17:58.344442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:17:58.354648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:17:58.720361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:17:58.734560 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:17:58.810401 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 4 17:17:58.839288 kubelet[2835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:17:58.839288 kubelet[2835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:17:58.839288 kubelet[2835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:17:58.843528 kubelet[2835]: I0904 17:17:58.843413 2835 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:17:59.906123 kubelet[2835]: I0904 17:17:59.905681 2835 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep 4 17:17:59.906123 kubelet[2835]: I0904 17:17:59.905731 2835 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:17:59.908001 kubelet[2835]: I0904 17:17:59.907156 2835 server.go:895] "Client rotation is on, will bootstrap in background"
Sep 4 17:17:59.944322 kubelet[2835]: I0904 17:17:59.944270 2835 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:17:59.948481 kubelet[2835]: E0904 17:17:59.948411 2835 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:17:59.968820 kubelet[2835]: W0904 17:17:59.968733 2835 machine.go:65] Cannot read vendor id correctly, set empty.
Sep 4 17:17:59.971081 kubelet[2835]: I0904 17:17:59.970893 2835 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /"
Sep 4 17:17:59.972093 kubelet[2835]: I0904 17:17:59.972045 2835 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:17:59.972454 kubelet[2835]: I0904 17:17:59.972398 2835 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:17:59.972655 kubelet[2835]: I0904 17:17:59.972468 2835 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:17:59.972655 kubelet[2835]: I0904 17:17:59.972491 2835 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:17:59.972766 kubelet[2835]: I0904 17:17:59.972691 2835 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:17:59.976163 kubelet[2835]: I0904 17:17:59.976064 2835 kubelet.go:393] "Attempting to sync node with API server"
Sep 4 17:17:59.976163 kubelet[2835]: I0904 17:17:59.976168 2835 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:17:59.978142 kubelet[2835]: I0904 17:17:59.976303 2835 kubelet.go:309] "Adding apiserver pod source"
Sep 4 17:17:59.978142 kubelet[2835]: I0904 17:17:59.976359 2835 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:17:59.979754 kubelet[2835]: W0904 17:17:59.979449 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.22.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:17:59.980175 kubelet[2835]: E0904 17:17:59.979816 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:17:59.981147 kubelet[2835]: I0904 17:17:59.980310 2835 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep 4 17:17:59.989063 kubelet[2835]: W0904 17:17:59.988238 2835 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:17:59.989655 kubelet[2835]: W0904 17:17:59.989517 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.22.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-59&limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:17:59.989883 kubelet[2835]: E0904 17:17:59.989662 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-59&limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:17:59.991042 kubelet[2835]: I0904 17:17:59.990933 2835 server.go:1232] "Started kubelet"
Sep 4 17:17:59.991724 kubelet[2835]: I0904 17:17:59.991484 2835 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:17:59.993694 kubelet[2835]: I0904 17:17:59.993617 2835 server.go:462] "Adding debug handlers to kubelet server"
Sep 4 17:17:59.997987 kubelet[2835]: I0904 17:17:59.997238 2835 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 4 17:17:59.998276 kubelet[2835]: I0904 17:17:59.998244 2835 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:17:59.998706 kubelet[2835]: I0904 17:17:59.998650 2835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:18:00.002719 kubelet[2835]: E0904 17:18:00.002583 2835 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep 4 17:18:00.004583 kubelet[2835]: E0904 17:18:00.003278 2835 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:18:00.009834 kubelet[2835]: E0904 17:17:59.999054 2835 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-22-59.17f21a13e55996e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-22-59", UID:"ip-172-31-22-59", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-22-59"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 17, 59, 990875874, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 17, 59, 990875874, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-22-59"}': 'Post "https://172.31.22.59:6443/api/v1/namespaces/default/events": dial tcp 172.31.22.59:6443: connect: connection refused'(may retry after sleeping)
Sep 4 17:18:00.012799 kubelet[2835]: E0904 17:18:00.012746 2835 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-22-59\" not found"
Sep 4 17:18:00.013377 kubelet[2835]: I0904 17:18:00.013123 2835 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:18:00.013377 kubelet[2835]: I0904 17:18:00.013315 2835 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:18:00.014281 kubelet[2835]: I0904 17:18:00.013619 2835 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:18:00.017069 kubelet[2835]: E0904 17:18:00.016670 2835 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-59?timeout=10s\": dial tcp 172.31.22.59:6443: connect: connection refused" interval="200ms"
Sep 4 17:18:00.018685 kubelet[2835]: W0904 17:18:00.018435 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.22.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:00.018914 kubelet[2835]: E0904 17:18:00.018881 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:00.063189 kubelet[2835]: I0904 17:18:00.062765 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:18:00.066245 kubelet[2835]: I0904 17:18:00.066163 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:18:00.066245 kubelet[2835]: I0904 17:18:00.066226 2835 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:18:00.066519 kubelet[2835]: I0904 17:18:00.066274 2835 kubelet.go:2303] "Starting kubelet main sync loop"
Sep 4 17:18:00.066519 kubelet[2835]: E0904 17:18:00.066390 2835 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:18:00.087193 kubelet[2835]: W0904 17:18:00.085381 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.22.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:00.087193 kubelet[2835]: E0904 17:18:00.085467 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:00.112023 kubelet[2835]: I0904 17:18:00.111914 2835 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:18:00.112481 kubelet[2835]: I0904 17:18:00.112454 2835 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:18:00.112699 kubelet[2835]: I0904 17:18:00.112674 2835 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:18:00.116853 kubelet[2835]: I0904 17:18:00.116808 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-59"
Sep 4 17:18:00.118798 kubelet[2835]: E0904 17:18:00.118025 2835 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.22.59:6443/api/v1/nodes\": dial tcp 172.31.22.59:6443: connect: connection refused" node="ip-172-31-22-59"
Sep 4 17:18:00.122682 kubelet[2835]: I0904 17:18:00.122183 2835
policy_none.go:49] "None policy: Start"
Sep 4 17:18:00.124287 kubelet[2835]: I0904 17:18:00.124221 2835 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 4 17:18:00.125657 kubelet[2835]: I0904 17:18:00.124913 2835 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:18:00.138897 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 17:18:00.161856 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 17:18:00.167593 kubelet[2835]: E0904 17:18:00.167494 2835 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:18:00.174519 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 17:18:00.188821 kubelet[2835]: I0904 17:18:00.188747 2835 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:18:00.189538 kubelet[2835]: I0904 17:18:00.189473 2835 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:18:00.190870 kubelet[2835]: E0904 17:18:00.190811 2835 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-59\" not found"
Sep 4 17:18:00.217711 kubelet[2835]: E0904 17:18:00.217661 2835 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-59?timeout=10s\": dial tcp 172.31.22.59:6443: connect: connection refused" interval="400ms"
Sep 4 17:18:00.320933 kubelet[2835]: I0904 17:18:00.320897 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-59"
Sep 4 17:18:00.321654 kubelet[2835]: E0904 17:18:00.321613 2835 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.22.59:6443/api/v1/nodes\": dial tcp 172.31.22.59:6443: connect: connection refused" node="ip-172-31-22-59"
Sep 4 17:18:00.367986 kubelet[2835]: I0904 17:18:00.367888 2835 topology_manager.go:215] "Topology Admit Handler" podUID="d95260335d229358f6ac69b20aa03751" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-59"
Sep 4 17:18:00.371298 kubelet[2835]: I0904 17:18:00.370927 2835 topology_manager.go:215] "Topology Admit Handler" podUID="9720d97f4393110f9016767cfdae7b27" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-59"
Sep 4 17:18:00.373739 kubelet[2835]: I0904 17:18:00.373579 2835 topology_manager.go:215] "Topology Admit Handler" podUID="5f74bd16fbaf2f81b83b3879791f25fb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-59"
Sep 4 17:18:00.390264 systemd[1]: Created slice kubepods-burstable-podd95260335d229358f6ac69b20aa03751.slice - libcontainer container kubepods-burstable-podd95260335d229358f6ac69b20aa03751.slice.
Sep 4 17:18:00.410835 systemd[1]: Created slice kubepods-burstable-pod9720d97f4393110f9016767cfdae7b27.slice - libcontainer container kubepods-burstable-pod9720d97f4393110f9016767cfdae7b27.slice.
Sep 4 17:18:00.415031 kubelet[2835]: I0904 17:18:00.414575 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d95260335d229358f6ac69b20aa03751-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-59\" (UID: \"d95260335d229358f6ac69b20aa03751\") " pod="kube-system/kube-apiserver-ip-172-31-22-59"
Sep 4 17:18:00.415031 kubelet[2835]: I0904 17:18:00.414663 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " pod="kube-system/kube-controller-manager-ip-172-31-22-59"
Sep 4 17:18:00.415031 kubelet[2835]: I0904 17:18:00.414715 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f74bd16fbaf2f81b83b3879791f25fb-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-59\" (UID: \"5f74bd16fbaf2f81b83b3879791f25fb\") " pod="kube-system/kube-scheduler-ip-172-31-22-59"
Sep 4 17:18:00.415031 kubelet[2835]: I0904 17:18:00.414763 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " pod="kube-system/kube-controller-manager-ip-172-31-22-59"
Sep 4 17:18:00.415031 kubelet[2835]: I0904 17:18:00.414824 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " pod="kube-system/kube-controller-manager-ip-172-31-22-59"
Sep 4 17:18:00.415366 kubelet[2835]: I0904 17:18:00.414871 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " pod="kube-system/kube-controller-manager-ip-172-31-22-59"
Sep 4 17:18:00.415366 kubelet[2835]: I0904 17:18:00.414913 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d95260335d229358f6ac69b20aa03751-ca-certs\") pod \"kube-apiserver-ip-172-31-22-59\" (UID: \"d95260335d229358f6ac69b20aa03751\") " pod="kube-system/kube-apiserver-ip-172-31-22-59"
Sep 4 17:18:00.415366 kubelet[2835]: I0904 17:18:00.414991 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d95260335d229358f6ac69b20aa03751-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-59\" (UID: \"d95260335d229358f6ac69b20aa03751\") " pod="kube-system/kube-apiserver-ip-172-31-22-59"
Sep 4 17:18:00.415366 kubelet[2835]: I0904 17:18:00.415044 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " pod="kube-system/kube-controller-manager-ip-172-31-22-59"
Sep 4 17:18:00.427531 systemd[1]: Created slice kubepods-burstable-pod5f74bd16fbaf2f81b83b3879791f25fb.slice - libcontainer container kubepods-burstable-pod5f74bd16fbaf2f81b83b3879791f25fb.slice.
Sep 4 17:18:00.619507 kubelet[2835]: E0904 17:18:00.619418 2835 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-59?timeout=10s\": dial tcp 172.31.22.59:6443: connect: connection refused" interval="800ms"
Sep 4 17:18:00.706764 containerd[2021]: time="2024-09-04T17:18:00.706209802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-59,Uid:d95260335d229358f6ac69b20aa03751,Namespace:kube-system,Attempt:0,}"
Sep 4 17:18:00.722563 containerd[2021]: time="2024-09-04T17:18:00.722205516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-59,Uid:9720d97f4393110f9016767cfdae7b27,Namespace:kube-system,Attempt:0,}"
Sep 4 17:18:00.724892 kubelet[2835]: I0904 17:18:00.724386 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-59"
Sep 4 17:18:00.724892 kubelet[2835]: E0904 17:18:00.724846 2835 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.22.59:6443/api/v1/nodes\": dial tcp 172.31.22.59:6443: connect: connection refused" node="ip-172-31-22-59"
Sep 4 17:18:00.735326 containerd[2021]: time="2024-09-04T17:18:00.735262629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-59,Uid:5f74bd16fbaf2f81b83b3879791f25fb,Namespace:kube-system,Attempt:0,}"
Sep 4 17:18:00.803985 kubelet[2835]: W0904 17:18:00.803337 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.22.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:00.803985 kubelet[2835]: E0904 17:18:00.803428 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:00.889652 kubelet[2835]: W0904 17:18:00.889497 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.22.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-59&limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:00.889925 kubelet[2835]: E0904 17:18:00.889680 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-59&limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:00.997803 kubelet[2835]: W0904 17:18:00.997555 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.22.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:00.997803 kubelet[2835]: E0904 17:18:00.997632 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:01.325603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700395591.mount: Deactivated successfully.
Sep 4 17:18:01.338798 containerd[2021]: time="2024-09-04T17:18:01.338661280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:18:01.341293 containerd[2021]: time="2024-09-04T17:18:01.341135498Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Sep 4 17:18:01.343192 containerd[2021]: time="2024-09-04T17:18:01.342851816Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:18:01.346182 containerd[2021]: time="2024-09-04T17:18:01.345867995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:18:01.348741 containerd[2021]: time="2024-09-04T17:18:01.348492222Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:18:01.351293 containerd[2021]: time="2024-09-04T17:18:01.351084857Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:18:01.353330 containerd[2021]: time="2024-09-04T17:18:01.353187070Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:18:01.358249 containerd[2021]: time="2024-09-04T17:18:01.357273437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 634.948749ms"
Sep 4 17:18:01.360974 containerd[2021]: time="2024-09-04T17:18:01.360816237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 654.396504ms"
Sep 4 17:18:01.363554 containerd[2021]: time="2024-09-04T17:18:01.363205070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:18:01.371804 containerd[2021]: time="2024-09-04T17:18:01.371647191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 636.267045ms"
Sep 4 17:18:01.421923 kubelet[2835]: E0904 17:18:01.421856 2835 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-59?timeout=10s\": dial tcp 172.31.22.59:6443: connect: connection refused" interval="1.6s"
Sep 4 17:18:01.441561 kubelet[2835]: W0904 17:18:01.439638 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.22.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused
Sep 4 17:18:01.441561 kubelet[2835]: E0904 17:18:01.439787 2835
reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.59:6443: connect: connection refused Sep 4 17:18:01.530299 kubelet[2835]: I0904 17:18:01.529671 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-59" Sep 4 17:18:01.530299 kubelet[2835]: E0904 17:18:01.530236 2835 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.22.59:6443/api/v1/nodes\": dial tcp 172.31.22.59:6443: connect: connection refused" node="ip-172-31-22-59" Sep 4 17:18:01.611319 containerd[2021]: time="2024-09-04T17:18:01.610823108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:01.612204 containerd[2021]: time="2024-09-04T17:18:01.611894952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:01.613013 containerd[2021]: time="2024-09-04T17:18:01.612295323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:01.615498 containerd[2021]: time="2024-09-04T17:18:01.615087143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:01.621099 containerd[2021]: time="2024-09-04T17:18:01.619049900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:01.621099 containerd[2021]: time="2024-09-04T17:18:01.619276059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:01.621099 containerd[2021]: time="2024-09-04T17:18:01.619376461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:01.621099 containerd[2021]: time="2024-09-04T17:18:01.619845414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:01.628499 containerd[2021]: time="2024-09-04T17:18:01.627251614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:01.628499 containerd[2021]: time="2024-09-04T17:18:01.627495859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:01.628499 containerd[2021]: time="2024-09-04T17:18:01.627536171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:01.628499 containerd[2021]: time="2024-09-04T17:18:01.628079176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:01.668755 systemd[1]: Started cri-containerd-2b31cc36cd4fa84d2ff8de9cc8523c43f5e28300c9af04bd7db5436c2deaa1cf.scope - libcontainer container 2b31cc36cd4fa84d2ff8de9cc8523c43f5e28300c9af04bd7db5436c2deaa1cf. Sep 4 17:18:01.706537 systemd[1]: Started cri-containerd-0b5076d738450c697e0bbc7a146fe0437444812e6f5b66d003ed393bf7b82101.scope - libcontainer container 0b5076d738450c697e0bbc7a146fe0437444812e6f5b66d003ed393bf7b82101. Sep 4 17:18:01.731010 systemd[1]: Started cri-containerd-337fe6442552df1c7205d61a6f1d4fabf0473f3538e2f835b4ebb54c88728d0d.scope - libcontainer container 337fe6442552df1c7205d61a6f1d4fabf0473f3538e2f835b4ebb54c88728d0d. 
Sep 4 17:18:01.871353 containerd[2021]: time="2024-09-04T17:18:01.870827469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-59,Uid:9720d97f4393110f9016767cfdae7b27,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b5076d738450c697e0bbc7a146fe0437444812e6f5b66d003ed393bf7b82101\"" Sep 4 17:18:01.892900 containerd[2021]: time="2024-09-04T17:18:01.892307584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-59,Uid:d95260335d229358f6ac69b20aa03751,Namespace:kube-system,Attempt:0,} returns sandbox id \"337fe6442552df1c7205d61a6f1d4fabf0473f3538e2f835b4ebb54c88728d0d\"" Sep 4 17:18:01.893118 containerd[2021]: time="2024-09-04T17:18:01.892882636Z" level=info msg="CreateContainer within sandbox \"0b5076d738450c697e0bbc7a146fe0437444812e6f5b66d003ed393bf7b82101\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:18:01.901691 containerd[2021]: time="2024-09-04T17:18:01.900149741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-59,Uid:5f74bd16fbaf2f81b83b3879791f25fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b31cc36cd4fa84d2ff8de9cc8523c43f5e28300c9af04bd7db5436c2deaa1cf\"" Sep 4 17:18:01.906916 containerd[2021]: time="2024-09-04T17:18:01.906816102Z" level=info msg="CreateContainer within sandbox \"337fe6442552df1c7205d61a6f1d4fabf0473f3538e2f835b4ebb54c88728d0d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:18:01.910015 containerd[2021]: time="2024-09-04T17:18:01.909767742Z" level=info msg="CreateContainer within sandbox \"2b31cc36cd4fa84d2ff8de9cc8523c43f5e28300c9af04bd7db5436c2deaa1cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:18:01.930691 containerd[2021]: time="2024-09-04T17:18:01.930616264Z" level=info msg="CreateContainer within sandbox \"0b5076d738450c697e0bbc7a146fe0437444812e6f5b66d003ed393bf7b82101\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"46596331eea43843af36eefde224c1bfee47af5361b97920db35e1e37c57d68c\"" Sep 4 17:18:01.932566 containerd[2021]: time="2024-09-04T17:18:01.931537667Z" level=info msg="StartContainer for \"46596331eea43843af36eefde224c1bfee47af5361b97920db35e1e37c57d68c\"" Sep 4 17:18:01.940673 containerd[2021]: time="2024-09-04T17:18:01.940577593Z" level=info msg="CreateContainer within sandbox \"337fe6442552df1c7205d61a6f1d4fabf0473f3538e2f835b4ebb54c88728d0d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d0962b60fe9bef917af9f2b17572d50da53555c72fb2653b733cad8c21d010f0\"" Sep 4 17:18:01.943681 containerd[2021]: time="2024-09-04T17:18:01.943432573Z" level=info msg="StartContainer for \"d0962b60fe9bef917af9f2b17572d50da53555c72fb2653b733cad8c21d010f0\"" Sep 4 17:18:01.946897 containerd[2021]: time="2024-09-04T17:18:01.946314575Z" level=info msg="CreateContainer within sandbox \"2b31cc36cd4fa84d2ff8de9cc8523c43f5e28300c9af04bd7db5436c2deaa1cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ed66f8330a82612f5157c9075a218d691f4000b8134b4419ed650b783a2208c\"" Sep 4 17:18:01.949010 containerd[2021]: time="2024-09-04T17:18:01.947593771Z" level=info msg="StartContainer for \"6ed66f8330a82612f5157c9075a218d691f4000b8134b4419ed650b783a2208c\"" Sep 4 17:18:01.978164 kubelet[2835]: E0904 17:18:01.978093 2835 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.59:6443: connect: connection refused Sep 4 17:18:02.005621 systemd[1]: Started cri-containerd-46596331eea43843af36eefde224c1bfee47af5361b97920db35e1e37c57d68c.scope - libcontainer container 46596331eea43843af36eefde224c1bfee47af5361b97920db35e1e37c57d68c. 
Sep 4 17:18:02.066251 systemd[1]: Started cri-containerd-d0962b60fe9bef917af9f2b17572d50da53555c72fb2653b733cad8c21d010f0.scope - libcontainer container d0962b60fe9bef917af9f2b17572d50da53555c72fb2653b733cad8c21d010f0. Sep 4 17:18:02.089620 systemd[1]: Started cri-containerd-6ed66f8330a82612f5157c9075a218d691f4000b8134b4419ed650b783a2208c.scope - libcontainer container 6ed66f8330a82612f5157c9075a218d691f4000b8134b4419ed650b783a2208c. Sep 4 17:18:02.213315 containerd[2021]: time="2024-09-04T17:18:02.209658390Z" level=info msg="StartContainer for \"46596331eea43843af36eefde224c1bfee47af5361b97920db35e1e37c57d68c\" returns successfully" Sep 4 17:18:02.249767 containerd[2021]: time="2024-09-04T17:18:02.249686494Z" level=info msg="StartContainer for \"d0962b60fe9bef917af9f2b17572d50da53555c72fb2653b733cad8c21d010f0\" returns successfully" Sep 4 17:18:02.268615 containerd[2021]: time="2024-09-04T17:18:02.268540139Z" level=info msg="StartContainer for \"6ed66f8330a82612f5157c9075a218d691f4000b8134b4419ed650b783a2208c\" returns successfully" Sep 4 17:18:03.139766 kubelet[2835]: I0904 17:18:03.139714 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-59" Sep 4 17:18:06.522740 kubelet[2835]: E0904 17:18:06.522599 2835 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-59\" not found" node="ip-172-31-22-59" Sep 4 17:18:06.530969 kubelet[2835]: I0904 17:18:06.529697 2835 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-22-59" Sep 4 17:18:06.983908 kubelet[2835]: I0904 17:18:06.982796 2835 apiserver.go:52] "Watching apiserver" Sep 4 17:18:07.014692 kubelet[2835]: I0904 17:18:07.014523 2835 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:18:09.425296 systemd[1]: Reloading requested from client PID 3111 ('systemctl') (unit session-7.scope)... Sep 4 17:18:09.426021 systemd[1]: Reloading... 
Sep 4 17:18:09.680992 zram_generator::config[3158]: No configuration found. Sep 4 17:18:09.962618 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:18:10.176808 systemd[1]: Reloading finished in 749 ms. Sep 4 17:18:10.299643 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:10.301092 kubelet[2835]: I0904 17:18:10.299663 2835 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:18:10.324379 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:18:10.326131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:10.326326 systemd[1]: kubelet.service: Consumed 2.223s CPU time, 114.2M memory peak, 0B memory swap peak. Sep 4 17:18:10.340873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:10.743684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:10.755998 (kubelet)[3213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:18:10.897989 kubelet[3213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:18:10.897989 kubelet[3213]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:18:10.897989 kubelet[3213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:18:10.900153 kubelet[3213]: I0904 17:18:10.898216 3213 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:18:10.912708 kubelet[3213]: I0904 17:18:10.911778 3213 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:18:10.912708 kubelet[3213]: I0904 17:18:10.912070 3213 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:18:10.912708 kubelet[3213]: I0904 17:18:10.912674 3213 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:18:10.917737 kubelet[3213]: I0904 17:18:10.917669 3213 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:18:10.921199 kubelet[3213]: I0904 17:18:10.920139 3213 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:18:10.941558 kubelet[3213]: W0904 17:18:10.941492 3213 machine.go:65] Cannot read vendor id correctly, set empty. Sep 4 17:18:10.943510 kubelet[3213]: I0904 17:18:10.943422 3213 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:18:10.944229 kubelet[3213]: I0904 17:18:10.944172 3213 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:18:10.944631 kubelet[3213]: I0904 17:18:10.944524 3213 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:18:10.944631 kubelet[3213]: I0904 17:18:10.944620 3213 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:18:10.944631 kubelet[3213]: I0904 17:18:10.944644 3213 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:18:10.945840 kubelet[3213]: I0904 
17:18:10.944740 3213 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:18:10.945840 kubelet[3213]: I0904 17:18:10.945017 3213 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:18:10.945840 kubelet[3213]: I0904 17:18:10.945639 3213 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:18:10.945840 kubelet[3213]: I0904 17:18:10.945788 3213 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:18:10.945840 kubelet[3213]: I0904 17:18:10.945833 3213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:18:10.954980 kubelet[3213]: I0904 17:18:10.948636 3213 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:18:10.963017 kubelet[3213]: I0904 17:18:10.962830 3213 server.go:1232] "Started kubelet" Sep 4 17:18:10.982713 kubelet[3213]: I0904 17:18:10.982344 3213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:18:10.992440 kubelet[3213]: I0904 17:18:10.992295 3213 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:18:11.004005 kubelet[3213]: I0904 17:18:11.002165 3213 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:18:11.006684 kubelet[3213]: I0904 17:18:11.006630 3213 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:18:11.011398 kubelet[3213]: E0904 17:18:11.011328 3213 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:18:11.011398 kubelet[3213]: E0904 17:18:11.011413 3213 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:18:11.018097 kubelet[3213]: I0904 17:18:11.016662 3213 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:18:11.018097 kubelet[3213]: I0904 17:18:11.017240 3213 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:18:11.030121 kubelet[3213]: I0904 17:18:11.016693 3213 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:18:11.041544 kubelet[3213]: I0904 17:18:11.041476 3213 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:18:11.128227 kubelet[3213]: I0904 17:18:11.128172 3213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:18:11.153232 kubelet[3213]: I0904 17:18:11.153179 3213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:18:11.159499 kubelet[3213]: I0904 17:18:11.159041 3213 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:18:11.159883 kubelet[3213]: I0904 17:18:11.159582 3213 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:18:11.165465 kubelet[3213]: E0904 17:18:11.162222 3213 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:18:11.165636 kubelet[3213]: I0904 17:18:11.165072 3213 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-59" Sep 4 17:18:11.229359 kubelet[3213]: I0904 17:18:11.229219 3213 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-22-59" Sep 4 17:18:11.230399 kubelet[3213]: I0904 17:18:11.230276 3213 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-22-59" Sep 4 17:18:11.266595 kubelet[3213]: E0904 17:18:11.266377 3213 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check 
may not have completed yet" Sep 4 17:18:11.373020 kubelet[3213]: I0904 17:18:11.372883 3213 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:18:11.373020 kubelet[3213]: I0904 17:18:11.372921 3213 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:18:11.373020 kubelet[3213]: I0904 17:18:11.372999 3213 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:18:11.373379 kubelet[3213]: I0904 17:18:11.373253 3213 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:18:11.373379 kubelet[3213]: I0904 17:18:11.373291 3213 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:18:11.373379 kubelet[3213]: I0904 17:18:11.373308 3213 policy_none.go:49] "None policy: Start" Sep 4 17:18:11.375123 kubelet[3213]: I0904 17:18:11.374366 3213 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:18:11.375123 kubelet[3213]: I0904 17:18:11.374405 3213 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:18:11.375123 kubelet[3213]: I0904 17:18:11.374712 3213 state_mem.go:75] "Updated machine memory state" Sep 4 17:18:11.389367 kubelet[3213]: I0904 17:18:11.389317 3213 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:18:11.390605 kubelet[3213]: I0904 17:18:11.390544 3213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:18:11.467850 kubelet[3213]: I0904 17:18:11.467699 3213 topology_manager.go:215] "Topology Admit Handler" podUID="d95260335d229358f6ac69b20aa03751" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-59" Sep 4 17:18:11.468271 kubelet[3213]: I0904 17:18:11.468167 3213 topology_manager.go:215] "Topology Admit Handler" podUID="9720d97f4393110f9016767cfdae7b27" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-59" Sep 4 17:18:11.468364 kubelet[3213]: I0904 17:18:11.468277 3213 topology_manager.go:215] "Topology Admit Handler" 
podUID="5f74bd16fbaf2f81b83b3879791f25fb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-59" Sep 4 17:18:11.502983 kubelet[3213]: E0904 17:18:11.502542 3213 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-22-59\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-59" Sep 4 17:18:11.545363 kubelet[3213]: I0904 17:18:11.545218 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d95260335d229358f6ac69b20aa03751-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-59\" (UID: \"d95260335d229358f6ac69b20aa03751\") " pod="kube-system/kube-apiserver-ip-172-31-22-59" Sep 4 17:18:11.545363 kubelet[3213]: I0904 17:18:11.545303 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " pod="kube-system/kube-controller-manager-ip-172-31-22-59" Sep 4 17:18:11.545543 kubelet[3213]: I0904 17:18:11.545371 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " pod="kube-system/kube-controller-manager-ip-172-31-22-59" Sep 4 17:18:11.545543 kubelet[3213]: I0904 17:18:11.545444 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " 
pod="kube-system/kube-controller-manager-ip-172-31-22-59" Sep 4 17:18:11.545543 kubelet[3213]: I0904 17:18:11.545492 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f74bd16fbaf2f81b83b3879791f25fb-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-59\" (UID: \"5f74bd16fbaf2f81b83b3879791f25fb\") " pod="kube-system/kube-scheduler-ip-172-31-22-59" Sep 4 17:18:11.545543 kubelet[3213]: I0904 17:18:11.545535 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d95260335d229358f6ac69b20aa03751-ca-certs\") pod \"kube-apiserver-ip-172-31-22-59\" (UID: \"d95260335d229358f6ac69b20aa03751\") " pod="kube-system/kube-apiserver-ip-172-31-22-59" Sep 4 17:18:11.546396 kubelet[3213]: I0904 17:18:11.545937 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " pod="kube-system/kube-controller-manager-ip-172-31-22-59" Sep 4 17:18:11.546396 kubelet[3213]: I0904 17:18:11.546076 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9720d97f4393110f9016767cfdae7b27-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-59\" (UID: \"9720d97f4393110f9016767cfdae7b27\") " pod="kube-system/kube-controller-manager-ip-172-31-22-59" Sep 4 17:18:11.546396 kubelet[3213]: I0904 17:18:11.546125 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d95260335d229358f6ac69b20aa03751-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-59\" (UID: 
\"d95260335d229358f6ac69b20aa03751\") " pod="kube-system/kube-apiserver-ip-172-31-22-59" Sep 4 17:18:11.949627 kubelet[3213]: I0904 17:18:11.949438 3213 apiserver.go:52] "Watching apiserver" Sep 4 17:18:12.017029 kubelet[3213]: I0904 17:18:12.016936 3213 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:18:12.172355 kubelet[3213]: I0904 17:18:12.172286 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-59" podStartSLOduration=1.172197183 podCreationTimestamp="2024-09-04 17:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:12.153886782 +0000 UTC m=+1.386814279" watchObservedRunningTime="2024-09-04 17:18:12.172197183 +0000 UTC m=+1.405124692" Sep 4 17:18:12.210736 kubelet[3213]: I0904 17:18:12.210350 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-59" podStartSLOduration=1.210264495 podCreationTimestamp="2024-09-04 17:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:12.184164292 +0000 UTC m=+1.417091813" watchObservedRunningTime="2024-09-04 17:18:12.210264495 +0000 UTC m=+1.443191992" Sep 4 17:18:13.425990 update_engine[1990]: I0904 17:18:13.424729 1990 update_attempter.cc:509] Updating boot flags... 
Sep 4 17:18:13.653180 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3265) Sep 4 17:18:14.259413 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3256) Sep 4 17:18:14.947037 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3256) Sep 4 17:18:20.318594 sudo[2328]: pam_unix(sudo:session): session closed for user root Sep 4 17:18:20.343206 sshd[2325]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:20.349518 systemd[1]: sshd@6-172.31.22.59:22-139.178.89.65:35090.service: Deactivated successfully. Sep 4 17:18:20.353650 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:18:20.354341 systemd[1]: session-7.scope: Consumed 12.112s CPU time, 136.4M memory peak, 0B memory swap peak. Sep 4 17:18:20.357826 systemd-logind[1989]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:18:20.361571 systemd-logind[1989]: Removed session 7. Sep 4 17:18:24.043005 kubelet[3213]: I0904 17:18:24.041707 3213 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:18:24.043711 containerd[2021]: time="2024-09-04T17:18:24.043282538Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:18:24.044447 kubelet[3213]: I0904 17:18:24.043770 3213 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:18:24.230065 kubelet[3213]: I0904 17:18:24.229984 3213 topology_manager.go:215] "Topology Admit Handler" podUID="bd81a1ec-acb2-4298-b8c4-1083d66ab89c" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-jfd2h" Sep 4 17:18:24.252045 systemd[1]: Created slice kubepods-besteffort-podbd81a1ec_acb2_4298_b8c4_1083d66ab89c.slice - libcontainer container kubepods-besteffort-podbd81a1ec_acb2_4298_b8c4_1083d66ab89c.slice. 
Sep 4 17:18:24.344330 kubelet[3213]: I0904 17:18:24.342693 3213 topology_manager.go:215] "Topology Admit Handler" podUID="85a32971-5f12-4a19-9099-28062418db0b" podNamespace="kube-system" podName="kube-proxy-xtvj5" Sep 4 17:18:24.344330 kubelet[3213]: I0904 17:18:24.342991 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd81a1ec-acb2-4298-b8c4-1083d66ab89c-var-lib-calico\") pod \"tigera-operator-5d56685c77-jfd2h\" (UID: \"bd81a1ec-acb2-4298-b8c4-1083d66ab89c\") " pod="tigera-operator/tigera-operator-5d56685c77-jfd2h" Sep 4 17:18:24.344330 kubelet[3213]: I0904 17:18:24.343169 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpq4l\" (UniqueName: \"kubernetes.io/projected/bd81a1ec-acb2-4298-b8c4-1083d66ab89c-kube-api-access-rpq4l\") pod \"tigera-operator-5d56685c77-jfd2h\" (UID: \"bd81a1ec-acb2-4298-b8c4-1083d66ab89c\") " pod="tigera-operator/tigera-operator-5d56685c77-jfd2h" Sep 4 17:18:24.373288 systemd[1]: Created slice kubepods-besteffort-pod85a32971_5f12_4a19_9099_28062418db0b.slice - libcontainer container kubepods-besteffort-pod85a32971_5f12_4a19_9099_28062418db0b.slice. 
Sep 4 17:18:24.443832 kubelet[3213]: I0904 17:18:24.443773 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lszk\" (UniqueName: \"kubernetes.io/projected/85a32971-5f12-4a19-9099-28062418db0b-kube-api-access-6lszk\") pod \"kube-proxy-xtvj5\" (UID: \"85a32971-5f12-4a19-9099-28062418db0b\") " pod="kube-system/kube-proxy-xtvj5"
Sep 4 17:18:24.444028 kubelet[3213]: I0904 17:18:24.443855 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85a32971-5f12-4a19-9099-28062418db0b-xtables-lock\") pod \"kube-proxy-xtvj5\" (UID: \"85a32971-5f12-4a19-9099-28062418db0b\") " pod="kube-system/kube-proxy-xtvj5"
Sep 4 17:18:24.444028 kubelet[3213]: I0904 17:18:24.443901 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85a32971-5f12-4a19-9099-28062418db0b-lib-modules\") pod \"kube-proxy-xtvj5\" (UID: \"85a32971-5f12-4a19-9099-28062418db0b\") " pod="kube-system/kube-proxy-xtvj5"
Sep 4 17:18:24.444028 kubelet[3213]: I0904 17:18:24.443976 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85a32971-5f12-4a19-9099-28062418db0b-kube-proxy\") pod \"kube-proxy-xtvj5\" (UID: \"85a32971-5f12-4a19-9099-28062418db0b\") " pod="kube-system/kube-proxy-xtvj5"
Sep 4 17:18:24.578277 containerd[2021]: time="2024-09-04T17:18:24.576917645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-jfd2h,Uid:bd81a1ec-acb2-4298-b8c4-1083d66ab89c,Namespace:tigera-operator,Attempt:0,}"
Sep 4 17:18:24.647295 containerd[2021]: time="2024-09-04T17:18:24.646895978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:18:24.647295 containerd[2021]: time="2024-09-04T17:18:24.647033561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:18:24.647295 containerd[2021]: time="2024-09-04T17:18:24.647229351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:18:24.649347 containerd[2021]: time="2024-09-04T17:18:24.648665309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:18:24.684488 containerd[2021]: time="2024-09-04T17:18:24.682568581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xtvj5,Uid:85a32971-5f12-4a19-9099-28062418db0b,Namespace:kube-system,Attempt:0,}"
Sep 4 17:18:24.700675 systemd[1]: Started cri-containerd-6b88d9b5161784ff41f489f6b393048e6c59e7a842c5123f5edb4f6b6145e4ff.scope - libcontainer container 6b88d9b5161784ff41f489f6b393048e6c59e7a842c5123f5edb4f6b6145e4ff.
Sep 4 17:18:24.737877 containerd[2021]: time="2024-09-04T17:18:24.737387530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:18:24.737877 containerd[2021]: time="2024-09-04T17:18:24.737522654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:18:24.737877 containerd[2021]: time="2024-09-04T17:18:24.737561551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:18:24.737877 containerd[2021]: time="2024-09-04T17:18:24.737725017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:18:24.784642 systemd[1]: Started cri-containerd-baf2a3d949097a277da84421061d01aa9db4a116922f566d2273c760eb81764d.scope - libcontainer container baf2a3d949097a277da84421061d01aa9db4a116922f566d2273c760eb81764d.
Sep 4 17:18:24.812722 containerd[2021]: time="2024-09-04T17:18:24.812642708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-jfd2h,Uid:bd81a1ec-acb2-4298-b8c4-1083d66ab89c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6b88d9b5161784ff41f489f6b393048e6c59e7a842c5123f5edb4f6b6145e4ff\""
Sep 4 17:18:24.818591 containerd[2021]: time="2024-09-04T17:18:24.818203174Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 17:18:24.856919 containerd[2021]: time="2024-09-04T17:18:24.856795273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xtvj5,Uid:85a32971-5f12-4a19-9099-28062418db0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"baf2a3d949097a277da84421061d01aa9db4a116922f566d2273c760eb81764d\""
Sep 4 17:18:24.865368 containerd[2021]: time="2024-09-04T17:18:24.865306731Z" level=info msg="CreateContainer within sandbox \"baf2a3d949097a277da84421061d01aa9db4a116922f566d2273c760eb81764d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 17:18:24.887177 containerd[2021]: time="2024-09-04T17:18:24.886904986Z" level=info msg="CreateContainer within sandbox \"baf2a3d949097a277da84421061d01aa9db4a116922f566d2273c760eb81764d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"236acb12ab35a2df11951e73deb2879b3a17fb706dc024a842b7fdf3c1418a8a\""
Sep 4 17:18:24.888801 containerd[2021]: time="2024-09-04T17:18:24.888586677Z" level=info msg="StartContainer for \"236acb12ab35a2df11951e73deb2879b3a17fb706dc024a842b7fdf3c1418a8a\""
Sep 4 17:18:24.966349 systemd[1]: Started cri-containerd-236acb12ab35a2df11951e73deb2879b3a17fb706dc024a842b7fdf3c1418a8a.scope - libcontainer container 236acb12ab35a2df11951e73deb2879b3a17fb706dc024a842b7fdf3c1418a8a.
Sep 4 17:18:25.027166 containerd[2021]: time="2024-09-04T17:18:25.026524178Z" level=info msg="StartContainer for \"236acb12ab35a2df11951e73deb2879b3a17fb706dc024a842b7fdf3c1418a8a\" returns successfully"
Sep 4 17:18:26.156605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112478694.mount: Deactivated successfully.
Sep 4 17:18:26.913505 containerd[2021]: time="2024-09-04T17:18:26.913205319Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:18:26.915237 containerd[2021]: time="2024-09-04T17:18:26.915107304Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485911"
Sep 4 17:18:26.916754 containerd[2021]: time="2024-09-04T17:18:26.916658140Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:18:26.923186 containerd[2021]: time="2024-09-04T17:18:26.923064435Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:18:26.926369 containerd[2021]: time="2024-09-04T17:18:26.925046048Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.106764709s"
Sep 4 17:18:26.926369 containerd[2021]: time="2024-09-04T17:18:26.925149412Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Sep 4 17:18:26.929924 containerd[2021]: time="2024-09-04T17:18:26.929821399Z" level=info msg="CreateContainer within sandbox \"6b88d9b5161784ff41f489f6b393048e6c59e7a842c5123f5edb4f6b6145e4ff\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 17:18:26.965830 containerd[2021]: time="2024-09-04T17:18:26.965686650Z" level=info msg="CreateContainer within sandbox \"6b88d9b5161784ff41f489f6b393048e6c59e7a842c5123f5edb4f6b6145e4ff\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"872d7fe6e4668bc1f4a4ef6cf7941cdfc5885e37c65a1951f1686e391e5de094\""
Sep 4 17:18:26.971069 containerd[2021]: time="2024-09-04T17:18:26.969500850Z" level=info msg="StartContainer for \"872d7fe6e4668bc1f4a4ef6cf7941cdfc5885e37c65a1951f1686e391e5de094\""
Sep 4 17:18:27.035667 systemd[1]: run-containerd-runc-k8s.io-872d7fe6e4668bc1f4a4ef6cf7941cdfc5885e37c65a1951f1686e391e5de094-runc.eLV2vu.mount: Deactivated successfully.
Sep 4 17:18:27.047314 systemd[1]: Started cri-containerd-872d7fe6e4668bc1f4a4ef6cf7941cdfc5885e37c65a1951f1686e391e5de094.scope - libcontainer container 872d7fe6e4668bc1f4a4ef6cf7941cdfc5885e37c65a1951f1686e391e5de094.
Sep 4 17:18:27.114826 containerd[2021]: time="2024-09-04T17:18:27.114745479Z" level=info msg="StartContainer for \"872d7fe6e4668bc1f4a4ef6cf7941cdfc5885e37c65a1951f1686e391e5de094\" returns successfully"
Sep 4 17:18:27.397476 kubelet[3213]: I0904 17:18:27.392701 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xtvj5" podStartSLOduration=3.392540721 podCreationTimestamp="2024-09-04 17:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:25.365059759 +0000 UTC m=+14.597987256" watchObservedRunningTime="2024-09-04 17:18:27.392540721 +0000 UTC m=+16.625468206"
Sep 4 17:18:31.851973 kubelet[3213]: I0904 17:18:31.849471 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-jfd2h" podStartSLOduration=5.7406824499999995 podCreationTimestamp="2024-09-04 17:18:24 +0000 UTC" firstStartedPulling="2024-09-04 17:18:24.817024692 +0000 UTC m=+14.049952165" lastFinishedPulling="2024-09-04 17:18:26.92575042 +0000 UTC m=+16.158677881" observedRunningTime="2024-09-04 17:18:27.400336401 +0000 UTC m=+16.633264690" watchObservedRunningTime="2024-09-04 17:18:31.849408166 +0000 UTC m=+21.082335651"
Sep 4 17:18:31.851973 kubelet[3213]: I0904 17:18:31.849773 3213 topology_manager.go:215] "Topology Admit Handler" podUID="13a27632-4d8a-43f5-b31a-56c030357a85" podNamespace="calico-system" podName="calico-typha-6d664767d5-29znw"
Sep 4 17:18:31.858772 kubelet[3213]: W0904 17:18:31.858708 3213 reflector.go:535] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-22-59" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-59' and this object
Sep 4 17:18:31.859012 kubelet[3213]: E0904 17:18:31.858783 3213 reflector.go:147] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-22-59" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-59' and this object
Sep 4 17:18:31.859012 kubelet[3213]: W0904 17:18:31.858896 3213 reflector.go:535] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-22-59" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-59' and this object
Sep 4 17:18:31.859012 kubelet[3213]: E0904 17:18:31.858925 3213 reflector.go:147] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-22-59" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-59' and this object
Sep 4 17:18:31.861158 kubelet[3213]: W0904 17:18:31.861097 3213 reflector.go:535] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-22-59" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-59' and this object
Sep 4 17:18:31.861158 kubelet[3213]: E0904 17:18:31.861159 3213 reflector.go:147] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-22-59" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-59' and this object
Sep 4 17:18:31.874251 systemd[1]: Created slice kubepods-besteffort-pod13a27632_4d8a_43f5_b31a_56c030357a85.slice - libcontainer container kubepods-besteffort-pod13a27632_4d8a_43f5_b31a_56c030357a85.slice.
Sep 4 17:18:31.895438 kubelet[3213]: I0904 17:18:31.895333 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lpmh\" (UniqueName: \"kubernetes.io/projected/13a27632-4d8a-43f5-b31a-56c030357a85-kube-api-access-6lpmh\") pod \"calico-typha-6d664767d5-29znw\" (UID: \"13a27632-4d8a-43f5-b31a-56c030357a85\") " pod="calico-system/calico-typha-6d664767d5-29znw"
Sep 4 17:18:31.895438 kubelet[3213]: I0904 17:18:31.895451 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13a27632-4d8a-43f5-b31a-56c030357a85-tigera-ca-bundle\") pod \"calico-typha-6d664767d5-29znw\" (UID: \"13a27632-4d8a-43f5-b31a-56c030357a85\") " pod="calico-system/calico-typha-6d664767d5-29znw"
Sep 4 17:18:31.895808 kubelet[3213]: I0904 17:18:31.895508 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/13a27632-4d8a-43f5-b31a-56c030357a85-typha-certs\") pod \"calico-typha-6d664767d5-29znw\" (UID: \"13a27632-4d8a-43f5-b31a-56c030357a85\") " pod="calico-system/calico-typha-6d664767d5-29znw"
Sep 4 17:18:32.081980 kubelet[3213]: I0904 17:18:32.081525 3213 topology_manager.go:215] "Topology Admit Handler" podUID="957309b4-fec6-4832-88e9-e2d7e0874dbb" podNamespace="calico-system" podName="calico-node-hzz2t"
Sep 4 17:18:32.104090 systemd[1]: Created slice kubepods-besteffort-pod957309b4_fec6_4832_88e9_e2d7e0874dbb.slice - libcontainer container kubepods-besteffort-pod957309b4_fec6_4832_88e9_e2d7e0874dbb.slice.
Sep 4 17:18:32.199018 kubelet[3213]: I0904 17:18:32.197784 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/957309b4-fec6-4832-88e9-e2d7e0874dbb-lib-modules\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.199508 kubelet[3213]: I0904 17:18:32.199428 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/957309b4-fec6-4832-88e9-e2d7e0874dbb-cni-log-dir\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.200076 kubelet[3213]: I0904 17:18:32.199927 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/957309b4-fec6-4832-88e9-e2d7e0874dbb-node-certs\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.200901 kubelet[3213]: I0904 17:18:32.200741 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/957309b4-fec6-4832-88e9-e2d7e0874dbb-cni-net-dir\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.202043 kubelet[3213]: I0904 17:18:32.201327 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgn8\" (UniqueName: \"kubernetes.io/projected/957309b4-fec6-4832-88e9-e2d7e0874dbb-kube-api-access-mlgn8\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.202256 kubelet[3213]: I0904 17:18:32.202100 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/957309b4-fec6-4832-88e9-e2d7e0874dbb-policysync\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.202256 kubelet[3213]: I0904 17:18:32.202190 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/957309b4-fec6-4832-88e9-e2d7e0874dbb-var-run-calico\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.202256 kubelet[3213]: I0904 17:18:32.202244 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/957309b4-fec6-4832-88e9-e2d7e0874dbb-flexvol-driver-host\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.202815 kubelet[3213]: I0904 17:18:32.202300 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/957309b4-fec6-4832-88e9-e2d7e0874dbb-tigera-ca-bundle\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.202815 kubelet[3213]: I0904 17:18:32.202349 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/957309b4-fec6-4832-88e9-e2d7e0874dbb-var-lib-calico\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.202815 kubelet[3213]: I0904 17:18:32.202409 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/957309b4-fec6-4832-88e9-e2d7e0874dbb-xtables-lock\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.202815 kubelet[3213]: I0904 17:18:32.202460 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/957309b4-fec6-4832-88e9-e2d7e0874dbb-cni-bin-dir\") pod \"calico-node-hzz2t\" (UID: \"957309b4-fec6-4832-88e9-e2d7e0874dbb\") " pod="calico-system/calico-node-hzz2t"
Sep 4 17:18:32.233175 kubelet[3213]: I0904 17:18:32.230721 3213 topology_manager.go:215] "Topology Admit Handler" podUID="34e7c28a-1deb-4336-b6f8-a4fcfffd40c0" podNamespace="calico-system" podName="csi-node-driver-zdx2g"
Sep 4 17:18:32.233175 kubelet[3213]: E0904 17:18:32.231450 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdx2g" podUID="34e7c28a-1deb-4336-b6f8-a4fcfffd40c0"
Sep 4 17:18:32.303129 kubelet[3213]: I0904 17:18:32.303075 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/34e7c28a-1deb-4336-b6f8-a4fcfffd40c0-registration-dir\") pod \"csi-node-driver-zdx2g\" (UID: \"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0\") " pod="calico-system/csi-node-driver-zdx2g"
Sep 4 17:18:32.303311 kubelet[3213]: I0904 17:18:32.303171 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/34e7c28a-1deb-4336-b6f8-a4fcfffd40c0-varrun\") pod \"csi-node-driver-zdx2g\" (UID: \"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0\") " pod="calico-system/csi-node-driver-zdx2g"
Sep 4 17:18:32.303311 kubelet[3213]: I0904 17:18:32.303221 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/34e7c28a-1deb-4336-b6f8-a4fcfffd40c0-socket-dir\") pod \"csi-node-driver-zdx2g\" (UID: \"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0\") " pod="calico-system/csi-node-driver-zdx2g"
Sep 4 17:18:32.303427 kubelet[3213]: I0904 17:18:32.303369 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2m2b\" (UniqueName: \"kubernetes.io/projected/34e7c28a-1deb-4336-b6f8-a4fcfffd40c0-kube-api-access-f2m2b\") pod \"csi-node-driver-zdx2g\" (UID: \"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0\") " pod="calico-system/csi-node-driver-zdx2g"
Sep 4 17:18:32.303427 kubelet[3213]: I0904 17:18:32.303413 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34e7c28a-1deb-4336-b6f8-a4fcfffd40c0-kubelet-dir\") pod \"csi-node-driver-zdx2g\" (UID: \"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0\") " pod="calico-system/csi-node-driver-zdx2g"
Sep 4 17:18:32.320167 kubelet[3213]: E0904 17:18:32.320107 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.320167 kubelet[3213]: W0904 17:18:32.320153 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.320403 kubelet[3213]: E0904 17:18:32.320218 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.405677 kubelet[3213]: E0904 17:18:32.405438 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.406427 kubelet[3213]: W0904 17:18:32.405890 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.406427 kubelet[3213]: E0904 17:18:32.406018 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.410842 kubelet[3213]: E0904 17:18:32.409420 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.410842 kubelet[3213]: W0904 17:18:32.409458 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.410842 kubelet[3213]: E0904 17:18:32.409521 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.413416 kubelet[3213]: E0904 17:18:32.412675 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.413416 kubelet[3213]: W0904 17:18:32.412708 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.413416 kubelet[3213]: E0904 17:18:32.412752 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.414548 kubelet[3213]: E0904 17:18:32.414389 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.414548 kubelet[3213]: W0904 17:18:32.414421 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.414548 kubelet[3213]: E0904 17:18:32.414513 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.416268 kubelet[3213]: E0904 17:18:32.415997 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.416268 kubelet[3213]: W0904 17:18:32.416029 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.416268 kubelet[3213]: E0904 17:18:32.416101 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.417260 kubelet[3213]: E0904 17:18:32.416882 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.417260 kubelet[3213]: W0904 17:18:32.416962 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.417260 kubelet[3213]: E0904 17:18:32.417080 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.418159 kubelet[3213]: E0904 17:18:32.417883 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.418159 kubelet[3213]: W0904 17:18:32.417913 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.418159 kubelet[3213]: E0904 17:18:32.418104 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.419535 kubelet[3213]: E0904 17:18:32.419453 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.419765 kubelet[3213]: W0904 17:18:32.419501 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.419765 kubelet[3213]: E0904 17:18:32.419812 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.422095 kubelet[3213]: E0904 17:18:32.421916 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.422095 kubelet[3213]: W0904 17:18:32.421981 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.423181 kubelet[3213]: E0904 17:18:32.422385 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.423181 kubelet[3213]: E0904 17:18:32.423184 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.426420 kubelet[3213]: W0904 17:18:32.423245 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.426420 kubelet[3213]: E0904 17:18:32.423531 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.427321 kubelet[3213]: E0904 17:18:32.427265 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.427321 kubelet[3213]: W0904 17:18:32.427306 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.427655 kubelet[3213]: E0904 17:18:32.427470 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.429468 kubelet[3213]: E0904 17:18:32.429418 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.429468 kubelet[3213]: W0904 17:18:32.429455 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.429739 kubelet[3213]: E0904 17:18:32.429507 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.431722 kubelet[3213]: E0904 17:18:32.431673 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.431722 kubelet[3213]: W0904 17:18:32.431710 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.432143 kubelet[3213]: E0904 17:18:32.431864 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.436021 kubelet[3213]: E0904 17:18:32.433823 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.436021 kubelet[3213]: W0904 17:18:32.433926 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.436021 kubelet[3213]: E0904 17:18:32.434002 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.436908 kubelet[3213]: E0904 17:18:32.436621 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.436908 kubelet[3213]: W0904 17:18:32.436654 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.436908 kubelet[3213]: E0904 17:18:32.436691 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.437741 kubelet[3213]: E0904 17:18:32.437526 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.437741 kubelet[3213]: W0904 17:18:32.437552 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.437741 kubelet[3213]: E0904 17:18:32.437610 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.438550 kubelet[3213]: E0904 17:18:32.438312 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.438550 kubelet[3213]: W0904 17:18:32.438342 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.438550 kubelet[3213]: E0904 17:18:32.438386 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.439194 kubelet[3213]: E0904 17:18:32.439059 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.439194 kubelet[3213]: W0904 17:18:32.439088 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.439194 kubelet[3213]: E0904 17:18:32.439139 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.440026 kubelet[3213]: E0904 17:18:32.439910 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.440026 kubelet[3213]: W0904 17:18:32.439970 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.442619 kubelet[3213]: E0904 17:18:32.442137 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.443091 kubelet[3213]: E0904 17:18:32.442837 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.443091 kubelet[3213]: W0904 17:18:32.442879 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.443436 kubelet[3213]: E0904 17:18:32.443397 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.443893 kubelet[3213]: W0904 17:18:32.443642 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.443893 kubelet[3213]: E0904 17:18:32.443587 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.444219 kubelet[3213]: E0904 17:18:32.444131 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.447493 kubelet[3213]: E0904 17:18:32.445080 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.447493 kubelet[3213]: W0904 17:18:32.445113 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.448213 kubelet[3213]: E0904 17:18:32.447675 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.448815 kubelet[3213]: E0904 17:18:32.448598 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.448815 kubelet[3213]: W0904 17:18:32.448631 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.448815 kubelet[3213]: E0904 17:18:32.448679 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.450623 kubelet[3213]: E0904 17:18:32.450275 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.450623 kubelet[3213]: W0904 17:18:32.450338 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.450623 kubelet[3213]: E0904 17:18:32.450415 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:18:32.451229 kubelet[3213]: E0904 17:18:32.451005 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:18:32.451229 kubelet[3213]: W0904 17:18:32.451033 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:18:32.451229 kubelet[3213]: E0904 17:18:32.451098 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 4 17:18:32.451932 kubelet[3213]: E0904 17:18:32.451597 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.451932 kubelet[3213]: W0904 17:18:32.451715 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.451932 kubelet[3213]: E0904 17:18:32.451771 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.453778 kubelet[3213]: E0904 17:18:32.453523 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.453778 kubelet[3213]: W0904 17:18:32.453558 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.453778 kubelet[3213]: E0904 17:18:32.453611 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.454811 kubelet[3213]: E0904 17:18:32.454567 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.454811 kubelet[3213]: W0904 17:18:32.454600 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.454811 kubelet[3213]: E0904 17:18:32.454675 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.456593 kubelet[3213]: E0904 17:18:32.455234 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.456593 kubelet[3213]: W0904 17:18:32.455284 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.456593 kubelet[3213]: E0904 17:18:32.455351 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.457317 kubelet[3213]: E0904 17:18:32.457279 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.457587 kubelet[3213]: W0904 17:18:32.457550 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.457836 kubelet[3213]: E0904 17:18:32.457767 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.550580 kubelet[3213]: E0904 17:18:32.550542 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.550802 kubelet[3213]: W0904 17:18:32.550774 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.551010 kubelet[3213]: E0904 17:18:32.550979 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.551663 kubelet[3213]: E0904 17:18:32.551635 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.551858 kubelet[3213]: W0904 17:18:32.551835 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.552014 kubelet[3213]: E0904 17:18:32.551993 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.552649 kubelet[3213]: E0904 17:18:32.552622 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.552981 kubelet[3213]: W0904 17:18:32.552795 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.552981 kubelet[3213]: E0904 17:18:32.552840 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.553646 kubelet[3213]: E0904 17:18:32.553483 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.553646 kubelet[3213]: W0904 17:18:32.553509 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.553646 kubelet[3213]: E0904 17:18:32.553539 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.554404 kubelet[3213]: E0904 17:18:32.554228 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.554404 kubelet[3213]: W0904 17:18:32.554258 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.554404 kubelet[3213]: E0904 17:18:32.554291 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.555140 kubelet[3213]: E0904 17:18:32.555005 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.555140 kubelet[3213]: W0904 17:18:32.555033 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.555140 kubelet[3213]: E0904 17:18:32.555064 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.655974 kubelet[3213]: E0904 17:18:32.655816 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.655974 kubelet[3213]: W0904 17:18:32.655852 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.655974 kubelet[3213]: E0904 17:18:32.655887 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.656381 kubelet[3213]: E0904 17:18:32.656344 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.656381 kubelet[3213]: W0904 17:18:32.656374 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.656534 kubelet[3213]: E0904 17:18:32.656408 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.657513 kubelet[3213]: E0904 17:18:32.657363 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.657513 kubelet[3213]: W0904 17:18:32.657505 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.657719 kubelet[3213]: E0904 17:18:32.657545 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.660082 kubelet[3213]: E0904 17:18:32.658077 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.660082 kubelet[3213]: W0904 17:18:32.658107 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.660082 kubelet[3213]: E0904 17:18:32.658138 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.660082 kubelet[3213]: E0904 17:18:32.659524 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.660082 kubelet[3213]: W0904 17:18:32.659641 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.660082 kubelet[3213]: E0904 17:18:32.659758 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.660612 kubelet[3213]: E0904 17:18:32.660548 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.660612 kubelet[3213]: W0904 17:18:32.660604 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.660756 kubelet[3213]: E0904 17:18:32.660642 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.763846 kubelet[3213]: E0904 17:18:32.763797 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.764604 kubelet[3213]: W0904 17:18:32.764285 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.764604 kubelet[3213]: E0904 17:18:32.764343 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.768608 kubelet[3213]: E0904 17:18:32.767548 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.768608 kubelet[3213]: W0904 17:18:32.767611 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.768608 kubelet[3213]: E0904 17:18:32.767683 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.770282 kubelet[3213]: E0904 17:18:32.770100 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.771252 kubelet[3213]: W0904 17:18:32.771215 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.771586 kubelet[3213]: E0904 17:18:32.771525 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.774731 kubelet[3213]: E0904 17:18:32.774543 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.775878 kubelet[3213]: W0904 17:18:32.775217 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.775878 kubelet[3213]: E0904 17:18:32.775276 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.777626 kubelet[3213]: E0904 17:18:32.776698 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.777626 kubelet[3213]: W0904 17:18:32.776754 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.777626 kubelet[3213]: E0904 17:18:32.776833 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.780106 kubelet[3213]: E0904 17:18:32.780068 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.780978 kubelet[3213]: W0904 17:18:32.780288 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.780978 kubelet[3213]: E0904 17:18:32.780352 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.803033 kubelet[3213]: E0904 17:18:32.802560 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.803033 kubelet[3213]: W0904 17:18:32.802661 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.803033 kubelet[3213]: E0904 17:18:32.802750 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.823926 kubelet[3213]: E0904 17:18:32.823804 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.823926 kubelet[3213]: W0904 17:18:32.823867 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.825768 kubelet[3213]: E0904 17:18:32.824769 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.852437 kubelet[3213]: E0904 17:18:32.852224 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.852437 kubelet[3213]: W0904 17:18:32.852284 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.852437 kubelet[3213]: E0904 17:18:32.852345 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.883730 kubelet[3213]: E0904 17:18:32.883321 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.883730 kubelet[3213]: W0904 17:18:32.883375 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.883730 kubelet[3213]: E0904 17:18:32.883428 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.886149 kubelet[3213]: E0904 17:18:32.885683 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.886149 kubelet[3213]: W0904 17:18:32.885774 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.886149 kubelet[3213]: E0904 17:18:32.885821 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.888729 kubelet[3213]: E0904 17:18:32.888550 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.888729 kubelet[3213]: W0904 17:18:32.888593 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.888729 kubelet[3213]: E0904 17:18:32.888639 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.931732 kubelet[3213]: E0904 17:18:32.931517 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.933294 kubelet[3213]: W0904 17:18:32.933064 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.933294 kubelet[3213]: E0904 17:18:32.933176 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:32.991570 kubelet[3213]: E0904 17:18:32.991093 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.991570 kubelet[3213]: W0904 17:18:32.991149 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.991570 kubelet[3213]: E0904 17:18:32.991196 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:32.998035 kubelet[3213]: E0904 17:18:32.997689 3213 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:32.998035 kubelet[3213]: E0904 17:18:32.997842 3213 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13a27632-4d8a-43f5-b31a-56c030357a85-tigera-ca-bundle podName:13a27632-4d8a-43f5-b31a-56c030357a85 nodeName:}" failed. No retries permitted until 2024-09-04 17:18:33.497802113 +0000 UTC m=+22.730729622 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/13a27632-4d8a-43f5-b31a-56c030357a85-tigera-ca-bundle") pod "calico-typha-6d664767d5-29znw" (UID: "13a27632-4d8a-43f5-b31a-56c030357a85") : failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:32.999324 kubelet[3213]: E0904 17:18:32.999074 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:32.999324 kubelet[3213]: W0904 17:18:32.999127 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:32.999324 kubelet[3213]: E0904 17:18:32.999179 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:33.058832 kubelet[3213]: E0904 17:18:33.058749 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.059377 kubelet[3213]: W0904 17:18:33.059177 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.059377 kubelet[3213]: E0904 17:18:33.059268 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:33.101390 kubelet[3213]: E0904 17:18:33.101220 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.101390 kubelet[3213]: W0904 17:18:33.101253 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.101390 kubelet[3213]: E0904 17:18:33.101291 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:33.203930 kubelet[3213]: E0904 17:18:33.203311 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.203930 kubelet[3213]: W0904 17:18:33.203425 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.203930 kubelet[3213]: E0904 17:18:33.203531 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:33.305432 kubelet[3213]: E0904 17:18:33.305380 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.305892 kubelet[3213]: W0904 17:18:33.305642 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.305892 kubelet[3213]: E0904 17:18:33.305726 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:33.312792 containerd[2021]: time="2024-09-04T17:18:33.311895601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzz2t,Uid:957309b4-fec6-4832-88e9-e2d7e0874dbb,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:33.374308 containerd[2021]: time="2024-09-04T17:18:33.374030470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:33.374556 containerd[2021]: time="2024-09-04T17:18:33.374273828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:33.376498 containerd[2021]: time="2024-09-04T17:18:33.374793996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:33.377132 containerd[2021]: time="2024-09-04T17:18:33.376695177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:33.408681 kubelet[3213]: E0904 17:18:33.408594 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.409820 kubelet[3213]: W0904 17:18:33.409260 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.409820 kubelet[3213]: E0904 17:18:33.409347 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:33.462592 systemd[1]: Started cri-containerd-24d3dfbd7b72675ec396e67b6a32c76159a865b819c8ff73c29919229848179a.scope - libcontainer container 24d3dfbd7b72675ec396e67b6a32c76159a865b819c8ff73c29919229848179a. Sep 4 17:18:33.515796 kubelet[3213]: E0904 17:18:33.513302 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.515796 kubelet[3213]: W0904 17:18:33.513333 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.515796 kubelet[3213]: E0904 17:18:33.513370 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:33.515796 kubelet[3213]: E0904 17:18:33.515510 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.515796 kubelet[3213]: W0904 17:18:33.515536 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.515796 kubelet[3213]: E0904 17:18:33.515572 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:33.516594 kubelet[3213]: E0904 17:18:33.516564 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.516751 kubelet[3213]: W0904 17:18:33.516723 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.516865 kubelet[3213]: E0904 17:18:33.516846 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:33.517453 kubelet[3213]: E0904 17:18:33.517426 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.517717 kubelet[3213]: W0904 17:18:33.517565 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.517717 kubelet[3213]: E0904 17:18:33.517603 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:33.518991 kubelet[3213]: E0904 17:18:33.518320 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.518991 kubelet[3213]: W0904 17:18:33.518348 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.518991 kubelet[3213]: E0904 17:18:33.518380 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:33.521043 kubelet[3213]: E0904 17:18:33.521007 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:33.521323 kubelet[3213]: W0904 17:18:33.521241 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:33.521493 kubelet[3213]: E0904 17:18:33.521471 3213 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:33.559096 containerd[2021]: time="2024-09-04T17:18:33.557219006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzz2t,Uid:957309b4-fec6-4832-88e9-e2d7e0874dbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"24d3dfbd7b72675ec396e67b6a32c76159a865b819c8ff73c29919229848179a\"" Sep 4 17:18:33.567201 containerd[2021]: time="2024-09-04T17:18:33.566928401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:18:33.684055 containerd[2021]: time="2024-09-04T17:18:33.683091695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d664767d5-29znw,Uid:13a27632-4d8a-43f5-b31a-56c030357a85,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:33.775788 containerd[2021]: time="2024-09-04T17:18:33.775054035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:33.775788 containerd[2021]: time="2024-09-04T17:18:33.775210533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:33.775788 containerd[2021]: time="2024-09-04T17:18:33.775246851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:33.775788 containerd[2021]: time="2024-09-04T17:18:33.775550191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:33.849430 systemd[1]: Started cri-containerd-a086a6b3f44225f5388981c0330e4b849a72672246d6c66a34f06027ce122d89.scope - libcontainer container a086a6b3f44225f5388981c0330e4b849a72672246d6c66a34f06027ce122d89. Sep 4 17:18:34.063610 containerd[2021]: time="2024-09-04T17:18:34.063315805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d664767d5-29znw,Uid:13a27632-4d8a-43f5-b31a-56c030357a85,Namespace:calico-system,Attempt:0,} returns sandbox id \"a086a6b3f44225f5388981c0330e4b849a72672246d6c66a34f06027ce122d89\"" Sep 4 17:18:34.165996 kubelet[3213]: E0904 17:18:34.163683 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdx2g" podUID="34e7c28a-1deb-4336-b6f8-a4fcfffd40c0" Sep 4 17:18:34.912482 containerd[2021]: time="2024-09-04T17:18:34.911868087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:34.914890 containerd[2021]: time="2024-09-04T17:18:34.914196506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Sep 4 17:18:34.916028 containerd[2021]: time="2024-09-04T17:18:34.915870305Z" level=info msg="ImageCreate event 
name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:34.922287 containerd[2021]: time="2024-09-04T17:18:34.922043629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:34.927985 containerd[2021]: time="2024-09-04T17:18:34.925777781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.358646477s" Sep 4 17:18:34.927985 containerd[2021]: time="2024-09-04T17:18:34.925851088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Sep 4 17:18:34.929463 containerd[2021]: time="2024-09-04T17:18:34.928774218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:18:34.934450 containerd[2021]: time="2024-09-04T17:18:34.934348321Z" level=info msg="CreateContainer within sandbox \"24d3dfbd7b72675ec396e67b6a32c76159a865b819c8ff73c29919229848179a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:18:34.974412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580352977.mount: Deactivated successfully. 
Sep 4 17:18:34.986717 containerd[2021]: time="2024-09-04T17:18:34.986604728Z" level=info msg="CreateContainer within sandbox \"24d3dfbd7b72675ec396e67b6a32c76159a865b819c8ff73c29919229848179a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"22d698baa6671fc1c700bd6d4aa73e05f3c225012d5e7c05c5b35b0bcb6fb6c5\"" Sep 4 17:18:34.993049 containerd[2021]: time="2024-09-04T17:18:34.992244847Z" level=info msg="StartContainer for \"22d698baa6671fc1c700bd6d4aa73e05f3c225012d5e7c05c5b35b0bcb6fb6c5\"" Sep 4 17:18:35.123289 systemd[1]: Started cri-containerd-22d698baa6671fc1c700bd6d4aa73e05f3c225012d5e7c05c5b35b0bcb6fb6c5.scope - libcontainer container 22d698baa6671fc1c700bd6d4aa73e05f3c225012d5e7c05c5b35b0bcb6fb6c5. Sep 4 17:18:35.258803 containerd[2021]: time="2024-09-04T17:18:35.258629042Z" level=info msg="StartContainer for \"22d698baa6671fc1c700bd6d4aa73e05f3c225012d5e7c05c5b35b0bcb6fb6c5\" returns successfully" Sep 4 17:18:35.315358 systemd[1]: cri-containerd-22d698baa6671fc1c700bd6d4aa73e05f3c225012d5e7c05c5b35b0bcb6fb6c5.scope: Deactivated successfully. Sep 4 17:18:35.408499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22d698baa6671fc1c700bd6d4aa73e05f3c225012d5e7c05c5b35b0bcb6fb6c5-rootfs.mount: Deactivated successfully. 
Sep 4 17:18:35.626911 containerd[2021]: time="2024-09-04T17:18:35.626269547Z" level=info msg="shim disconnected" id=22d698baa6671fc1c700bd6d4aa73e05f3c225012d5e7c05c5b35b0bcb6fb6c5 namespace=k8s.io Sep 4 17:18:35.626911 containerd[2021]: time="2024-09-04T17:18:35.626395808Z" level=warning msg="cleaning up after shim disconnected" id=22d698baa6671fc1c700bd6d4aa73e05f3c225012d5e7c05c5b35b0bcb6fb6c5 namespace=k8s.io Sep 4 17:18:35.626911 containerd[2021]: time="2024-09-04T17:18:35.626425886Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:18:36.164450 kubelet[3213]: E0904 17:18:36.163796 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdx2g" podUID="34e7c28a-1deb-4336-b6f8-a4fcfffd40c0" Sep 4 17:18:37.744879 containerd[2021]: time="2024-09-04T17:18:37.744773596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:37.748199 containerd[2021]: time="2024-09-04T17:18:37.748126960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:18:37.750279 containerd[2021]: time="2024-09-04T17:18:37.749724040Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:37.761115 containerd[2021]: time="2024-09-04T17:18:37.760916752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:37.768186 containerd[2021]: time="2024-09-04T17:18:37.767918776Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.838897147s" Sep 4 17:18:37.768491 containerd[2021]: time="2024-09-04T17:18:37.768189436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:18:37.775990 containerd[2021]: time="2024-09-04T17:18:37.772513804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:18:37.837967 containerd[2021]: time="2024-09-04T17:18:37.837641296Z" level=info msg="CreateContainer within sandbox \"a086a6b3f44225f5388981c0330e4b849a72672246d6c66a34f06027ce122d89\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:18:37.905713 containerd[2021]: time="2024-09-04T17:18:37.905584553Z" level=info msg="CreateContainer within sandbox \"a086a6b3f44225f5388981c0330e4b849a72672246d6c66a34f06027ce122d89\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3c06478bdaa4958079043510b6246048d9e2fcad0c9b683ecb9ea620edd33672\"" Sep 4 17:18:37.907992 containerd[2021]: time="2024-09-04T17:18:37.907143929Z" level=info msg="StartContainer for \"3c06478bdaa4958079043510b6246048d9e2fcad0c9b683ecb9ea620edd33672\"" Sep 4 17:18:37.993281 systemd[1]: Started cri-containerd-3c06478bdaa4958079043510b6246048d9e2fcad0c9b683ecb9ea620edd33672.scope - libcontainer container 3c06478bdaa4958079043510b6246048d9e2fcad0c9b683ecb9ea620edd33672. 
Sep 4 17:18:38.164657 kubelet[3213]: E0904 17:18:38.163526 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdx2g" podUID="34e7c28a-1deb-4336-b6f8-a4fcfffd40c0" Sep 4 17:18:38.172012 containerd[2021]: time="2024-09-04T17:18:38.171248810Z" level=info msg="StartContainer for \"3c06478bdaa4958079043510b6246048d9e2fcad0c9b683ecb9ea620edd33672\" returns successfully" Sep 4 17:18:39.477230 kubelet[3213]: I0904 17:18:39.476285 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:18:40.162975 kubelet[3213]: E0904 17:18:40.162506 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdx2g" podUID="34e7c28a-1deb-4336-b6f8-a4fcfffd40c0" Sep 4 17:18:42.163423 kubelet[3213]: E0904 17:18:42.163344 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zdx2g" podUID="34e7c28a-1deb-4336-b6f8-a4fcfffd40c0" Sep 4 17:18:42.426140 containerd[2021]: time="2024-09-04T17:18:42.425732659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:42.427832 containerd[2021]: time="2024-09-04T17:18:42.427763587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Sep 4 17:18:42.429364 containerd[2021]: time="2024-09-04T17:18:42.429200503Z" level=info msg="ImageCreate 
event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:42.435087 containerd[2021]: time="2024-09-04T17:18:42.434899915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:42.437105 containerd[2021]: time="2024-09-04T17:18:42.436822555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.664055167s" Sep 4 17:18:42.437105 containerd[2021]: time="2024-09-04T17:18:42.436896967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Sep 4 17:18:42.442862 containerd[2021]: time="2024-09-04T17:18:42.442562203Z" level=info msg="CreateContainer within sandbox \"24d3dfbd7b72675ec396e67b6a32c76159a865b819c8ff73c29919229848179a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:18:42.465905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796650340.mount: Deactivated successfully. 
Sep 4 17:18:42.470242 containerd[2021]: time="2024-09-04T17:18:42.469305643Z" level=info msg="CreateContainer within sandbox \"24d3dfbd7b72675ec396e67b6a32c76159a865b819c8ff73c29919229848179a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"948d4c6a427a93453c826c19dd35a5e0bf17c9b5d0dc2b5166ac7f455778ef41\"" Sep 4 17:18:42.472174 containerd[2021]: time="2024-09-04T17:18:42.470717851Z" level=info msg="StartContainer for \"948d4c6a427a93453c826c19dd35a5e0bf17c9b5d0dc2b5166ac7f455778ef41\"" Sep 4 17:18:42.556590 systemd[1]: Started cri-containerd-948d4c6a427a93453c826c19dd35a5e0bf17c9b5d0dc2b5166ac7f455778ef41.scope - libcontainer container 948d4c6a427a93453c826c19dd35a5e0bf17c9b5d0dc2b5166ac7f455778ef41. Sep 4 17:18:42.620264 containerd[2021]: time="2024-09-04T17:18:42.620029064Z" level=info msg="StartContainer for \"948d4c6a427a93453c826c19dd35a5e0bf17c9b5d0dc2b5166ac7f455778ef41\" returns successfully" Sep 4 17:18:43.548697 kubelet[3213]: I0904 17:18:43.548624 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6d664767d5-29znw" podStartSLOduration=8.857038804 podCreationTimestamp="2024-09-04 17:18:31 +0000 UTC" firstStartedPulling="2024-09-04 17:18:34.077671424 +0000 UTC m=+23.310598884" lastFinishedPulling="2024-09-04 17:18:37.769194472 +0000 UTC m=+27.002121945" observedRunningTime="2024-09-04 17:18:38.507069112 +0000 UTC m=+27.739996597" watchObservedRunningTime="2024-09-04 17:18:43.548561865 +0000 UTC m=+32.781489362" Sep 4 17:18:43.587894 containerd[2021]: time="2024-09-04T17:18:43.587822853Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:18:43.594233 systemd[1]: cri-containerd-948d4c6a427a93453c826c19dd35a5e0bf17c9b5d0dc2b5166ac7f455778ef41.scope: Deactivated 
successfully. Sep 4 17:18:43.645206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-948d4c6a427a93453c826c19dd35a5e0bf17c9b5d0dc2b5166ac7f455778ef41-rootfs.mount: Deactivated successfully. Sep 4 17:18:43.676346 kubelet[3213]: I0904 17:18:43.676226 3213 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:18:43.733186 kubelet[3213]: I0904 17:18:43.730714 3213 topology_manager.go:215] "Topology Admit Handler" podUID="00846270-5600-49b2-8fa0-755bfb6bc2f1" podNamespace="kube-system" podName="coredns-5dd5756b68-hpjrs" Sep 4 17:18:43.745611 kubelet[3213]: I0904 17:18:43.745523 3213 topology_manager.go:215] "Topology Admit Handler" podUID="ab091938-ea71-49ca-bf29-e750e1990329" podNamespace="kube-system" podName="coredns-5dd5756b68-r55tp" Sep 4 17:18:43.759003 kubelet[3213]: I0904 17:18:43.755548 3213 topology_manager.go:215] "Topology Admit Handler" podUID="9e686dba-47ca-4edd-83cd-098666d46a40" podNamespace="calico-system" podName="calico-kube-controllers-844466c8d6-b5z26" Sep 4 17:18:43.770314 systemd[1]: Created slice kubepods-burstable-pod00846270_5600_49b2_8fa0_755bfb6bc2f1.slice - libcontainer container kubepods-burstable-pod00846270_5600_49b2_8fa0_755bfb6bc2f1.slice. Sep 4 17:18:43.801000 systemd[1]: Created slice kubepods-burstable-podab091938_ea71_49ca_bf29_e750e1990329.slice - libcontainer container kubepods-burstable-podab091938_ea71_49ca_bf29_e750e1990329.slice. Sep 4 17:18:43.848485 systemd[1]: Created slice kubepods-besteffort-pod9e686dba_47ca_4edd_83cd_098666d46a40.slice - libcontainer container kubepods-besteffort-pod9e686dba_47ca_4edd_83cd_098666d46a40.slice. 
Sep 4 17:18:43.923050 kubelet[3213]: I0904 17:18:43.922962 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab091938-ea71-49ca-bf29-e750e1990329-config-volume\") pod \"coredns-5dd5756b68-r55tp\" (UID: \"ab091938-ea71-49ca-bf29-e750e1990329\") " pod="kube-system/coredns-5dd5756b68-r55tp" Sep 4 17:18:43.923249 kubelet[3213]: I0904 17:18:43.923125 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00846270-5600-49b2-8fa0-755bfb6bc2f1-config-volume\") pod \"coredns-5dd5756b68-hpjrs\" (UID: \"00846270-5600-49b2-8fa0-755bfb6bc2f1\") " pod="kube-system/coredns-5dd5756b68-hpjrs" Sep 4 17:18:43.923249 kubelet[3213]: I0904 17:18:43.923197 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmlb\" (UniqueName: \"kubernetes.io/projected/9e686dba-47ca-4edd-83cd-098666d46a40-kube-api-access-hcmlb\") pod \"calico-kube-controllers-844466c8d6-b5z26\" (UID: \"9e686dba-47ca-4edd-83cd-098666d46a40\") " pod="calico-system/calico-kube-controllers-844466c8d6-b5z26" Sep 4 17:18:43.923374 kubelet[3213]: I0904 17:18:43.923257 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e686dba-47ca-4edd-83cd-098666d46a40-tigera-ca-bundle\") pod \"calico-kube-controllers-844466c8d6-b5z26\" (UID: \"9e686dba-47ca-4edd-83cd-098666d46a40\") " pod="calico-system/calico-kube-controllers-844466c8d6-b5z26" Sep 4 17:18:43.923374 kubelet[3213]: I0904 17:18:43.923309 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brxjg\" (UniqueName: \"kubernetes.io/projected/ab091938-ea71-49ca-bf29-e750e1990329-kube-api-access-brxjg\") pod \"coredns-5dd5756b68-r55tp\" (UID: 
\"ab091938-ea71-49ca-bf29-e750e1990329\") " pod="kube-system/coredns-5dd5756b68-r55tp" Sep 4 17:18:43.923374 kubelet[3213]: I0904 17:18:43.923361 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wh5\" (UniqueName: \"kubernetes.io/projected/00846270-5600-49b2-8fa0-755bfb6bc2f1-kube-api-access-m4wh5\") pod \"coredns-5dd5756b68-hpjrs\" (UID: \"00846270-5600-49b2-8fa0-755bfb6bc2f1\") " pod="kube-system/coredns-5dd5756b68-hpjrs" Sep 4 17:18:44.094808 containerd[2021]: time="2024-09-04T17:18:44.093098047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hpjrs,Uid:00846270-5600-49b2-8fa0-755bfb6bc2f1,Namespace:kube-system,Attempt:0,}" Sep 4 17:18:44.124979 containerd[2021]: time="2024-09-04T17:18:44.124891291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r55tp,Uid:ab091938-ea71-49ca-bf29-e750e1990329,Namespace:kube-system,Attempt:0,}" Sep 4 17:18:44.159757 containerd[2021]: time="2024-09-04T17:18:44.159391496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844466c8d6-b5z26,Uid:9e686dba-47ca-4edd-83cd-098666d46a40,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:44.175288 systemd[1]: Created slice kubepods-besteffort-pod34e7c28a_1deb_4336_b6f8_a4fcfffd40c0.slice - libcontainer container kubepods-besteffort-pod34e7c28a_1deb_4336_b6f8_a4fcfffd40c0.slice. 
Sep 4 17:18:44.181822 containerd[2021]: time="2024-09-04T17:18:44.181083812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdx2g,Uid:34e7c28a-1deb-4336-b6f8-a4fcfffd40c0,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:44.838391 containerd[2021]: time="2024-09-04T17:18:44.837888275Z" level=info msg="shim disconnected" id=948d4c6a427a93453c826c19dd35a5e0bf17c9b5d0dc2b5166ac7f455778ef41 namespace=k8s.io Sep 4 17:18:44.838391 containerd[2021]: time="2024-09-04T17:18:44.838039175Z" level=warning msg="cleaning up after shim disconnected" id=948d4c6a427a93453c826c19dd35a5e0bf17c9b5d0dc2b5166ac7f455778ef41 namespace=k8s.io Sep 4 17:18:44.838391 containerd[2021]: time="2024-09-04T17:18:44.838070183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:18:45.238770 containerd[2021]: time="2024-09-04T17:18:45.238333977Z" level=error msg="Failed to destroy network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.243254 containerd[2021]: time="2024-09-04T17:18:45.241899645Z" level=error msg="encountered an error cleaning up failed sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.243438 containerd[2021]: time="2024-09-04T17:18:45.243297069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdx2g,Uid:34e7c28a-1deb-4336-b6f8-a4fcfffd40c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.243711 kubelet[3213]: E0904 17:18:45.243665 3213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.246614 kubelet[3213]: E0904 17:18:45.243765 3213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zdx2g" Sep 4 17:18:45.246614 kubelet[3213]: E0904 17:18:45.243807 3213 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zdx2g" Sep 4 17:18:45.246614 kubelet[3213]: E0904 17:18:45.243906 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zdx2g_calico-system(34e7c28a-1deb-4336-b6f8-a4fcfffd40c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zdx2g_calico-system(34e7c28a-1deb-4336-b6f8-a4fcfffd40c0)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zdx2g" podUID="34e7c28a-1deb-4336-b6f8-a4fcfffd40c0" Sep 4 17:18:45.251568 containerd[2021]: time="2024-09-04T17:18:45.249805329Z" level=error msg="Failed to destroy network for sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.254832 containerd[2021]: time="2024-09-04T17:18:45.254516145Z" level=error msg="encountered an error cleaning up failed sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.255960 containerd[2021]: time="2024-09-04T17:18:45.255680997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r55tp,Uid:ab091938-ea71-49ca-bf29-e750e1990329,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.256572 kubelet[3213]: E0904 17:18:45.256428 3213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.256572 kubelet[3213]: E0904 17:18:45.256533 3213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-r55tp" Sep 4 17:18:45.257423 kubelet[3213]: E0904 17:18:45.257040 3213 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-r55tp" Sep 4 17:18:45.257423 kubelet[3213]: E0904 17:18:45.257226 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-r55tp_kube-system(ab091938-ea71-49ca-bf29-e750e1990329)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-r55tp_kube-system(ab091938-ea71-49ca-bf29-e750e1990329)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-r55tp" 
podUID="ab091938-ea71-49ca-bf29-e750e1990329" Sep 4 17:18:45.258043 containerd[2021]: time="2024-09-04T17:18:45.257776773Z" level=error msg="Failed to destroy network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.258874 containerd[2021]: time="2024-09-04T17:18:45.258348393Z" level=error msg="encountered an error cleaning up failed sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.258874 containerd[2021]: time="2024-09-04T17:18:45.258444417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844466c8d6-b5z26,Uid:9e686dba-47ca-4edd-83cd-098666d46a40,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.259168 kubelet[3213]: E0904 17:18:45.258781 3213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.259168 kubelet[3213]: E0904 17:18:45.258848 3213 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-844466c8d6-b5z26" Sep 4 17:18:45.259168 kubelet[3213]: E0904 17:18:45.258892 3213 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-844466c8d6-b5z26" Sep 4 17:18:45.259357 kubelet[3213]: E0904 17:18:45.258996 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-844466c8d6-b5z26_calico-system(9e686dba-47ca-4edd-83cd-098666d46a40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-844466c8d6-b5z26_calico-system(9e686dba-47ca-4edd-83cd-098666d46a40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-844466c8d6-b5z26" podUID="9e686dba-47ca-4edd-83cd-098666d46a40" Sep 4 17:18:45.260275 containerd[2021]: time="2024-09-04T17:18:45.260161989Z" level=error msg="Failed to destroy network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.261059 containerd[2021]: time="2024-09-04T17:18:45.260768097Z" level=error msg="encountered an error cleaning up failed sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.261059 containerd[2021]: time="2024-09-04T17:18:45.260849397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hpjrs,Uid:00846270-5600-49b2-8fa0-755bfb6bc2f1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.261427 kubelet[3213]: E0904 17:18:45.261386 3213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.261596 kubelet[3213]: E0904 17:18:45.261463 3213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-hpjrs" Sep 4 17:18:45.261596 kubelet[3213]: E0904 17:18:45.261502 3213 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-hpjrs" Sep 4 17:18:45.261886 kubelet[3213]: E0904 17:18:45.261591 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-hpjrs_kube-system(00846270-5600-49b2-8fa0-755bfb6bc2f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-hpjrs_kube-system(00846270-5600-49b2-8fa0-755bfb6bc2f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-hpjrs" podUID="00846270-5600-49b2-8fa0-755bfb6bc2f1" Sep 4 17:18:45.528060 kubelet[3213]: I0904 17:18:45.527084 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:18:45.531388 containerd[2021]: time="2024-09-04T17:18:45.530693194Z" level=info msg="StopPodSandbox for \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\"" Sep 4 17:18:45.533002 containerd[2021]: time="2024-09-04T17:18:45.532316434Z" level=info msg="Ensure that sandbox 595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333 in task-service has been cleanup successfully" Sep 4 
17:18:45.537691 kubelet[3213]: I0904 17:18:45.536282 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Sep 4 17:18:45.538683 containerd[2021]: time="2024-09-04T17:18:45.538606042Z" level=info msg="StopPodSandbox for \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\"" Sep 4 17:18:45.540277 containerd[2021]: time="2024-09-04T17:18:45.540213142Z" level=info msg="Ensure that sandbox 4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686 in task-service has been cleanup successfully" Sep 4 17:18:45.544506 kubelet[3213]: I0904 17:18:45.544315 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:18:45.550771 containerd[2021]: time="2024-09-04T17:18:45.550644815Z" level=info msg="StopPodSandbox for \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\"" Sep 4 17:18:45.555021 containerd[2021]: time="2024-09-04T17:18:45.553711259Z" level=info msg="Ensure that sandbox b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543 in task-service has been cleanup successfully" Sep 4 17:18:45.563774 kubelet[3213]: I0904 17:18:45.563349 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:18:45.568900 containerd[2021]: time="2024-09-04T17:18:45.567775451Z" level=info msg="StopPodSandbox for \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\"" Sep 4 17:18:45.568900 containerd[2021]: time="2024-09-04T17:18:45.568147199Z" level=info msg="Ensure that sandbox c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97 in task-service has been cleanup successfully" Sep 4 17:18:45.597529 containerd[2021]: time="2024-09-04T17:18:45.597356027Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:18:45.731822 containerd[2021]: time="2024-09-04T17:18:45.731729003Z" level=error msg="StopPodSandbox for \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\" failed" error="failed to destroy network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.733669 kubelet[3213]: E0904 17:18:45.733310 3213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Sep 4 17:18:45.733669 kubelet[3213]: E0904 17:18:45.733481 3213 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686"} Sep 4 17:18:45.733669 kubelet[3213]: E0904 17:18:45.733556 3213 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e686dba-47ca-4edd-83cd-098666d46a40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:45.733669 kubelet[3213]: E0904 17:18:45.733615 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"9e686dba-47ca-4edd-83cd-098666d46a40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-844466c8d6-b5z26" podUID="9e686dba-47ca-4edd-83cd-098666d46a40" Sep 4 17:18:45.748823 containerd[2021]: time="2024-09-04T17:18:45.748306547Z" level=error msg="StopPodSandbox for \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\" failed" error="failed to destroy network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.749259 kubelet[3213]: E0904 17:18:45.748705 3213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:18:45.749259 kubelet[3213]: E0904 17:18:45.748787 3213 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97"} Sep 4 17:18:45.749259 kubelet[3213]: E0904 17:18:45.748998 3213 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00846270-5600-49b2-8fa0-755bfb6bc2f1\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:45.749259 kubelet[3213]: E0904 17:18:45.749062 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00846270-5600-49b2-8fa0-755bfb6bc2f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-hpjrs" podUID="00846270-5600-49b2-8fa0-755bfb6bc2f1" Sep 4 17:18:45.753559 containerd[2021]: time="2024-09-04T17:18:45.752724900Z" level=error msg="StopPodSandbox for \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\" failed" error="failed to destroy network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:45.753798 kubelet[3213]: E0904 17:18:45.753263 3213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:18:45.753798 kubelet[3213]: E0904 17:18:45.753336 3213 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333"} Sep 4 17:18:45.753798 kubelet[3213]: E0904 17:18:45.753405 3213 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:45.753798 kubelet[3213]: E0904 17:18:45.753469 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zdx2g" podUID="34e7c28a-1deb-4336-b6f8-a4fcfffd40c0" Sep 4 17:18:45.767017 containerd[2021]: time="2024-09-04T17:18:45.766900848Z" level=error msg="StopPodSandbox for \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\" failed" error="failed to destroy network for sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 
17:18:45.767881 kubelet[3213]: E0904 17:18:45.767425 3213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:18:45.767881 kubelet[3213]: E0904 17:18:45.767532 3213 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543"} Sep 4 17:18:45.767881 kubelet[3213]: E0904 17:18:45.767608 3213 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab091938-ea71-49ca-bf29-e750e1990329\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:45.767881 kubelet[3213]: E0904 17:18:45.767668 3213 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab091938-ea71-49ca-bf29-e750e1990329\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-r55tp" podUID="ab091938-ea71-49ca-bf29-e750e1990329" Sep 4 17:18:45.857256 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543-shm.mount: Deactivated successfully. Sep 4 17:18:45.857642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686-shm.mount: Deactivated successfully. Sep 4 17:18:45.857807 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333-shm.mount: Deactivated successfully. Sep 4 17:18:45.858058 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97-shm.mount: Deactivated successfully. Sep 4 17:18:49.649390 kubelet[3213]: I0904 17:18:49.648933 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:18:51.917908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553177349.mount: Deactivated successfully. Sep 4 17:18:51.985014 containerd[2021]: time="2024-09-04T17:18:51.984891414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:51.986624 containerd[2021]: time="2024-09-04T17:18:51.986581902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Sep 4 17:18:51.987534 containerd[2021]: time="2024-09-04T17:18:51.987372642Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:51.995350 containerd[2021]: time="2024-09-04T17:18:51.995272027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 6.397844828s" Sep 4 17:18:51.995350 containerd[2021]: time="2024-09-04T17:18:51.995348851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Sep 4 17:18:51.996514 containerd[2021]: time="2024-09-04T17:18:51.996066559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:52.028718 containerd[2021]: time="2024-09-04T17:18:52.026514243Z" level=info msg="CreateContainer within sandbox \"24d3dfbd7b72675ec396e67b6a32c76159a865b819c8ff73c29919229848179a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:18:52.058774 containerd[2021]: time="2024-09-04T17:18:52.058569687Z" level=info msg="CreateContainer within sandbox \"24d3dfbd7b72675ec396e67b6a32c76159a865b819c8ff73c29919229848179a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"70cdda3b40962e8d4d4fec969b19aee2bbd0f0d430edd59cbe6241714e102e05\"" Sep 4 17:18:52.062239 containerd[2021]: time="2024-09-04T17:18:52.062066979Z" level=info msg="StartContainer for \"70cdda3b40962e8d4d4fec969b19aee2bbd0f0d430edd59cbe6241714e102e05\"" Sep 4 17:18:52.135999 systemd[1]: Started cri-containerd-70cdda3b40962e8d4d4fec969b19aee2bbd0f0d430edd59cbe6241714e102e05.scope - libcontainer container 70cdda3b40962e8d4d4fec969b19aee2bbd0f0d430edd59cbe6241714e102e05. Sep 4 17:18:52.217051 containerd[2021]: time="2024-09-04T17:18:52.216680512Z" level=info msg="StartContainer for \"70cdda3b40962e8d4d4fec969b19aee2bbd0f0d430edd59cbe6241714e102e05\" returns successfully" Sep 4 17:18:52.350124 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Sep 4 17:18:52.350312 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Sep 4 17:18:52.685226 kubelet[3213]: I0904 17:18:52.685087 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-hzz2t" podStartSLOduration=2.254100496 podCreationTimestamp="2024-09-04 17:18:32 +0000 UTC" firstStartedPulling="2024-09-04 17:18:33.565008929 +0000 UTC m=+22.797936402" lastFinishedPulling="2024-09-04 17:18:51.995821051 +0000 UTC m=+41.228748524" observedRunningTime="2024-09-04 17:18:52.68037213 +0000 UTC m=+41.913299627" watchObservedRunningTime="2024-09-04 17:18:52.684912618 +0000 UTC m=+41.917840115" Sep 4 17:18:53.684929 systemd[1]: run-containerd-runc-k8s.io-70cdda3b40962e8d4d4fec969b19aee2bbd0f0d430edd59cbe6241714e102e05-runc.2tp5Jo.mount: Deactivated successfully. Sep 4 17:18:55.296033 kernel: bpftool[4664]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:18:55.706005 systemd-networkd[1847]: vxlan.calico: Link UP Sep 4 17:18:55.706022 systemd-networkd[1847]: vxlan.calico: Gained carrier Sep 4 17:18:55.718181 (udev-worker)[4687]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:55.764126 (udev-worker)[4698]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:55.770056 (udev-worker)[4685]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:56.284488 systemd[1]: Started sshd@7-172.31.22.59:22-139.178.89.65:38456.service - OpenSSH per-connection server daemon (139.178.89.65:38456). Sep 4 17:18:56.497071 sshd[4709]: Accepted publickey for core from 139.178.89.65 port 38456 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:56.505007 sshd[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:56.523435 systemd-logind[1989]: New session 8 of user core. Sep 4 17:18:56.540307 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 17:18:56.820590 sshd[4709]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:56.828375 systemd[1]: sshd@7-172.31.22.59:22-139.178.89.65:38456.service: Deactivated successfully. Sep 4 17:18:56.832005 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:18:56.833590 systemd-logind[1989]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:18:56.836436 systemd-logind[1989]: Removed session 8. Sep 4 17:18:56.989506 systemd-networkd[1847]: vxlan.calico: Gained IPv6LL Sep 4 17:18:57.166399 containerd[2021]: time="2024-09-04T17:18:57.165510296Z" level=info msg="StopPodSandbox for \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\"" Sep 4 17:18:57.168106 containerd[2021]: time="2024-09-04T17:18:57.167548388Z" level=info msg="StopPodSandbox for \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\"" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.323 [INFO][4780] k8s.go 608: Cleaning up netns ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.324 [INFO][4780] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" iface="eth0" netns="/var/run/netns/cni-9a8dd9a5-cb9f-bc28-6cb8-6a5198ae35bf" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.327 [INFO][4780] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" iface="eth0" netns="/var/run/netns/cni-9a8dd9a5-cb9f-bc28-6cb8-6a5198ae35bf" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.329 [INFO][4780] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" iface="eth0" netns="/var/run/netns/cni-9a8dd9a5-cb9f-bc28-6cb8-6a5198ae35bf" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.329 [INFO][4780] k8s.go 615: Releasing IP address(es) ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.330 [INFO][4780] utils.go 188: Calico CNI releasing IP address ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.419 [INFO][4792] ipam_plugin.go 417: Releasing address using handleID ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" HandleID="k8s-pod-network.595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.420 [INFO][4792] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.420 [INFO][4792] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.436 [WARNING][4792] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" HandleID="k8s-pod-network.595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.436 [INFO][4792] ipam_plugin.go 445: Releasing address using workloadID ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" HandleID="k8s-pod-network.595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.440 [INFO][4792] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:57.451077 containerd[2021]: 2024-09-04 17:18:57.446 [INFO][4780] k8s.go 621: Teardown processing complete. ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:18:57.453788 containerd[2021]: time="2024-09-04T17:18:57.453172438Z" level=info msg="TearDown network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\" successfully" Sep 4 17:18:57.453788 containerd[2021]: time="2024-09-04T17:18:57.453235822Z" level=info msg="StopPodSandbox for \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\" returns successfully" Sep 4 17:18:57.457002 systemd[1]: run-netns-cni\x2d9a8dd9a5\x2dcb9f\x2dbc28\x2d6cb8\x2d6a5198ae35bf.mount: Deactivated successfully. 
Sep 4 17:18:57.458206 containerd[2021]: time="2024-09-04T17:18:57.457901314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdx2g,Uid:34e7c28a-1deb-4336-b6f8-a4fcfffd40c0,Namespace:calico-system,Attempt:1,}" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.315 [INFO][4779] k8s.go 608: Cleaning up netns ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.316 [INFO][4779] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" iface="eth0" netns="/var/run/netns/cni-75ea96ee-aab5-a008-55a7-2308cbd89205" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.317 [INFO][4779] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" iface="eth0" netns="/var/run/netns/cni-75ea96ee-aab5-a008-55a7-2308cbd89205" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.318 [INFO][4779] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" iface="eth0" netns="/var/run/netns/cni-75ea96ee-aab5-a008-55a7-2308cbd89205" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.320 [INFO][4779] k8s.go 615: Releasing IP address(es) ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.320 [INFO][4779] utils.go 188: Calico CNI releasing IP address ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.419 [INFO][4791] ipam_plugin.go 417: Releasing address using handleID ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" HandleID="k8s-pod-network.c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.420 [INFO][4791] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.440 [INFO][4791] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.469 [WARNING][4791] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" HandleID="k8s-pod-network.c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.469 [INFO][4791] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" HandleID="k8s-pod-network.c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.474 [INFO][4791] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:57.485657 containerd[2021]: 2024-09-04 17:18:57.479 [INFO][4779] k8s.go 621: Teardown processing complete. ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:18:57.487465 containerd[2021]: time="2024-09-04T17:18:57.486184090Z" level=info msg="TearDown network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\" successfully" Sep 4 17:18:57.487465 containerd[2021]: time="2024-09-04T17:18:57.486297238Z" level=info msg="StopPodSandbox for \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\" returns successfully" Sep 4 17:18:57.495986 containerd[2021]: time="2024-09-04T17:18:57.494041282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hpjrs,Uid:00846270-5600-49b2-8fa0-755bfb6bc2f1,Namespace:kube-system,Attempt:1,}" Sep 4 17:18:57.508295 systemd[1]: run-netns-cni\x2d75ea96ee\x2daab5\x2da008\x2d55a7\x2d2308cbd89205.mount: Deactivated successfully. 
Sep 4 17:18:57.869831 systemd-networkd[1847]: calic952d4056f7: Link UP Sep 4 17:18:57.878751 systemd-networkd[1847]: calic952d4056f7: Gained carrier Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.621 [INFO][4804] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0 csi-node-driver- calico-system 34e7c28a-1deb-4336-b6f8-a4fcfffd40c0 749 0 2024-09-04 17:18:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-22-59 csi-node-driver-zdx2g eth0 default [] [] [kns.calico-system ksa.calico-system.default] calic952d4056f7 [] []}} ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Namespace="calico-system" Pod="csi-node-driver-zdx2g" WorkloadEndpoint="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.623 [INFO][4804] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Namespace="calico-system" Pod="csi-node-driver-zdx2g" WorkloadEndpoint="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.757 [INFO][4826] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" HandleID="k8s-pod-network.389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.781 [INFO][4826] ipam_plugin.go 270: Auto assigning IP ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" 
HandleID="k8s-pod-network.389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032fee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-59", "pod":"csi-node-driver-zdx2g", "timestamp":"2024-09-04 17:18:57.757475867 +0000 UTC"}, Hostname:"ip-172-31-22-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.781 [INFO][4826] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.782 [INFO][4826] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.782 [INFO][4826] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-59' Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.785 [INFO][4826] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" host="ip-172-31-22-59" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.795 [INFO][4826] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-59" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.804 [INFO][4826] ipam.go 489: Trying affinity for 192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.809 [INFO][4826] ipam.go 155: Attempting to load block cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.817 [INFO][4826] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.818 
[INFO][4826] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" host="ip-172-31-22-59" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.823 [INFO][4826] ipam.go 1685: Creating new handle: k8s-pod-network.389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862 Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.831 [INFO][4826] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" host="ip-172-31-22-59" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.841 [INFO][4826] ipam.go 1216: Successfully claimed IPs: [192.168.3.65/26] block=192.168.3.64/26 handle="k8s-pod-network.389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" host="ip-172-31-22-59" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.842 [INFO][4826] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.65/26] handle="k8s-pod-network.389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" host="ip-172-31-22-59" Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.842 [INFO][4826] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:18:57.940107 containerd[2021]: 2024-09-04 17:18:57.842 [INFO][4826] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.3.65/26] IPv6=[] ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" HandleID="k8s-pod-network.389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.942543 containerd[2021]: 2024-09-04 17:18:57.850 [INFO][4804] k8s.go 386: Populated endpoint ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Namespace="calico-system" Pod="csi-node-driver-zdx2g" WorkloadEndpoint="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"", Pod:"csi-node-driver-zdx2g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic952d4056f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:57.942543 containerd[2021]: 2024-09-04 17:18:57.850 [INFO][4804] k8s.go 387: Calico CNI using IPs: [192.168.3.65/32] ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Namespace="calico-system" Pod="csi-node-driver-zdx2g" WorkloadEndpoint="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.942543 containerd[2021]: 2024-09-04 17:18:57.850 [INFO][4804] dataplane_linux.go 68: Setting the host side veth name to calic952d4056f7 ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Namespace="calico-system" Pod="csi-node-driver-zdx2g" WorkloadEndpoint="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.942543 containerd[2021]: 2024-09-04 17:18:57.876 [INFO][4804] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Namespace="calico-system" Pod="csi-node-driver-zdx2g" WorkloadEndpoint="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.942543 containerd[2021]: 2024-09-04 17:18:57.883 [INFO][4804] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Namespace="calico-system" Pod="csi-node-driver-zdx2g" WorkloadEndpoint="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 32, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862", Pod:"csi-node-driver-zdx2g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic952d4056f7", MAC:"de:73:21:e1:51:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:57.942543 containerd[2021]: 2024-09-04 17:18:57.933 [INFO][4804] k8s.go 500: Wrote updated endpoint to datastore ContainerID="389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862" Namespace="calico-system" Pod="csi-node-driver-zdx2g" WorkloadEndpoint="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:18:57.993078 systemd-networkd[1847]: calif92c301b58b: Link UP Sep 4 17:18:57.995288 systemd-networkd[1847]: calif92c301b58b: Gained carrier Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.654 [INFO][4815] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0 coredns-5dd5756b68- kube-system 00846270-5600-49b2-8fa0-755bfb6bc2f1 748 0 2024-09-04 17:18:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-59 coredns-5dd5756b68-hpjrs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif92c301b58b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Namespace="kube-system" Pod="coredns-5dd5756b68-hpjrs" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.656 [INFO][4815] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Namespace="kube-system" Pod="coredns-5dd5756b68-hpjrs" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.765 [INFO][4830] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" HandleID="k8s-pod-network.576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.790 [INFO][4830] ipam_plugin.go 270: Auto assigning IP ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" HandleID="k8s-pod-network.576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035cab0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-59", "pod":"coredns-5dd5756b68-hpjrs", "timestamp":"2024-09-04 17:18:57.765035675 +0000 UTC"}, Hostname:"ip-172-31-22-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.790 [INFO][4830] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.842 [INFO][4830] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.843 [INFO][4830] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-59' Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.848 [INFO][4830] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" host="ip-172-31-22-59" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.870 [INFO][4830] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-59" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.892 [INFO][4830] ipam.go 489: Trying affinity for 192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.901 [INFO][4830] ipam.go 155: Attempting to load block cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.914 [INFO][4830] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.914 [INFO][4830] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" host="ip-172-31-22-59" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.932 [INFO][4830] ipam.go 1685: Creating new handle: k8s-pod-network.576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438 Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.944 [INFO][4830] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.64/26 
handle="k8s-pod-network.576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" host="ip-172-31-22-59" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.961 [INFO][4830] ipam.go 1216: Successfully claimed IPs: [192.168.3.66/26] block=192.168.3.64/26 handle="k8s-pod-network.576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" host="ip-172-31-22-59" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.962 [INFO][4830] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.66/26] handle="k8s-pod-network.576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" host="ip-172-31-22-59" Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.962 [INFO][4830] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:58.060455 containerd[2021]: 2024-09-04 17:18:57.963 [INFO][4830] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.3.66/26] IPv6=[] ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" HandleID="k8s-pod-network.576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:58.065145 containerd[2021]: 2024-09-04 17:18:57.973 [INFO][4815] k8s.go 386: Populated endpoint ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Namespace="kube-system" Pod="coredns-5dd5756b68-hpjrs" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"00846270-5600-49b2-8fa0-755bfb6bc2f1", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"", Pod:"coredns-5dd5756b68-hpjrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif92c301b58b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:58.065145 containerd[2021]: 2024-09-04 17:18:57.974 [INFO][4815] k8s.go 387: Calico CNI using IPs: [192.168.3.66/32] ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Namespace="kube-system" Pod="coredns-5dd5756b68-hpjrs" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:58.065145 containerd[2021]: 2024-09-04 17:18:57.976 [INFO][4815] dataplane_linux.go 68: Setting the host side veth name to calif92c301b58b ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Namespace="kube-system" Pod="coredns-5dd5756b68-hpjrs" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:58.065145 containerd[2021]: 2024-09-04 
17:18:57.996 [INFO][4815] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Namespace="kube-system" Pod="coredns-5dd5756b68-hpjrs" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:58.065145 containerd[2021]: 2024-09-04 17:18:58.007 [INFO][4815] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Namespace="kube-system" Pod="coredns-5dd5756b68-hpjrs" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"00846270-5600-49b2-8fa0-755bfb6bc2f1", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438", Pod:"coredns-5dd5756b68-hpjrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif92c301b58b", MAC:"6e:8d:10:0c:20:52", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:58.065145 containerd[2021]: 2024-09-04 17:18:58.050 [INFO][4815] k8s.go 500: Wrote updated endpoint to datastore ContainerID="576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438" Namespace="kube-system" Pod="coredns-5dd5756b68-hpjrs" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:18:58.097133 containerd[2021]: time="2024-09-04T17:18:58.095264397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:58.097133 containerd[2021]: time="2024-09-04T17:18:58.095469225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:58.097133 containerd[2021]: time="2024-09-04T17:18:58.095504673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:58.098097 containerd[2021]: time="2024-09-04T17:18:58.095801901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:58.174098 containerd[2021]: time="2024-09-04T17:18:58.173039373Z" level=info msg="StopPodSandbox for \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\"" Sep 4 17:18:58.191056 systemd[1]: Started cri-containerd-389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862.scope - libcontainer container 389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862. Sep 4 17:18:58.236484 containerd[2021]: time="2024-09-04T17:18:58.235935622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:58.236484 containerd[2021]: time="2024-09-04T17:18:58.236115310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:58.236484 containerd[2021]: time="2024-09-04T17:18:58.236154286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:58.238330 containerd[2021]: time="2024-09-04T17:18:58.236382538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:58.296658 systemd[1]: Started cri-containerd-576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438.scope - libcontainer container 576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438. 
Sep 4 17:18:58.349722 containerd[2021]: time="2024-09-04T17:18:58.349655578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zdx2g,Uid:34e7c28a-1deb-4336-b6f8-a4fcfffd40c0,Namespace:calico-system,Attempt:1,} returns sandbox id \"389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862\"" Sep 4 17:18:58.357398 containerd[2021]: time="2024-09-04T17:18:58.357346954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:18:58.409093 containerd[2021]: time="2024-09-04T17:18:58.408978406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hpjrs,Uid:00846270-5600-49b2-8fa0-755bfb6bc2f1,Namespace:kube-system,Attempt:1,} returns sandbox id \"576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438\"" Sep 4 17:18:58.432756 containerd[2021]: time="2024-09-04T17:18:58.432577042Z" level=info msg="CreateContainer within sandbox \"576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:18:58.495015 containerd[2021]: time="2024-09-04T17:18:58.494916587Z" level=info msg="CreateContainer within sandbox \"576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3150765e1782a2b71664e07f93497035cc24bb11ced044ef8a6e334ddf5b354\"" Sep 4 17:18:58.498706 containerd[2021]: time="2024-09-04T17:18:58.498615539Z" level=info msg="StartContainer for \"f3150765e1782a2b71664e07f93497035cc24bb11ced044ef8a6e334ddf5b354\"" Sep 4 17:18:58.596517 systemd[1]: run-containerd-runc-k8s.io-f3150765e1782a2b71664e07f93497035cc24bb11ced044ef8a6e334ddf5b354-runc.0CbJ1B.mount: Deactivated successfully. Sep 4 17:18:58.618222 systemd[1]: Started cri-containerd-f3150765e1782a2b71664e07f93497035cc24bb11ced044ef8a6e334ddf5b354.scope - libcontainer container f3150765e1782a2b71664e07f93497035cc24bb11ced044ef8a6e334ddf5b354. 
Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.464 [INFO][4936] k8s.go 608: Cleaning up netns ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.465 [INFO][4936] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" iface="eth0" netns="/var/run/netns/cni-ac6fec8d-6594-6d83-e09e-9692ec577c15" Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.465 [INFO][4936] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" iface="eth0" netns="/var/run/netns/cni-ac6fec8d-6594-6d83-e09e-9692ec577c15" Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.475 [INFO][4936] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" iface="eth0" netns="/var/run/netns/cni-ac6fec8d-6594-6d83-e09e-9692ec577c15" Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.475 [INFO][4936] k8s.go 615: Releasing IP address(es) ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.476 [INFO][4936] utils.go 188: Calico CNI releasing IP address ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.605 [INFO][4967] ipam_plugin.go 417: Releasing address using handleID ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" HandleID="k8s-pod-network.b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.608 [INFO][4967] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.611 [INFO][4967] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.632 [WARNING][4967] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" HandleID="k8s-pod-network.b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.632 [INFO][4967] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" HandleID="k8s-pod-network.b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.635 [INFO][4967] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:58.644039 containerd[2021]: 2024-09-04 17:18:58.639 [INFO][4936] k8s.go 621: Teardown processing complete. 
ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:18:58.646994 containerd[2021]: time="2024-09-04T17:18:58.645110316Z" level=info msg="TearDown network for sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\" successfully" Sep 4 17:18:58.646994 containerd[2021]: time="2024-09-04T17:18:58.645163980Z" level=info msg="StopPodSandbox for \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\" returns successfully" Sep 4 17:18:58.646994 containerd[2021]: time="2024-09-04T17:18:58.646644924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r55tp,Uid:ab091938-ea71-49ca-bf29-e750e1990329,Namespace:kube-system,Attempt:1,}" Sep 4 17:18:58.709709 containerd[2021]: time="2024-09-04T17:18:58.708273780Z" level=info msg="StartContainer for \"f3150765e1782a2b71664e07f93497035cc24bb11ced044ef8a6e334ddf5b354\" returns successfully" Sep 4 17:18:58.978864 systemd-networkd[1847]: cali51aafd934c9: Link UP Sep 4 17:18:58.982813 systemd-networkd[1847]: cali51aafd934c9: Gained carrier Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.828 [INFO][5006] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0 coredns-5dd5756b68- kube-system ab091938-ea71-49ca-bf29-e750e1990329 764 0 2024-09-04 17:18:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-59 coredns-5dd5756b68-r55tp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali51aafd934c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Namespace="kube-system" Pod="coredns-5dd5756b68-r55tp" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-" Sep 4 
17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.828 [INFO][5006] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Namespace="kube-system" Pod="coredns-5dd5756b68-r55tp" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.888 [INFO][5024] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" HandleID="k8s-pod-network.21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.910 [INFO][5024] ipam_plugin.go 270: Auto assigning IP ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" HandleID="k8s-pod-network.21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002450c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-59", "pod":"coredns-5dd5756b68-r55tp", "timestamp":"2024-09-04 17:18:58.888452557 +0000 UTC"}, Hostname:"ip-172-31-22-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.911 [INFO][5024] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.911 [INFO][5024] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.911 [INFO][5024] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-59' Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.915 [INFO][5024] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" host="ip-172-31-22-59" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.923 [INFO][5024] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-59" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.932 [INFO][5024] ipam.go 489: Trying affinity for 192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.936 [INFO][5024] ipam.go 155: Attempting to load block cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.942 [INFO][5024] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.942 [INFO][5024] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" host="ip-172-31-22-59" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.946 [INFO][5024] ipam.go 1685: Creating new handle: k8s-pod-network.21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026 Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.957 [INFO][5024] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" host="ip-172-31-22-59" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.967 [INFO][5024] ipam.go 1216: Successfully claimed IPs: [192.168.3.67/26] block=192.168.3.64/26 
handle="k8s-pod-network.21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" host="ip-172-31-22-59" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.967 [INFO][5024] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.67/26] handle="k8s-pod-network.21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" host="ip-172-31-22-59" Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.967 [INFO][5024] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:59.017034 containerd[2021]: 2024-09-04 17:18:58.967 [INFO][5024] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.3.67/26] IPv6=[] ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" HandleID="k8s-pod-network.21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:59.027392 containerd[2021]: 2024-09-04 17:18:58.971 [INFO][5006] k8s.go 386: Populated endpoint ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Namespace="kube-system" Pod="coredns-5dd5756b68-r55tp" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ab091938-ea71-49ca-bf29-e750e1990329", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"", Pod:"coredns-5dd5756b68-r55tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51aafd934c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:59.027392 containerd[2021]: 2024-09-04 17:18:58.971 [INFO][5006] k8s.go 387: Calico CNI using IPs: [192.168.3.67/32] ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Namespace="kube-system" Pod="coredns-5dd5756b68-r55tp" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:59.027392 containerd[2021]: 2024-09-04 17:18:58.971 [INFO][5006] dataplane_linux.go 68: Setting the host side veth name to cali51aafd934c9 ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Namespace="kube-system" Pod="coredns-5dd5756b68-r55tp" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:59.027392 containerd[2021]: 2024-09-04 17:18:58.979 [INFO][5006] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Namespace="kube-system" Pod="coredns-5dd5756b68-r55tp" 
WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:59.027392 containerd[2021]: 2024-09-04 17:18:58.981 [INFO][5006] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Namespace="kube-system" Pod="coredns-5dd5756b68-r55tp" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ab091938-ea71-49ca-bf29-e750e1990329", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026", Pod:"coredns-5dd5756b68-r55tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51aafd934c9", MAC:"46:fa:bf:80:99:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:59.027392 containerd[2021]: 2024-09-04 17:18:59.006 [INFO][5006] k8s.go 500: Wrote updated endpoint to datastore ContainerID="21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026" Namespace="kube-system" Pod="coredns-5dd5756b68-r55tp" WorkloadEndpoint="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:18:59.039186 systemd-networkd[1847]: calic952d4056f7: Gained IPv6LL Sep 4 17:18:59.079094 containerd[2021]: time="2024-09-04T17:18:59.077887018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:59.079094 containerd[2021]: time="2024-09-04T17:18:59.077997514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:59.079094 containerd[2021]: time="2024-09-04T17:18:59.078052486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:59.080926 containerd[2021]: time="2024-09-04T17:18:59.080456710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:59.134221 systemd[1]: Started cri-containerd-21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026.scope - libcontainer container 21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026. 
Sep 4 17:18:59.235752 containerd[2021]: time="2024-09-04T17:18:59.235564726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r55tp,Uid:ab091938-ea71-49ca-bf29-e750e1990329,Namespace:kube-system,Attempt:1,} returns sandbox id \"21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026\"" Sep 4 17:18:59.248388 containerd[2021]: time="2024-09-04T17:18:59.248267651Z" level=info msg="CreateContainer within sandbox \"21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:18:59.276436 containerd[2021]: time="2024-09-04T17:18:59.276199151Z" level=info msg="CreateContainer within sandbox \"21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcfe1b724f3ca6ac203b695fe4643e28e1002edddd6204d08c8a77df293f1c69\"" Sep 4 17:18:59.279512 containerd[2021]: time="2024-09-04T17:18:59.277861331Z" level=info msg="StartContainer for \"dcfe1b724f3ca6ac203b695fe4643e28e1002edddd6204d08c8a77df293f1c69\"" Sep 4 17:18:59.337387 systemd[1]: Started cri-containerd-dcfe1b724f3ca6ac203b695fe4643e28e1002edddd6204d08c8a77df293f1c69.scope - libcontainer container dcfe1b724f3ca6ac203b695fe4643e28e1002edddd6204d08c8a77df293f1c69. Sep 4 17:18:59.448238 containerd[2021]: time="2024-09-04T17:18:59.448012308Z" level=info msg="StartContainer for \"dcfe1b724f3ca6ac203b695fe4643e28e1002edddd6204d08c8a77df293f1c69\" returns successfully" Sep 4 17:18:59.475671 systemd[1]: run-netns-cni\x2dac6fec8d\x2d6594\x2d6d83\x2de09e\x2d9692ec577c15.mount: Deactivated successfully. 
Sep 4 17:18:59.489309 systemd-networkd[1847]: calif92c301b58b: Gained IPv6LL Sep 4 17:18:59.881275 kubelet[3213]: I0904 17:18:59.880786 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-r55tp" podStartSLOduration=35.880729598 podCreationTimestamp="2024-09-04 17:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:59.800813653 +0000 UTC m=+49.033741162" watchObservedRunningTime="2024-09-04 17:18:59.880729598 +0000 UTC m=+49.113657071" Sep 4 17:18:59.884387 kubelet[3213]: I0904 17:18:59.882618 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hpjrs" podStartSLOduration=35.882523922 podCreationTimestamp="2024-09-04 17:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:59.879091946 +0000 UTC m=+49.112019443" watchObservedRunningTime="2024-09-04 17:18:59.882523922 +0000 UTC m=+49.115451419" Sep 4 17:18:59.980050 containerd[2021]: time="2024-09-04T17:18:59.979935998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:59.983452 containerd[2021]: time="2024-09-04T17:18:59.983336366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:18:59.984633 containerd[2021]: time="2024-09-04T17:18:59.984548786Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:59.995747 containerd[2021]: time="2024-09-04T17:18:59.995657858Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:00.000553 containerd[2021]: time="2024-09-04T17:19:00.000474574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.642431224s" Sep 4 17:19:00.000553 containerd[2021]: time="2024-09-04T17:19:00.000545770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:19:00.010564 containerd[2021]: time="2024-09-04T17:19:00.009709738Z" level=info msg="CreateContainer within sandbox \"389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:19:00.086914 containerd[2021]: time="2024-09-04T17:19:00.086830547Z" level=info msg="CreateContainer within sandbox \"389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2dd85343e58b8f1de759766258431b39edb919f830632de11164ac2136e3c2e5\"" Sep 4 17:19:00.102835 containerd[2021]: time="2024-09-04T17:19:00.102724595Z" level=info msg="StartContainer for \"2dd85343e58b8f1de759766258431b39edb919f830632de11164ac2136e3c2e5\"" Sep 4 17:19:00.125336 systemd-networkd[1847]: cali51aafd934c9: Gained IPv6LL Sep 4 17:19:00.168007 containerd[2021]: time="2024-09-04T17:19:00.164811455Z" level=info msg="StopPodSandbox for \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\"" Sep 4 17:19:00.263663 systemd[1]: Started 
cri-containerd-2dd85343e58b8f1de759766258431b39edb919f830632de11164ac2136e3c2e5.scope - libcontainer container 2dd85343e58b8f1de759766258431b39edb919f830632de11164ac2136e3c2e5. Sep 4 17:19:00.459530 systemd[1]: run-containerd-runc-k8s.io-2dd85343e58b8f1de759766258431b39edb919f830632de11164ac2136e3c2e5-runc.xKBKaz.mount: Deactivated successfully. Sep 4 17:19:00.505179 containerd[2021]: time="2024-09-04T17:19:00.505081033Z" level=info msg="StartContainer for \"2dd85343e58b8f1de759766258431b39edb919f830632de11164ac2136e3c2e5\" returns successfully" Sep 4 17:19:00.533900 containerd[2021]: time="2024-09-04T17:19:00.531716161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.404 [INFO][5155] k8s.go 608: Cleaning up netns ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.405 [INFO][5155] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" iface="eth0" netns="/var/run/netns/cni-321320ac-5639-e375-3301-8b21d794b1cc" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.406 [INFO][5155] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" iface="eth0" netns="/var/run/netns/cni-321320ac-5639-e375-3301-8b21d794b1cc" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.408 [INFO][5155] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" iface="eth0" netns="/var/run/netns/cni-321320ac-5639-e375-3301-8b21d794b1cc" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.409 [INFO][5155] k8s.go 615: Releasing IP address(es) ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.409 [INFO][5155] utils.go 188: Calico CNI releasing IP address ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.502 [INFO][5173] ipam_plugin.go 417: Releasing address using handleID ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" HandleID="k8s-pod-network.4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.504 [INFO][5173] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.504 [INFO][5173] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.544 [WARNING][5173] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" HandleID="k8s-pod-network.4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.544 [INFO][5173] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" HandleID="k8s-pod-network.4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.551 [INFO][5173] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:00.563881 containerd[2021]: 2024-09-04 17:19:00.557 [INFO][5155] k8s.go 621: Teardown processing complete. ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Sep 4 17:19:00.565292 containerd[2021]: time="2024-09-04T17:19:00.565227493Z" level=info msg="TearDown network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\" successfully" Sep 4 17:19:00.565292 containerd[2021]: time="2024-09-04T17:19:00.565284625Z" level=info msg="StopPodSandbox for \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\" returns successfully" Sep 4 17:19:00.581246 systemd[1]: run-netns-cni\x2d321320ac\x2d5639\x2de375\x2d3301\x2d8b21d794b1cc.mount: Deactivated successfully. 
Sep 4 17:19:00.603750 containerd[2021]: time="2024-09-04T17:19:00.603684421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844466c8d6-b5z26,Uid:9e686dba-47ca-4edd-83cd-098666d46a40,Namespace:calico-system,Attempt:1,}" Sep 4 17:19:01.036689 systemd-networkd[1847]: calif94e9194684: Link UP Sep 4 17:19:01.039781 systemd-networkd[1847]: calif94e9194684: Gained carrier Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.820 [INFO][5191] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0 calico-kube-controllers-844466c8d6- calico-system 9e686dba-47ca-4edd-83cd-098666d46a40 801 0 2024-09-04 17:18:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:844466c8d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-59 calico-kube-controllers-844466c8d6-b5z26 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif94e9194684 [] []}} ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Namespace="calico-system" Pod="calico-kube-controllers-844466c8d6-b5z26" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.820 [INFO][5191] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Namespace="calico-system" Pod="calico-kube-controllers-844466c8d6-b5z26" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.942 [INFO][5201] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" HandleID="k8s-pod-network.cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.964 [INFO][5201] ipam_plugin.go 270: Auto assigning IP ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" HandleID="k8s-pod-network.cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000500600), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-59", "pod":"calico-kube-controllers-844466c8d6-b5z26", "timestamp":"2024-09-04 17:19:00.942215211 +0000 UTC"}, Hostname:"ip-172-31-22-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.964 [INFO][5201] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.965 [INFO][5201] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.965 [INFO][5201] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-59' Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.969 [INFO][5201] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" host="ip-172-31-22-59" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.978 [INFO][5201] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-59" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.988 [INFO][5201] ipam.go 489: Trying affinity for 192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.992 [INFO][5201] ipam.go 155: Attempting to load block cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.998 [INFO][5201] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:00.998 [INFO][5201] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" host="ip-172-31-22-59" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:01.002 [INFO][5201] ipam.go 1685: Creating new handle: k8s-pod-network.cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89 Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:01.009 [INFO][5201] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" host="ip-172-31-22-59" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:01.019 [INFO][5201] ipam.go 1216: Successfully claimed IPs: [192.168.3.68/26] block=192.168.3.64/26 
handle="k8s-pod-network.cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" host="ip-172-31-22-59" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:01.019 [INFO][5201] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.68/26] handle="k8s-pod-network.cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" host="ip-172-31-22-59" Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:01.019 [INFO][5201] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:01.084368 containerd[2021]: 2024-09-04 17:19:01.021 [INFO][5201] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.3.68/26] IPv6=[] ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" HandleID="k8s-pod-network.cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:01.088058 containerd[2021]: 2024-09-04 17:19:01.027 [INFO][5191] k8s.go 386: Populated endpoint ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Namespace="calico-system" Pod="calico-kube-controllers-844466c8d6-b5z26" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0", GenerateName:"calico-kube-controllers-844466c8d6-", Namespace:"calico-system", SelfLink:"", UID:"9e686dba-47ca-4edd-83cd-098666d46a40", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844466c8d6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"", Pod:"calico-kube-controllers-844466c8d6-b5z26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif94e9194684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:01.088058 containerd[2021]: 2024-09-04 17:19:01.028 [INFO][5191] k8s.go 387: Calico CNI using IPs: [192.168.3.68/32] ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Namespace="calico-system" Pod="calico-kube-controllers-844466c8d6-b5z26" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:01.088058 containerd[2021]: 2024-09-04 17:19:01.028 [INFO][5191] dataplane_linux.go 68: Setting the host side veth name to calif94e9194684 ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Namespace="calico-system" Pod="calico-kube-controllers-844466c8d6-b5z26" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:01.088058 containerd[2021]: 2024-09-04 17:19:01.038 [INFO][5191] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Namespace="calico-system" Pod="calico-kube-controllers-844466c8d6-b5z26" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:01.088058 containerd[2021]: 2024-09-04 17:19:01.046 [INFO][5191] k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Namespace="calico-system" Pod="calico-kube-controllers-844466c8d6-b5z26" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0", GenerateName:"calico-kube-controllers-844466c8d6-", Namespace:"calico-system", SelfLink:"", UID:"9e686dba-47ca-4edd-83cd-098666d46a40", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844466c8d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89", Pod:"calico-kube-controllers-844466c8d6-b5z26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif94e9194684", MAC:"e6:a8:31:ad:6a:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:01.088058 containerd[2021]: 2024-09-04 17:19:01.078 [INFO][5191] k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89" Namespace="calico-system" Pod="calico-kube-controllers-844466c8d6-b5z26" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0" Sep 4 17:19:01.137925 containerd[2021]: time="2024-09-04T17:19:01.137515320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:01.137925 containerd[2021]: time="2024-09-04T17:19:01.137625972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:01.137925 containerd[2021]: time="2024-09-04T17:19:01.137686008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:01.137925 containerd[2021]: time="2024-09-04T17:19:01.138264708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:01.193786 systemd[1]: Started cri-containerd-cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89.scope - libcontainer container cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89. Sep 4 17:19:01.284363 containerd[2021]: time="2024-09-04T17:19:01.284279233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844466c8d6-b5z26,Uid:9e686dba-47ca-4edd-83cd-098666d46a40,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89\"" Sep 4 17:19:01.865902 systemd[1]: Started sshd@8-172.31.22.59:22-139.178.89.65:37528.service - OpenSSH per-connection server daemon (139.178.89.65:37528). 
Sep 4 17:19:02.069267 sshd[5275]: Accepted publickey for core from 139.178.89.65 port 37528 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:02.076516 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:02.094387 systemd-logind[1989]: New session 9 of user core. Sep 4 17:19:02.108301 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:19:02.159705 containerd[2021]: time="2024-09-04T17:19:02.159516373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:02.161386 containerd[2021]: time="2024-09-04T17:19:02.161282101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:19:02.162814 containerd[2021]: time="2024-09-04T17:19:02.162757105Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:02.168894 containerd[2021]: time="2024-09-04T17:19:02.168796285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:02.170766 containerd[2021]: time="2024-09-04T17:19:02.170546425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.63876148s" Sep 4 17:19:02.170766 containerd[2021]: time="2024-09-04T17:19:02.170617501Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:19:02.178334 containerd[2021]: time="2024-09-04T17:19:02.178045525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:19:02.181633 containerd[2021]: time="2024-09-04T17:19:02.181293781Z" level=info msg="CreateContainer within sandbox \"389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:19:02.221542 containerd[2021]: time="2024-09-04T17:19:02.221392429Z" level=info msg="CreateContainer within sandbox \"389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9e4d30f386d0535df8488cd6396a5aa6c25b9d8eb64a281286ef47417caba96e\"" Sep 4 17:19:02.229321 containerd[2021]: time="2024-09-04T17:19:02.228269521Z" level=info msg="StartContainer for \"9e4d30f386d0535df8488cd6396a5aa6c25b9d8eb64a281286ef47417caba96e\"" Sep 4 17:19:02.333325 systemd[1]: Started cri-containerd-9e4d30f386d0535df8488cd6396a5aa6c25b9d8eb64a281286ef47417caba96e.scope - libcontainer container 9e4d30f386d0535df8488cd6396a5aa6c25b9d8eb64a281286ef47417caba96e. Sep 4 17:19:02.430810 systemd-networkd[1847]: calif94e9194684: Gained IPv6LL Sep 4 17:19:02.491463 containerd[2021]: time="2024-09-04T17:19:02.491323203Z" level=info msg="StartContainer for \"9e4d30f386d0535df8488cd6396a5aa6c25b9d8eb64a281286ef47417caba96e\" returns successfully" Sep 4 17:19:02.598916 sshd[5275]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:02.605133 systemd-logind[1989]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:19:02.606365 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 4 17:19:02.608777 systemd[1]: sshd@8-172.31.22.59:22-139.178.89.65:37528.service: Deactivated successfully. Sep 4 17:19:02.624276 systemd-logind[1989]: Removed session 9. Sep 4 17:19:03.445395 kubelet[3213]: I0904 17:19:03.445282 3213 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:19:03.445395 kubelet[3213]: I0904 17:19:03.445408 3213 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:19:04.452788 ntpd[1981]: Listen normally on 8 vxlan.calico 192.168.3.64:123 Sep 4 17:19:04.452929 ntpd[1981]: Listen normally on 9 vxlan.calico [fe80::643b:f3ff:fe46:36d6%4]:123 Sep 4 17:19:04.453792 ntpd[1981]: 4 Sep 17:19:04 ntpd[1981]: Listen normally on 8 vxlan.calico 192.168.3.64:123 Sep 4 17:19:04.453792 ntpd[1981]: 4 Sep 17:19:04 ntpd[1981]: Listen normally on 9 vxlan.calico [fe80::643b:f3ff:fe46:36d6%4]:123 Sep 4 17:19:04.453792 ntpd[1981]: 4 Sep 17:19:04 ntpd[1981]: Listen normally on 10 calic952d4056f7 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:19:04.453792 ntpd[1981]: 4 Sep 17:19:04 ntpd[1981]: Listen normally on 11 calif92c301b58b [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:19:04.453792 ntpd[1981]: 4 Sep 17:19:04 ntpd[1981]: Listen normally on 12 cali51aafd934c9 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:19:04.453792 ntpd[1981]: 4 Sep 17:19:04 ntpd[1981]: Listen normally on 13 calif94e9194684 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:19:04.453057 ntpd[1981]: Listen normally on 10 calic952d4056f7 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:19:04.453129 ntpd[1981]: Listen normally on 11 calif92c301b58b [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:19:04.453197 ntpd[1981]: Listen normally on 12 cali51aafd934c9 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:19:04.453275 ntpd[1981]: Listen normally on 13 calif94e9194684 
[fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:19:07.051089 containerd[2021]: time="2024-09-04T17:19:07.050994029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:07.054977 containerd[2021]: time="2024-09-04T17:19:07.053267477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:19:07.054977 containerd[2021]: time="2024-09-04T17:19:07.054746681Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:07.060888 containerd[2021]: time="2024-09-04T17:19:07.060824561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:07.063204 containerd[2021]: time="2024-09-04T17:19:07.063044225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 4.884477612s" Sep 4 17:19:07.063204 containerd[2021]: time="2024-09-04T17:19:07.063152465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:19:07.093595 containerd[2021]: time="2024-09-04T17:19:07.093181398Z" level=info msg="CreateContainer within sandbox \"cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:19:07.148378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906811782.mount: Deactivated successfully. Sep 4 17:19:07.156753 containerd[2021]: time="2024-09-04T17:19:07.156552270Z" level=info msg="CreateContainer within sandbox \"cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"200b5062b025de6d82a2d38a82ec5c03f3b176de601816db241a735e612fd5d3\"" Sep 4 17:19:07.158810 containerd[2021]: time="2024-09-04T17:19:07.158740854Z" level=info msg="StartContainer for \"200b5062b025de6d82a2d38a82ec5c03f3b176de601816db241a735e612fd5d3\"" Sep 4 17:19:07.246380 systemd[1]: Started cri-containerd-200b5062b025de6d82a2d38a82ec5c03f3b176de601816db241a735e612fd5d3.scope - libcontainer container 200b5062b025de6d82a2d38a82ec5c03f3b176de601816db241a735e612fd5d3. Sep 4 17:19:07.397186 containerd[2021]: time="2024-09-04T17:19:07.396928963Z" level=info msg="StartContainer for \"200b5062b025de6d82a2d38a82ec5c03f3b176de601816db241a735e612fd5d3\" returns successfully" Sep 4 17:19:07.652595 systemd[1]: Started sshd@9-172.31.22.59:22-139.178.89.65:35864.service - OpenSSH per-connection server daemon (139.178.89.65:35864). 
Sep 4 17:19:07.877215 sshd[5381]: Accepted publickey for core from 139.178.89.65 port 35864 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:07.885556 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:07.897143 kubelet[3213]: I0904 17:19:07.896849 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-zdx2g" podStartSLOduration=32.079652859 podCreationTimestamp="2024-09-04 17:18:32 +0000 UTC" firstStartedPulling="2024-09-04 17:18:58.354579622 +0000 UTC m=+47.587507107" lastFinishedPulling="2024-09-04 17:19:02.171662509 +0000 UTC m=+51.404590006" observedRunningTime="2024-09-04 17:19:02.835128448 +0000 UTC m=+52.068055945" watchObservedRunningTime="2024-09-04 17:19:07.896735758 +0000 UTC m=+57.129663267" Sep 4 17:19:07.907869 systemd-logind[1989]: New session 10 of user core. Sep 4 17:19:07.918113 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:19:08.177484 kubelet[3213]: I0904 17:19:08.177260 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-844466c8d6-b5z26" podStartSLOduration=30.400694690999998 podCreationTimestamp="2024-09-04 17:18:32 +0000 UTC" firstStartedPulling="2024-09-04 17:19:01.287292505 +0000 UTC m=+50.520219978" lastFinishedPulling="2024-09-04 17:19:07.063781733 +0000 UTC m=+56.296709206" observedRunningTime="2024-09-04 17:19:07.898894318 +0000 UTC m=+57.131821827" watchObservedRunningTime="2024-09-04 17:19:08.177183919 +0000 UTC m=+57.410111536" Sep 4 17:19:08.314314 sshd[5381]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:08.326510 systemd[1]: sshd@9-172.31.22.59:22-139.178.89.65:35864.service: Deactivated successfully. Sep 4 17:19:08.332916 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:19:08.336232 systemd-logind[1989]: Session 10 logged out. Waiting for processes to exit. 
Sep 4 17:19:08.369173 systemd[1]: Started sshd@10-172.31.22.59:22-139.178.89.65:35878.service - OpenSSH per-connection server daemon (139.178.89.65:35878). Sep 4 17:19:08.375491 systemd-logind[1989]: Removed session 10. Sep 4 17:19:08.565414 sshd[5417]: Accepted publickey for core from 139.178.89.65 port 35878 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:08.568611 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:08.581569 systemd-logind[1989]: New session 11 of user core. Sep 4 17:19:08.593840 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:19:09.439260 sshd[5417]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:09.450507 systemd[1]: sshd@10-172.31.22.59:22-139.178.89.65:35878.service: Deactivated successfully. Sep 4 17:19:09.460990 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:19:09.465525 systemd-logind[1989]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:19:09.492523 systemd[1]: Started sshd@11-172.31.22.59:22-139.178.89.65:35886.service - OpenSSH per-connection server daemon (139.178.89.65:35886). Sep 4 17:19:09.499176 systemd-logind[1989]: Removed session 11. Sep 4 17:19:09.714097 sshd[5428]: Accepted publickey for core from 139.178.89.65 port 35886 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:09.714310 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:09.735584 systemd-logind[1989]: New session 12 of user core. Sep 4 17:19:09.740876 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:19:10.089263 sshd[5428]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:10.104372 systemd[1]: sshd@11-172.31.22.59:22-139.178.89.65:35886.service: Deactivated successfully. Sep 4 17:19:10.116558 systemd[1]: session-12.scope: Deactivated successfully. 
Sep 4 17:19:10.124144 systemd-logind[1989]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:19:10.128497 systemd-logind[1989]: Removed session 12. Sep 4 17:19:11.092066 containerd[2021]: time="2024-09-04T17:19:11.092000985Z" level=info msg="StopPodSandbox for \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\"" Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.196 [WARNING][5452] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"00846270-5600-49b2-8fa0-755bfb6bc2f1", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438", Pod:"coredns-5dd5756b68-hpjrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif92c301b58b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.197 [INFO][5452] k8s.go 608: Cleaning up netns ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.197 [INFO][5452] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" iface="eth0" netns="" Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.197 [INFO][5452] k8s.go 615: Releasing IP address(es) ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.197 [INFO][5452] utils.go 188: Calico CNI releasing IP address ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.241 [INFO][5461] ipam_plugin.go 417: Releasing address using handleID ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" HandleID="k8s-pod-network.c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.242 [INFO][5461] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.242 [INFO][5461] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.254 [WARNING][5461] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" HandleID="k8s-pod-network.c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.254 [INFO][5461] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" HandleID="k8s-pod-network.c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.257 [INFO][5461] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:11.263414 containerd[2021]: 2024-09-04 17:19:11.260 [INFO][5452] k8s.go 621: Teardown processing complete. 
ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:19:11.265192 containerd[2021]: time="2024-09-04T17:19:11.263480338Z" level=info msg="TearDown network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\" successfully" Sep 4 17:19:11.265192 containerd[2021]: time="2024-09-04T17:19:11.263544874Z" level=info msg="StopPodSandbox for \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\" returns successfully" Sep 4 17:19:11.265192 containerd[2021]: time="2024-09-04T17:19:11.264383098Z" level=info msg="RemovePodSandbox for \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\"" Sep 4 17:19:11.265192 containerd[2021]: time="2024-09-04T17:19:11.264431302Z" level=info msg="Forcibly stopping sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\"" Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.337 [WARNING][5481] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"00846270-5600-49b2-8fa0-755bfb6bc2f1", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"576d38a8ce3b9c47f17b12e271a87c7f2c7509f8d0618f500ff56848602e1438", Pod:"coredns-5dd5756b68-hpjrs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif92c301b58b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.338 [INFO][5481] k8s.go 608: Cleaning up 
netns ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.338 [INFO][5481] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" iface="eth0" netns="" Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.338 [INFO][5481] k8s.go 615: Releasing IP address(es) ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.338 [INFO][5481] utils.go 188: Calico CNI releasing IP address ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.393 [INFO][5487] ipam_plugin.go 417: Releasing address using handleID ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" HandleID="k8s-pod-network.c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.394 [INFO][5487] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.394 [INFO][5487] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.412 [WARNING][5487] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" HandleID="k8s-pod-network.c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.412 [INFO][5487] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" HandleID="k8s-pod-network.c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--hpjrs-eth0" Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.417 [INFO][5487] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:11.427232 containerd[2021]: 2024-09-04 17:19:11.422 [INFO][5481] k8s.go 621: Teardown processing complete. ContainerID="c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97" Sep 4 17:19:11.431319 containerd[2021]: time="2024-09-04T17:19:11.427790087Z" level=info msg="TearDown network for sandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\" successfully" Sep 4 17:19:11.433589 containerd[2021]: time="2024-09-04T17:19:11.433503695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:19:11.433738 containerd[2021]: time="2024-09-04T17:19:11.433620371Z" level=info msg="RemovePodSandbox \"c676971c2c301223536311a83843a5b255fa539f48e1c3c76f778c904c158e97\" returns successfully" Sep 4 17:19:11.434737 containerd[2021]: time="2024-09-04T17:19:11.434501123Z" level=info msg="StopPodSandbox for \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\"" Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.515 [WARNING][5506] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ab091938-ea71-49ca-bf29-e750e1990329", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026", Pod:"coredns-5dd5756b68-r55tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51aafd934c9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.516 [INFO][5506] k8s.go 608: Cleaning up netns ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.516 [INFO][5506] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" iface="eth0" netns="" Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.516 [INFO][5506] k8s.go 615: Releasing IP address(es) ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.516 [INFO][5506] utils.go 188: Calico CNI releasing IP address ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.564 [INFO][5512] ipam_plugin.go 417: Releasing address using handleID ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" HandleID="k8s-pod-network.b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.564 [INFO][5512] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.564 [INFO][5512] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.577 [WARNING][5512] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" HandleID="k8s-pod-network.b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.577 [INFO][5512] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" HandleID="k8s-pod-network.b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0" Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.579 [INFO][5512] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:11.586050 containerd[2021]: 2024-09-04 17:19:11.583 [INFO][5506] k8s.go 621: Teardown processing complete. 
ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Sep 4 17:19:11.588352 containerd[2021]: time="2024-09-04T17:19:11.586089384Z" level=info msg="TearDown network for sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\" successfully" Sep 4 17:19:11.588352 containerd[2021]: time="2024-09-04T17:19:11.586128204Z" level=info msg="StopPodSandbox for \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\" returns successfully" Sep 4 17:19:11.588352 containerd[2021]: time="2024-09-04T17:19:11.587650860Z" level=info msg="RemovePodSandbox for \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\"" Sep 4 17:19:11.588352 containerd[2021]: time="2024-09-04T17:19:11.587704020Z" level=info msg="Forcibly stopping sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\"" Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.664 [WARNING][5530] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ab091938-ea71-49ca-bf29-e750e1990329", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"21bceec82a4e5c0715b307e365c98c4a523aed5ef3f23756a5fff6268c130026", Pod:"coredns-5dd5756b68-r55tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51aafd934c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.664 [INFO][5530] k8s.go 608: Cleaning up netns ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543"
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.664 [INFO][5530] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" iface="eth0" netns=""
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.665 [INFO][5530] k8s.go 615: Releasing IP address(es) ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543"
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.665 [INFO][5530] utils.go 188: Calico CNI releasing IP address ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543"
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.714 [INFO][5536] ipam_plugin.go 417: Releasing address using handleID ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" HandleID="k8s-pod-network.b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0"
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.714 [INFO][5536] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.714 [INFO][5536] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.728 [WARNING][5536] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" HandleID="k8s-pod-network.b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0"
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.728 [INFO][5536] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" HandleID="k8s-pod-network.b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543" Workload="ip--172--31--22--59-k8s-coredns--5dd5756b68--r55tp-eth0"
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.732 [INFO][5536] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:19:11.740032 containerd[2021]: 2024-09-04 17:19:11.735 [INFO][5530] k8s.go 621: Teardown processing complete. ContainerID="b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543"
Sep 4 17:19:11.740032 containerd[2021]: time="2024-09-04T17:19:11.739974889Z" level=info msg="TearDown network for sandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\" successfully"
Sep 4 17:19:11.746574 containerd[2021]: time="2024-09-04T17:19:11.746389873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:19:11.746981 containerd[2021]: time="2024-09-04T17:19:11.746768953Z" level=info msg="RemovePodSandbox \"b56227c4e0cbfb4f844da8f08f9f56e76d9f05714fcadc9f4759cdd38aa98543\" returns successfully"
Sep 4 17:19:11.749086 containerd[2021]: time="2024-09-04T17:19:11.748773505Z" level=info msg="StopPodSandbox for \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\""
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.844 [WARNING][5555] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862", Pod:"csi-node-driver-zdx2g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic952d4056f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.844 [INFO][5555] k8s.go 608: Cleaning up netns ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333"
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.844 [INFO][5555] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" iface="eth0" netns=""
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.844 [INFO][5555] k8s.go 615: Releasing IP address(es) ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333"
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.845 [INFO][5555] utils.go 188: Calico CNI releasing IP address ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333"
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.912 [INFO][5561] ipam_plugin.go 417: Releasing address using handleID ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" HandleID="k8s-pod-network.595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0"
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.913 [INFO][5561] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.913 [INFO][5561] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.930 [WARNING][5561] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" HandleID="k8s-pod-network.595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0"
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.931 [INFO][5561] ipam_plugin.go 445: Releasing address using workloadID ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" HandleID="k8s-pod-network.595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0"
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.934 [INFO][5561] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:19:11.944674 containerd[2021]: 2024-09-04 17:19:11.940 [INFO][5555] k8s.go 621: Teardown processing complete. ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333"
Sep 4 17:19:11.948083 containerd[2021]: time="2024-09-04T17:19:11.944714918Z" level=info msg="TearDown network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\" successfully"
Sep 4 17:19:11.948083 containerd[2021]: time="2024-09-04T17:19:11.944755334Z" level=info msg="StopPodSandbox for \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\" returns successfully"
Sep 4 17:19:11.948083 containerd[2021]: time="2024-09-04T17:19:11.947280206Z" level=info msg="RemovePodSandbox for \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\""
Sep 4 17:19:11.948083 containerd[2021]: time="2024-09-04T17:19:11.947376194Z" level=info msg="Forcibly stopping sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\""
Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.027 [WARNING][5579] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34e7c28a-1deb-4336-b6f8-a4fcfffd40c0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"389a7e11737933374079c951036da5a9f26a1559433e2487a41f0947c7542862", Pod:"csi-node-driver-zdx2g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calic952d4056f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.028 [INFO][5579] k8s.go 608: Cleaning up netns ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.028 [INFO][5579] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" iface="eth0" netns="" Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.028 [INFO][5579] k8s.go 615: Releasing IP address(es) ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.028 [INFO][5579] utils.go 188: Calico CNI releasing IP address ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.092 [INFO][5585] ipam_plugin.go 417: Releasing address using handleID ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" HandleID="k8s-pod-network.595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.093 [INFO][5585] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.093 [INFO][5585] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.123 [WARNING][5585] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" HandleID="k8s-pod-network.595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.123 [INFO][5585] ipam_plugin.go 445: Releasing address using workloadID ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" HandleID="k8s-pod-network.595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Workload="ip--172--31--22--59-k8s-csi--node--driver--zdx2g-eth0" Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.126 [INFO][5585] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:12.132479 containerd[2021]: 2024-09-04 17:19:12.129 [INFO][5579] k8s.go 621: Teardown processing complete. ContainerID="595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333" Sep 4 17:19:12.133758 containerd[2021]: time="2024-09-04T17:19:12.132482123Z" level=info msg="TearDown network for sandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\" successfully" Sep 4 17:19:12.147971 containerd[2021]: time="2024-09-04T17:19:12.146473451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:19:12.147971 containerd[2021]: time="2024-09-04T17:19:12.146693087Z" level=info msg="RemovePodSandbox \"595c3cac20baac24f61a351ad993d0ad7ee3604b582ee8e203fcc9f069d6f333\" returns successfully"
Sep 4 17:19:12.148932 containerd[2021]: time="2024-09-04T17:19:12.148879547Z" level=info msg="StopPodSandbox for \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\""
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.230 [WARNING][5603] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0", GenerateName:"calico-kube-controllers-844466c8d6-", Namespace:"calico-system", SelfLink:"", UID:"9e686dba-47ca-4edd-83cd-098666d46a40", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844466c8d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89", Pod:"calico-kube-controllers-844466c8d6-b5z26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif94e9194684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.231 [INFO][5603] k8s.go 608: Cleaning up netns ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686"
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.231 [INFO][5603] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" iface="eth0" netns=""
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.231 [INFO][5603] k8s.go 615: Releasing IP address(es) ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686"
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.231 [INFO][5603] utils.go 188: Calico CNI releasing IP address ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686"
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.291 [INFO][5609] ipam_plugin.go 417: Releasing address using handleID ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" HandleID="k8s-pod-network.4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0"
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.291 [INFO][5609] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.291 [INFO][5609] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.311 [WARNING][5609] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" HandleID="k8s-pod-network.4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0"
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.311 [INFO][5609] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" HandleID="k8s-pod-network.4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0"
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.315 [INFO][5609] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:19:12.322199 containerd[2021]: 2024-09-04 17:19:12.318 [INFO][5603] k8s.go 621: Teardown processing complete. ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686"
Sep 4 17:19:12.323358 containerd[2021]: time="2024-09-04T17:19:12.323231339Z" level=info msg="TearDown network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\" successfully"
Sep 4 17:19:12.323358 containerd[2021]: time="2024-09-04T17:19:12.323341931Z" level=info msg="StopPodSandbox for \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\" returns successfully"
Sep 4 17:19:12.324432 containerd[2021]: time="2024-09-04T17:19:12.324250175Z" level=info msg="RemovePodSandbox for \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\""
Sep 4 17:19:12.324432 containerd[2021]: time="2024-09-04T17:19:12.324310751Z" level=info msg="Forcibly stopping sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\""
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.404 [WARNING][5627] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0", GenerateName:"calico-kube-controllers-844466c8d6-", Namespace:"calico-system", SelfLink:"", UID:"9e686dba-47ca-4edd-83cd-098666d46a40", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844466c8d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"cd3df15b77773a1176af3a3561d4e5a680d2c81e70c862345c7047f939ee2c89", Pod:"calico-kube-controllers-844466c8d6-b5z26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif94e9194684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.404 [INFO][5627] k8s.go 608: Cleaning up netns ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686"
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.405 [INFO][5627] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" iface="eth0" netns=""
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.405 [INFO][5627] k8s.go 615: Releasing IP address(es) ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686"
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.405 [INFO][5627] utils.go 188: Calico CNI releasing IP address ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686"
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.459 [INFO][5633] ipam_plugin.go 417: Releasing address using handleID ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" HandleID="k8s-pod-network.4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0"
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.460 [INFO][5633] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.460 [INFO][5633] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.476 [WARNING][5633] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" HandleID="k8s-pod-network.4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0"
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.476 [INFO][5633] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" HandleID="k8s-pod-network.4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686" Workload="ip--172--31--22--59-k8s-calico--kube--controllers--844466c8d6--b5z26-eth0"
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.479 [INFO][5633] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:19:12.487146 containerd[2021]: 2024-09-04 17:19:12.482 [INFO][5627] k8s.go 621: Teardown processing complete. ContainerID="4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686"
Sep 4 17:19:12.487146 containerd[2021]: time="2024-09-04T17:19:12.486889824Z" level=info msg="TearDown network for sandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\" successfully"
Sep 4 17:19:12.493780 containerd[2021]: time="2024-09-04T17:19:12.493684536Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:19:12.494161 containerd[2021]: time="2024-09-04T17:19:12.493800696Z" level=info msg="RemovePodSandbox \"4dcd6a948b00abc855564fef6a88b75729cba0a7e3e4f163b986b1292fb93686\" returns successfully"
Sep 4 17:19:15.141466 systemd[1]: Started sshd@12-172.31.22.59:22-139.178.89.65:35900.service - OpenSSH per-connection server daemon (139.178.89.65:35900).
Sep 4 17:19:15.322409 sshd[5659]: Accepted publickey for core from 139.178.89.65 port 35900 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:15.326362 sshd[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:15.340134 systemd-logind[1989]: New session 13 of user core.
Sep 4 17:19:15.348305 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 17:19:15.681119 sshd[5659]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:15.687774 systemd-logind[1989]: Session 13 logged out. Waiting for processes to exit.
Sep 4 17:19:15.691496 systemd[1]: sshd@12-172.31.22.59:22-139.178.89.65:35900.service: Deactivated successfully.
Sep 4 17:19:15.697135 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 17:19:15.704091 systemd-logind[1989]: Removed session 13.
Sep 4 17:19:20.729609 systemd[1]: Started sshd@13-172.31.22.59:22-139.178.89.65:41216.service - OpenSSH per-connection server daemon (139.178.89.65:41216).
Sep 4 17:19:20.915407 sshd[5689]: Accepted publickey for core from 139.178.89.65 port 41216 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:20.919654 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:20.933078 systemd-logind[1989]: New session 14 of user core.
Sep 4 17:19:20.942851 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 17:19:21.275032 sshd[5689]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:21.283178 systemd[1]: sshd@13-172.31.22.59:22-139.178.89.65:41216.service: Deactivated successfully.
Sep 4 17:19:21.289455 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 17:19:21.291769 systemd-logind[1989]: Session 14 logged out. Waiting for processes to exit.
Sep 4 17:19:21.294912 systemd-logind[1989]: Removed session 14.
Sep 4 17:19:26.317744 systemd[1]: Started sshd@14-172.31.22.59:22-139.178.89.65:41228.service - OpenSSH per-connection server daemon (139.178.89.65:41228).
Sep 4 17:19:26.513539 sshd[5727]: Accepted publickey for core from 139.178.89.65 port 41228 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:26.517591 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:26.531216 systemd-logind[1989]: New session 15 of user core.
Sep 4 17:19:26.540295 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 17:19:26.823509 sshd[5727]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:26.831822 systemd[1]: sshd@14-172.31.22.59:22-139.178.89.65:41228.service: Deactivated successfully.
Sep 4 17:19:26.837505 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 17:19:26.842698 systemd-logind[1989]: Session 15 logged out. Waiting for processes to exit.
Sep 4 17:19:26.845889 systemd-logind[1989]: Removed session 15.
Sep 4 17:19:31.869569 systemd[1]: Started sshd@15-172.31.22.59:22-139.178.89.65:58816.service - OpenSSH per-connection server daemon (139.178.89.65:58816).
Sep 4 17:19:32.049019 sshd[5745]: Accepted publickey for core from 139.178.89.65 port 58816 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:32.053486 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:32.071496 systemd-logind[1989]: New session 16 of user core.
Sep 4 17:19:32.079503 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 17:19:32.348109 sshd[5745]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:32.354927 systemd[1]: sshd@15-172.31.22.59:22-139.178.89.65:58816.service: Deactivated successfully.
Sep 4 17:19:32.360563 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 17:19:32.362922 systemd-logind[1989]: Session 16 logged out. Waiting for processes to exit.
Sep 4 17:19:32.367480 systemd-logind[1989]: Removed session 16.
Sep 4 17:19:32.395194 systemd[1]: Started sshd@16-172.31.22.59:22-139.178.89.65:58832.service - OpenSSH per-connection server daemon (139.178.89.65:58832).
Sep 4 17:19:32.592229 sshd[5758]: Accepted publickey for core from 139.178.89.65 port 58832 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:32.596724 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:32.613223 systemd-logind[1989]: New session 17 of user core.
Sep 4 17:19:32.619335 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 17:19:33.192522 sshd[5758]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:33.205466 systemd[1]: sshd@16-172.31.22.59:22-139.178.89.65:58832.service: Deactivated successfully.
Sep 4 17:19:33.218006 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 17:19:33.224391 systemd-logind[1989]: Session 17 logged out. Waiting for processes to exit.
Sep 4 17:19:33.253507 systemd[1]: Started sshd@17-172.31.22.59:22-139.178.89.65:58848.service - OpenSSH per-connection server daemon (139.178.89.65:58848).
Sep 4 17:19:33.257301 systemd-logind[1989]: Removed session 17.
Sep 4 17:19:33.450093 sshd[5769]: Accepted publickey for core from 139.178.89.65 port 58848 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:33.454223 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:33.468362 systemd-logind[1989]: New session 18 of user core.
Sep 4 17:19:33.472658 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 17:19:35.498412 sshd[5769]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:35.512536 systemd[1]: sshd@17-172.31.22.59:22-139.178.89.65:58848.service: Deactivated successfully.
Sep 4 17:19:35.522628 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 17:19:35.525187 systemd[1]: session-18.scope: Consumed 1.226s CPU time.
Sep 4 17:19:35.530607 systemd-logind[1989]: Session 18 logged out. Waiting for processes to exit.
Sep 4 17:19:35.596618 systemd[1]: Started sshd@18-172.31.22.59:22-139.178.89.65:58860.service - OpenSSH per-connection server daemon (139.178.89.65:58860).
Sep 4 17:19:35.599093 systemd-logind[1989]: Removed session 18.
Sep 4 17:19:35.796262 sshd[5790]: Accepted publickey for core from 139.178.89.65 port 58860 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:35.798859 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:35.812822 systemd-logind[1989]: New session 19 of user core.
Sep 4 17:19:35.820419 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 17:19:36.806615 sshd[5790]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:36.824292 systemd[1]: sshd@18-172.31.22.59:22-139.178.89.65:58860.service: Deactivated successfully.
Sep 4 17:19:36.836311 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 17:19:36.838858 systemd-logind[1989]: Session 19 logged out. Waiting for processes to exit.
Sep 4 17:19:36.863623 systemd[1]: Started sshd@19-172.31.22.59:22-139.178.89.65:58866.service - OpenSSH per-connection server daemon (139.178.89.65:58866).
Sep 4 17:19:36.866240 systemd-logind[1989]: Removed session 19.
Sep 4 17:19:37.052174 sshd[5812]: Accepted publickey for core from 139.178.89.65 port 58866 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:37.056445 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:37.075852 systemd-logind[1989]: New session 20 of user core.
Sep 4 17:19:37.084298 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:19:37.417482 sshd[5812]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:37.427529 systemd[1]: sshd@19-172.31.22.59:22-139.178.89.65:58866.service: Deactivated successfully.
Sep 4 17:19:37.432927 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:19:37.435626 systemd-logind[1989]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:19:37.439697 systemd-logind[1989]: Removed session 20.
Sep 4 17:19:42.462760 systemd[1]: Started sshd@20-172.31.22.59:22-139.178.89.65:40256.service - OpenSSH per-connection server daemon (139.178.89.65:40256).
Sep 4 17:19:42.653177 sshd[5832]: Accepted publickey for core from 139.178.89.65 port 40256 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:42.656239 sshd[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:42.666497 systemd-logind[1989]: New session 21 of user core.
Sep 4 17:19:42.677257 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:19:43.001701 sshd[5832]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:43.008494 systemd[1]: sshd@20-172.31.22.59:22-139.178.89.65:40256.service: Deactivated successfully.
Sep 4 17:19:43.014432 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:19:43.020423 systemd-logind[1989]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:19:43.024180 systemd-logind[1989]: Removed session 21.
Sep 4 17:19:48.044332 systemd[1]: Started sshd@21-172.31.22.59:22-139.178.89.65:48946.service - OpenSSH per-connection server daemon (139.178.89.65:48946).
Sep 4 17:19:48.237782 sshd[5891]: Accepted publickey for core from 139.178.89.65 port 48946 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:48.240595 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:48.249217 systemd-logind[1989]: New session 22 of user core.
Sep 4 17:19:48.258194 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:19:48.533984 sshd[5891]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:48.542528 systemd[1]: sshd@21-172.31.22.59:22-139.178.89.65:48946.service: Deactivated successfully. Sep 4 17:19:48.549482 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:19:48.552279 systemd-logind[1989]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:19:48.557434 systemd-logind[1989]: Removed session 22. Sep 4 17:19:51.499887 kubelet[3213]: I0904 17:19:51.499797 3213 topology_manager.go:215] "Topology Admit Handler" podUID="0f66c2fc-a962-4f69-bfa4-ee05713eb4fb" podNamespace="calico-apiserver" podName="calico-apiserver-858797dd54-qstwm" Sep 4 17:19:51.521972 systemd[1]: Created slice kubepods-besteffort-pod0f66c2fc_a962_4f69_bfa4_ee05713eb4fb.slice - libcontainer container kubepods-besteffort-pod0f66c2fc_a962_4f69_bfa4_ee05713eb4fb.slice. Sep 4 17:19:51.677099 kubelet[3213]: I0904 17:19:51.676268 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0f66c2fc-a962-4f69-bfa4-ee05713eb4fb-calico-apiserver-certs\") pod \"calico-apiserver-858797dd54-qstwm\" (UID: \"0f66c2fc-a962-4f69-bfa4-ee05713eb4fb\") " pod="calico-apiserver/calico-apiserver-858797dd54-qstwm" Sep 4 17:19:51.677099 kubelet[3213]: I0904 17:19:51.676411 3213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkmd4\" (UniqueName: \"kubernetes.io/projected/0f66c2fc-a962-4f69-bfa4-ee05713eb4fb-kube-api-access-wkmd4\") pod \"calico-apiserver-858797dd54-qstwm\" (UID: \"0f66c2fc-a962-4f69-bfa4-ee05713eb4fb\") " pod="calico-apiserver/calico-apiserver-858797dd54-qstwm" Sep 4 17:19:51.778763 kubelet[3213]: E0904 17:19:51.778204 3213 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret 
"calico-apiserver-certs" not found Sep 4 17:19:51.778763 kubelet[3213]: E0904 17:19:51.778357 3213 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f66c2fc-a962-4f69-bfa4-ee05713eb4fb-calico-apiserver-certs podName:0f66c2fc-a962-4f69-bfa4-ee05713eb4fb nodeName:}" failed. No retries permitted until 2024-09-04 17:19:52.278319079 +0000 UTC m=+101.511246552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0f66c2fc-a962-4f69-bfa4-ee05713eb4fb-calico-apiserver-certs") pod "calico-apiserver-858797dd54-qstwm" (UID: "0f66c2fc-a962-4f69-bfa4-ee05713eb4fb") : secret "calico-apiserver-certs" not found Sep 4 17:19:52.436363 containerd[2021]: time="2024-09-04T17:19:52.436225527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858797dd54-qstwm,Uid:0f66c2fc-a962-4f69-bfa4-ee05713eb4fb,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:19:52.808636 systemd-networkd[1847]: calie4c3f98b854: Link UP Sep 4 17:19:52.811360 systemd-networkd[1847]: calie4c3f98b854: Gained carrier Sep 4 17:19:52.822362 (udev-worker)[5929]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.574 [INFO][5909] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0 calico-apiserver-858797dd54- calico-apiserver 0f66c2fc-a962-4f69-bfa4-ee05713eb4fb 1102 0 2024-09-04 17:19:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:858797dd54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-59 calico-apiserver-858797dd54-qstwm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie4c3f98b854 [] []}} ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Namespace="calico-apiserver" Pod="calico-apiserver-858797dd54-qstwm" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.575 [INFO][5909] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Namespace="calico-apiserver" Pod="calico-apiserver-858797dd54-qstwm" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.682 [INFO][5920] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" HandleID="k8s-pod-network.484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Workload="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.703 [INFO][5920] ipam_plugin.go 270: Auto assigning IP ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" 
HandleID="k8s-pod-network.484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Workload="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000364970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-59", "pod":"calico-apiserver-858797dd54-qstwm", "timestamp":"2024-09-04 17:19:52.6828424 +0000 UTC"}, Hostname:"ip-172-31-22-59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.704 [INFO][5920] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.704 [INFO][5920] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.704 [INFO][5920] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-59' Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.707 [INFO][5920] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" host="ip-172-31-22-59" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.720 [INFO][5920] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-59" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.733 [INFO][5920] ipam.go 489: Trying affinity for 192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.739 [INFO][5920] ipam.go 155: Attempting to load block cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.746 [INFO][5920] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ip-172-31-22-59" Sep 4 17:19:52.862093 containerd[2021]: 
2024-09-04 17:19:52.746 [INFO][5920] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" host="ip-172-31-22-59" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.751 [INFO][5920] ipam.go 1685: Creating new handle: k8s-pod-network.484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019 Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.761 [INFO][5920] ipam.go 1203: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" host="ip-172-31-22-59" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.787 [INFO][5920] ipam.go 1216: Successfully claimed IPs: [192.168.3.69/26] block=192.168.3.64/26 handle="k8s-pod-network.484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" host="ip-172-31-22-59" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.788 [INFO][5920] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.69/26] handle="k8s-pod-network.484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" host="ip-172-31-22-59" Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.788 [INFO][5920] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:19:52.862093 containerd[2021]: 2024-09-04 17:19:52.788 [INFO][5920] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.3.69/26] IPv6=[] ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" HandleID="k8s-pod-network.484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Workload="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" Sep 4 17:19:52.864399 containerd[2021]: 2024-09-04 17:19:52.796 [INFO][5909] k8s.go 386: Populated endpoint ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Namespace="calico-apiserver" Pod="calico-apiserver-858797dd54-qstwm" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0", GenerateName:"calico-apiserver-858797dd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f66c2fc-a962-4f69-bfa4-ee05713eb4fb", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858797dd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"", Pod:"calico-apiserver-858797dd54-qstwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4c3f98b854", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:52.864399 containerd[2021]: 2024-09-04 17:19:52.797 [INFO][5909] k8s.go 387: Calico CNI using IPs: [192.168.3.69/32] ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Namespace="calico-apiserver" Pod="calico-apiserver-858797dd54-qstwm" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" Sep 4 17:19:52.864399 containerd[2021]: 2024-09-04 17:19:52.797 [INFO][5909] dataplane_linux.go 68: Setting the host side veth name to calie4c3f98b854 ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Namespace="calico-apiserver" Pod="calico-apiserver-858797dd54-qstwm" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" Sep 4 17:19:52.864399 containerd[2021]: 2024-09-04 17:19:52.813 [INFO][5909] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Namespace="calico-apiserver" Pod="calico-apiserver-858797dd54-qstwm" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" Sep 4 17:19:52.864399 containerd[2021]: 2024-09-04 17:19:52.817 [INFO][5909] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Namespace="calico-apiserver" Pod="calico-apiserver-858797dd54-qstwm" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0", GenerateName:"calico-apiserver-858797dd54-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"0f66c2fc-a962-4f69-bfa4-ee05713eb4fb", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858797dd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-59", ContainerID:"484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019", Pod:"calico-apiserver-858797dd54-qstwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4c3f98b854", MAC:"76:99:17:53:7f:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:52.864399 containerd[2021]: 2024-09-04 17:19:52.851 [INFO][5909] k8s.go 500: Wrote updated endpoint to datastore ContainerID="484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019" Namespace="calico-apiserver" Pod="calico-apiserver-858797dd54-qstwm" WorkloadEndpoint="ip--172--31--22--59-k8s-calico--apiserver--858797dd54--qstwm-eth0" Sep 4 17:19:52.982790 containerd[2021]: time="2024-09-04T17:19:52.982516301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:52.982790 containerd[2021]: time="2024-09-04T17:19:52.982653941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:52.982790 containerd[2021]: time="2024-09-04T17:19:52.982732961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:52.986117 containerd[2021]: time="2024-09-04T17:19:52.984162545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:53.078055 systemd[1]: Started cri-containerd-484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019.scope - libcontainer container 484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019. Sep 4 17:19:53.258363 containerd[2021]: time="2024-09-04T17:19:53.258264315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858797dd54-qstwm,Uid:0f66c2fc-a962-4f69-bfa4-ee05713eb4fb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019\"" Sep 4 17:19:53.263815 containerd[2021]: time="2024-09-04T17:19:53.263109951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:19:53.575762 systemd[1]: Started sshd@22-172.31.22.59:22-139.178.89.65:48962.service - OpenSSH per-connection server daemon (139.178.89.65:48962). Sep 4 17:19:53.774771 sshd[6007]: Accepted publickey for core from 139.178.89.65 port 48962 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:53.778562 sshd[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:53.789031 systemd-logind[1989]: New session 23 of user core. Sep 4 17:19:53.801298 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 17:19:54.088577 sshd[6007]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:54.102243 systemd[1]: sshd@22-172.31.22.59:22-139.178.89.65:48962.service: Deactivated successfully. Sep 4 17:19:54.107896 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:19:54.111023 systemd-logind[1989]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:19:54.114013 systemd-logind[1989]: Removed session 23. Sep 4 17:19:54.269734 systemd-networkd[1847]: calie4c3f98b854: Gained IPv6LL Sep 4 17:19:56.399121 containerd[2021]: time="2024-09-04T17:19:56.398796306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:56.400791 containerd[2021]: time="2024-09-04T17:19:56.400618434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Sep 4 17:19:56.401822 containerd[2021]: time="2024-09-04T17:19:56.401723058Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:56.407183 containerd[2021]: time="2024-09-04T17:19:56.407059542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:56.410226 containerd[2021]: time="2024-09-04T17:19:56.409810638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 3.146626299s" Sep 4 17:19:56.410226 containerd[2021]: 
time="2024-09-04T17:19:56.409884186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Sep 4 17:19:56.416734 containerd[2021]: time="2024-09-04T17:19:56.416665963Z" level=info msg="CreateContainer within sandbox \"484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:19:56.444725 containerd[2021]: time="2024-09-04T17:19:56.444623167Z" level=info msg="CreateContainer within sandbox \"484742eddb950ca12a967e151bf4a28b9ffbabce514693325a70eba099325019\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"30baa4e79af797b5330b0f136e4c9906894f2658d701edf22b64230a8178b02b\"" Sep 4 17:19:56.447731 containerd[2021]: time="2024-09-04T17:19:56.447346543Z" level=info msg="StartContainer for \"30baa4e79af797b5330b0f136e4c9906894f2658d701edf22b64230a8178b02b\"" Sep 4 17:19:56.453174 ntpd[1981]: Listen normally on 14 calie4c3f98b854 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:19:56.454822 ntpd[1981]: 4 Sep 17:19:56 ntpd[1981]: Listen normally on 14 calie4c3f98b854 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:19:56.525671 systemd[1]: Started cri-containerd-30baa4e79af797b5330b0f136e4c9906894f2658d701edf22b64230a8178b02b.scope - libcontainer container 30baa4e79af797b5330b0f136e4c9906894f2658d701edf22b64230a8178b02b. 
Sep 4 17:19:56.623475 containerd[2021]: time="2024-09-04T17:19:56.623237240Z" level=info msg="StartContainer for \"30baa4e79af797b5330b0f136e4c9906894f2658d701edf22b64230a8178b02b\" returns successfully" Sep 4 17:19:57.126813 kubelet[3213]: I0904 17:19:57.125040 3213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-858797dd54-qstwm" podStartSLOduration=2.975982387 podCreationTimestamp="2024-09-04 17:19:51 +0000 UTC" firstStartedPulling="2024-09-04 17:19:53.261856803 +0000 UTC m=+102.494784264" lastFinishedPulling="2024-09-04 17:19:56.410807562 +0000 UTC m=+105.643735035" observedRunningTime="2024-09-04 17:19:57.12421833 +0000 UTC m=+106.357145839" watchObservedRunningTime="2024-09-04 17:19:57.124933158 +0000 UTC m=+106.357860643" Sep 4 17:19:59.139630 systemd[1]: Started sshd@23-172.31.22.59:22-139.178.89.65:34960.service - OpenSSH per-connection server daemon (139.178.89.65:34960). Sep 4 17:19:59.336590 sshd[6080]: Accepted publickey for core from 139.178.89.65 port 34960 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:59.341419 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:59.356806 systemd-logind[1989]: New session 24 of user core. Sep 4 17:19:59.366416 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:19:59.732119 sshd[6080]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:59.748710 systemd[1]: sshd@23-172.31.22.59:22-139.178.89.65:34960.service: Deactivated successfully. Sep 4 17:19:59.761298 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:19:59.768074 systemd-logind[1989]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:19:59.775730 systemd-logind[1989]: Removed session 24. Sep 4 17:20:04.778639 systemd[1]: Started sshd@24-172.31.22.59:22-139.178.89.65:34970.service - OpenSSH per-connection server daemon (139.178.89.65:34970). 
Sep 4 17:20:04.984501 sshd[6095]: Accepted publickey for core from 139.178.89.65 port 34970 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:20:04.990685 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:20:05.006715 systemd-logind[1989]: New session 25 of user core. Sep 4 17:20:05.015371 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:20:05.333491 sshd[6095]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:05.349437 systemd[1]: sshd@24-172.31.22.59:22-139.178.89.65:34970.service: Deactivated successfully. Sep 4 17:20:05.359870 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:20:05.366786 systemd-logind[1989]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:20:05.371197 systemd-logind[1989]: Removed session 25. Sep 4 17:20:10.378203 systemd[1]: Started sshd@25-172.31.22.59:22-139.178.89.65:52320.service - OpenSSH per-connection server daemon (139.178.89.65:52320). Sep 4 17:20:10.575779 sshd[6113]: Accepted publickey for core from 139.178.89.65 port 52320 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:20:10.580625 sshd[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:20:10.600104 systemd-logind[1989]: New session 26 of user core. Sep 4 17:20:10.608409 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:20:10.960598 sshd[6113]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:10.974098 systemd[1]: sshd@25-172.31.22.59:22-139.178.89.65:52320.service: Deactivated successfully. Sep 4 17:20:10.985342 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:20:10.991412 systemd-logind[1989]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:20:10.997005 systemd-logind[1989]: Removed session 26.