Jul 1 23:59:03.246871 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 1 23:59:03.246929 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 1 23:59:03.246957 kernel: KASLR disabled due to lack of seed
Jul 1 23:59:03.246975 kernel: efi: EFI v2.7 by EDK II
Jul 1 23:59:03.246991 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18
Jul 1 23:59:03.247007 kernel: ACPI: Early table checksum verification disabled
Jul 1 23:59:03.247025 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 1 23:59:03.247042 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 1 23:59:03.247059 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 1 23:59:03.247075 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 1 23:59:03.247097 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 1 23:59:03.247115 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 1 23:59:03.247131 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 1 23:59:03.247148 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 1 23:59:03.247167 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 1 23:59:03.247189 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 1 23:59:03.247207 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 1 23:59:03.247223 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 1 23:59:03.247240 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 1 23:59:03.247258 kernel: printk: bootconsole [uart0] enabled
Jul 1 23:59:03.247275 kernel: NUMA: Failed to initialise from firmware
Jul 1 23:59:03.247325 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 1 23:59:03.247357 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jul 1 23:59:03.247376 kernel: Zone ranges:
Jul 1 23:59:03.247393 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 1 23:59:03.247409 kernel: DMA32 empty
Jul 1 23:59:03.247437 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 1 23:59:03.247454 kernel: Movable zone start for each node
Jul 1 23:59:03.247471 kernel: Early memory node ranges
Jul 1 23:59:03.247487 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 1 23:59:03.247503 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 1 23:59:03.247520 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 1 23:59:03.247537 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 1 23:59:03.247553 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 1 23:59:03.247569 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 1 23:59:03.247586 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 1 23:59:03.247602 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 1 23:59:03.247620 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 1 23:59:03.247648 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 1 23:59:03.247667 kernel: psci: probing for conduit method from ACPI.
Jul 1 23:59:03.247693 kernel: psci: PSCIv1.0 detected in firmware.
Jul 1 23:59:03.247711 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 1 23:59:03.247732 kernel: psci: Trusted OS migration not required
Jul 1 23:59:03.247757 kernel: psci: SMC Calling Convention v1.1
Jul 1 23:59:03.247776 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 1 23:59:03.247795 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 1 23:59:03.247814 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 1 23:59:03.247833 kernel: Detected PIPT I-cache on CPU0
Jul 1 23:59:03.247852 kernel: CPU features: detected: GIC system register CPU interface
Jul 1 23:59:03.247871 kernel: CPU features: detected: Spectre-v2
Jul 1 23:59:03.247889 kernel: CPU features: detected: Spectre-v3a
Jul 1 23:59:03.247907 kernel: CPU features: detected: Spectre-BHB
Jul 1 23:59:03.247926 kernel: CPU features: detected: ARM erratum 1742098
Jul 1 23:59:03.247944 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 1 23:59:03.247969 kernel: alternatives: applying boot alternatives
Jul 1 23:59:03.247991 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 1 23:59:03.248010 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 1 23:59:03.248028 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 1 23:59:03.248046 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 1 23:59:03.248064 kernel: Fallback order for Node 0: 0
Jul 1 23:59:03.248081 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 1 23:59:03.248098 kernel: Policy zone: Normal
Jul 1 23:59:03.248116 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 1 23:59:03.248133 kernel: software IO TLB: area num 2.
Jul 1 23:59:03.248151 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 1 23:59:03.248175 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Jul 1 23:59:03.248194 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 1 23:59:03.248211 kernel: trace event string verifier disabled
Jul 1 23:59:03.248228 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 1 23:59:03.248248 kernel: rcu: RCU event tracing is enabled.
Jul 1 23:59:03.248267 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 1 23:59:03.248285 kernel: Trampoline variant of Tasks RCU enabled.
Jul 1 23:59:03.248374 kernel: Tracing variant of Tasks RCU enabled.
Jul 1 23:59:03.248395 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 1 23:59:03.248413 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 1 23:59:03.248432 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 1 23:59:03.248460 kernel: GICv3: 96 SPIs implemented
Jul 1 23:59:03.248478 kernel: GICv3: 0 Extended SPIs implemented
Jul 1 23:59:03.248496 kernel: Root IRQ handler: gic_handle_irq
Jul 1 23:59:03.248513 kernel: GICv3: GICv3 features: 16 PPIs
Jul 1 23:59:03.248531 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 1 23:59:03.248548 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 1 23:59:03.248566 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 1 23:59:03.248583 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 1 23:59:03.248601 kernel: GICv3: using LPI property table @0x00000004000e0000
Jul 1 23:59:03.248618 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 1 23:59:03.248636 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Jul 1 23:59:03.248654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 1 23:59:03.248678 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 1 23:59:03.248696 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 1 23:59:03.248713 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 1 23:59:03.248731 kernel: Console: colour dummy device 80x25
Jul 1 23:59:03.248749 kernel: printk: console [tty1] enabled
Jul 1 23:59:03.248767 kernel: ACPI: Core revision 20230628
Jul 1 23:59:03.248786 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 1 23:59:03.248803 kernel: pid_max: default: 32768 minimum: 301
Jul 1 23:59:03.248821 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 1 23:59:03.248838 kernel: SELinux: Initializing.
Jul 1 23:59:03.248863 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 1 23:59:03.248881 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 1 23:59:03.248898 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 1 23:59:03.248916 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 1 23:59:03.248935 kernel: rcu: Hierarchical SRCU implementation.
Jul 1 23:59:03.248954 kernel: rcu: Max phase no-delay instances is 400.
Jul 1 23:59:03.248972 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 1 23:59:03.248989 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 1 23:59:03.249007 kernel: Remapping and enabling EFI services.
Jul 1 23:59:03.249031 kernel: smp: Bringing up secondary CPUs ...
Jul 1 23:59:03.249049 kernel: Detected PIPT I-cache on CPU1
Jul 1 23:59:03.249067 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 1 23:59:03.249085 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Jul 1 23:59:03.249103 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 1 23:59:03.249121 kernel: smp: Brought up 1 node, 2 CPUs
Jul 1 23:59:03.249138 kernel: SMP: Total of 2 processors activated.
Jul 1 23:59:03.249156 kernel: CPU features: detected: 32-bit EL0 Support
Jul 1 23:59:03.249175 kernel: CPU features: detected: 32-bit EL1 Support
Jul 1 23:59:03.249199 kernel: CPU features: detected: CRC32 instructions
Jul 1 23:59:03.249244 kernel: CPU: All CPU(s) started at EL1
Jul 1 23:59:03.249281 kernel: alternatives: applying system-wide alternatives
Jul 1 23:59:03.251431 kernel: devtmpfs: initialized
Jul 1 23:59:03.251455 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 1 23:59:03.251476 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 1 23:59:03.251495 kernel: pinctrl core: initialized pinctrl subsystem
Jul 1 23:59:03.251515 kernel: SMBIOS 3.0.0 present.
Jul 1 23:59:03.251534 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 1 23:59:03.251566 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 1 23:59:03.251585 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 1 23:59:03.251604 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 1 23:59:03.251623 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 1 23:59:03.251643 kernel: audit: initializing netlink subsys (disabled)
Jul 1 23:59:03.251666 kernel: audit: type=2000 audit(0.332:1): state=initialized audit_enabled=0 res=1
Jul 1 23:59:03.251687 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 1 23:59:03.251712 kernel: cpuidle: using governor menu
Jul 1 23:59:03.251731 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 1 23:59:03.251750 kernel: ASID allocator initialised with 65536 entries
Jul 1 23:59:03.251769 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 1 23:59:03.251788 kernel: Serial: AMBA PL011 UART driver
Jul 1 23:59:03.251806 kernel: Modules: 17600 pages in range for non-PLT usage
Jul 1 23:59:03.251825 kernel: Modules: 509120 pages in range for PLT usage
Jul 1 23:59:03.251843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 1 23:59:03.251862 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 1 23:59:03.251886 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 1 23:59:03.251905 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 1 23:59:03.251924 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 1 23:59:03.251942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 1 23:59:03.251961 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 1 23:59:03.251979 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 1 23:59:03.251998 kernel: ACPI: Added _OSI(Module Device)
Jul 1 23:59:03.252016 kernel: ACPI: Added _OSI(Processor Device)
Jul 1 23:59:03.252034 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 1 23:59:03.252058 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 1 23:59:03.252077 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 1 23:59:03.252095 kernel: ACPI: Interpreter enabled
Jul 1 23:59:03.252113 kernel: ACPI: Using GIC for interrupt routing
Jul 1 23:59:03.252131 kernel: ACPI: MCFG table detected, 1 entries
Jul 1 23:59:03.252150 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 1 23:59:03.253618 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 1 23:59:03.253885 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 1 23:59:03.254126 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 1 23:59:03.255527 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 1 23:59:03.255793 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 1 23:59:03.255822 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 1 23:59:03.255842 kernel: acpiphp: Slot [1] registered
Jul 1 23:59:03.255861 kernel: acpiphp: Slot [2] registered
Jul 1 23:59:03.255879 kernel: acpiphp: Slot [3] registered
Jul 1 23:59:03.255898 kernel: acpiphp: Slot [4] registered
Jul 1 23:59:03.255916 kernel: acpiphp: Slot [5] registered
Jul 1 23:59:03.255947 kernel: acpiphp: Slot [6] registered
Jul 1 23:59:03.255965 kernel: acpiphp: Slot [7] registered
Jul 1 23:59:03.255984 kernel: acpiphp: Slot [8] registered
Jul 1 23:59:03.256002 kernel: acpiphp: Slot [9] registered
Jul 1 23:59:03.256020 kernel: acpiphp: Slot [10] registered
Jul 1 23:59:03.256038 kernel: acpiphp: Slot [11] registered
Jul 1 23:59:03.256057 kernel: acpiphp: Slot [12] registered
Jul 1 23:59:03.256075 kernel: acpiphp: Slot [13] registered
Jul 1 23:59:03.256093 kernel: acpiphp: Slot [14] registered
Jul 1 23:59:03.256117 kernel: acpiphp: Slot [15] registered
Jul 1 23:59:03.256136 kernel: acpiphp: Slot [16] registered
Jul 1 23:59:03.256154 kernel: acpiphp: Slot [17] registered
Jul 1 23:59:03.256173 kernel: acpiphp: Slot [18] registered
Jul 1 23:59:03.256191 kernel: acpiphp: Slot [19] registered
Jul 1 23:59:03.256209 kernel: acpiphp: Slot [20] registered
Jul 1 23:59:03.256228 kernel: acpiphp: Slot [21] registered
Jul 1 23:59:03.256246 kernel: acpiphp: Slot [22] registered
Jul 1 23:59:03.256265 kernel: acpiphp: Slot [23] registered
Jul 1 23:59:03.256283 kernel: acpiphp: Slot [24] registered
Jul 1 23:59:03.258438 kernel: acpiphp: Slot [25] registered
Jul 1 23:59:03.258462 kernel: acpiphp: Slot [26] registered
Jul 1 23:59:03.258482 kernel: acpiphp: Slot [27] registered
Jul 1 23:59:03.258502 kernel: acpiphp: Slot [28] registered
Jul 1 23:59:03.258522 kernel: acpiphp: Slot [29] registered
Jul 1 23:59:03.258541 kernel: acpiphp: Slot [30] registered
Jul 1 23:59:03.258560 kernel: acpiphp: Slot [31] registered
Jul 1 23:59:03.258580 kernel: PCI host bridge to bus 0000:00
Jul 1 23:59:03.258903 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 1 23:59:03.259142 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 1 23:59:03.259401 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 1 23:59:03.259637 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 1 23:59:03.259916 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 1 23:59:03.260162 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 1 23:59:03.264525 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 1 23:59:03.264835 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 1 23:59:03.265094 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 1 23:59:03.265506 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 1 23:59:03.265780 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 1 23:59:03.266008 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 1 23:59:03.266221 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 1 23:59:03.268755 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 1 23:59:03.269013 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 1 23:59:03.269268 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 1 23:59:03.269552 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 1 23:59:03.269797 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 1 23:59:03.270024 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 1 23:59:03.270260 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 1 23:59:03.272596 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 1 23:59:03.272840 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 1 23:59:03.273073 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 1 23:59:03.273108 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 1 23:59:03.273129 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 1 23:59:03.273150 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 1 23:59:03.273170 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 1 23:59:03.273189 kernel: iommu: Default domain type: Translated
Jul 1 23:59:03.273208 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 1 23:59:03.273267 kernel: efivars: Registered efivars operations
Jul 1 23:59:03.273287 kernel: vgaarb: loaded
Jul 1 23:59:03.273337 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 1 23:59:03.273358 kernel: VFS: Disk quotas dquot_6.6.0
Jul 1 23:59:03.273378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 1 23:59:03.273397 kernel: pnp: PnP ACPI init
Jul 1 23:59:03.273690 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 1 23:59:03.273723 kernel: pnp: PnP ACPI: found 1 devices
Jul 1 23:59:03.273754 kernel: NET: Registered PF_INET protocol family
Jul 1 23:59:03.273773 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 1 23:59:03.273792 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 1 23:59:03.273811 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 1 23:59:03.273831 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 1 23:59:03.273850 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 1 23:59:03.273869 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 1 23:59:03.273888 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 1 23:59:03.273907 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 1 23:59:03.273932 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 1 23:59:03.273951 kernel: PCI: CLS 0 bytes, default 64
Jul 1 23:59:03.273970 kernel: kvm [1]: HYP mode not available
Jul 1 23:59:03.273988 kernel: Initialise system trusted keyrings
Jul 1 23:59:03.274007 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 1 23:59:03.274026 kernel: Key type asymmetric registered
Jul 1 23:59:03.274044 kernel: Asymmetric key parser 'x509' registered
Jul 1 23:59:03.274064 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 1 23:59:03.274082 kernel: io scheduler mq-deadline registered
Jul 1 23:59:03.274108 kernel: io scheduler kyber registered
Jul 1 23:59:03.274128 kernel: io scheduler bfq registered
Jul 1 23:59:03.276532 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 1 23:59:03.276582 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 1 23:59:03.276602 kernel: ACPI: button: Power Button [PWRB]
Jul 1 23:59:03.276622 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 1 23:59:03.276640 kernel: ACPI: button: Sleep Button [SLPB]
Jul 1 23:59:03.276659 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 1 23:59:03.276693 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 1 23:59:03.276926 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 1 23:59:03.276955 kernel: printk: console [ttyS0] disabled
Jul 1 23:59:03.276975 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 1 23:59:03.276994 kernel: printk: console [ttyS0] enabled
Jul 1 23:59:03.277012 kernel: printk: bootconsole [uart0] disabled
Jul 1 23:59:03.277030 kernel: thunder_xcv, ver 1.0
Jul 1 23:59:03.277048 kernel: thunder_bgx, ver 1.0
Jul 1 23:59:03.277067 kernel: nicpf, ver 1.0
Jul 1 23:59:03.277085 kernel: nicvf, ver 1.0
Jul 1 23:59:03.280536 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 1 23:59:03.280852 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-01T23:59:02 UTC (1719878342)
Jul 1 23:59:03.280893 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 1 23:59:03.280913 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 1 23:59:03.280933 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 1 23:59:03.280952 kernel: watchdog: Hard watchdog permanently disabled
Jul 1 23:59:03.280971 kernel: NET: Registered PF_INET6 protocol family
Jul 1 23:59:03.280992 kernel: Segment Routing with IPv6
Jul 1 23:59:03.281027 kernel: In-situ OAM (IOAM) with IPv6
Jul 1 23:59:03.281046 kernel: NET: Registered PF_PACKET protocol family
Jul 1 23:59:03.281065 kernel: Key type dns_resolver registered
Jul 1 23:59:03.281084 kernel: registered taskstats version 1
Jul 1 23:59:03.281103 kernel: Loading compiled-in X.509 certificates
Jul 1 23:59:03.281123 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 1 23:59:03.281142 kernel: Key type .fscrypt registered
Jul 1 23:59:03.281161 kernel: Key type fscrypt-provisioning registered
Jul 1 23:59:03.281180 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 1 23:59:03.281207 kernel: ima: Allocated hash algorithm: sha1
Jul 1 23:59:03.281260 kernel: ima: No architecture policies found
Jul 1 23:59:03.281281 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 1 23:59:03.281341 kernel: clk: Disabling unused clocks
Jul 1 23:59:03.281365 kernel: Freeing unused kernel memory: 39040K
Jul 1 23:59:03.281384 kernel: Run /init as init process
Jul 1 23:59:03.281403 kernel: with arguments:
Jul 1 23:59:03.281422 kernel: /init
Jul 1 23:59:03.281441 kernel: with environment:
Jul 1 23:59:03.281468 kernel: HOME=/
Jul 1 23:59:03.281488 kernel: TERM=linux
Jul 1 23:59:03.281507 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 1 23:59:03.281532 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 1 23:59:03.281557 systemd[1]: Detected virtualization amazon.
Jul 1 23:59:03.281579 systemd[1]: Detected architecture arm64.
Jul 1 23:59:03.281599 systemd[1]: Running in initrd.
Jul 1 23:59:03.281618 systemd[1]: No hostname configured, using default hostname.
Jul 1 23:59:03.281646 systemd[1]: Hostname set to .
Jul 1 23:59:03.281670 systemd[1]: Initializing machine ID from VM UUID.
Jul 1 23:59:03.281690 systemd[1]: Queued start job for default target initrd.target.
Jul 1 23:59:03.281710 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 23:59:03.281731 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 23:59:03.281754 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 1 23:59:03.281775 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 1 23:59:03.281802 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 1 23:59:03.281824 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 1 23:59:03.281848 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 1 23:59:03.281872 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 1 23:59:03.281895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 23:59:03.281916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 1 23:59:03.281937 systemd[1]: Reached target paths.target - Path Units.
Jul 1 23:59:03.281969 systemd[1]: Reached target slices.target - Slice Units.
Jul 1 23:59:03.281993 systemd[1]: Reached target swap.target - Swaps.
Jul 1 23:59:03.282014 systemd[1]: Reached target timers.target - Timer Units.
Jul 1 23:59:03.282035 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 1 23:59:03.282057 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 1 23:59:03.282082 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 1 23:59:03.282103 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 1 23:59:03.282127 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 23:59:03.282148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 1 23:59:03.282182 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 1 23:59:03.282204 systemd[1]: Reached target sockets.target - Socket Units.
Jul 1 23:59:03.282226 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 1 23:59:03.282248 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 1 23:59:03.282271 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 1 23:59:03.287335 systemd[1]: Starting systemd-fsck-usr.service...
Jul 1 23:59:03.287406 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 1 23:59:03.287429 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 1 23:59:03.287461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 23:59:03.287482 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 1 23:59:03.287503 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 1 23:59:03.287525 systemd[1]: Finished systemd-fsck-usr.service.
Jul 1 23:59:03.287599 systemd-journald[251]: Collecting audit messages is disabled.
Jul 1 23:59:03.287652 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 1 23:59:03.287673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 23:59:03.287696 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 1 23:59:03.287717 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 1 23:59:03.287743 systemd-journald[251]: Journal started
Jul 1 23:59:03.287782 systemd-journald[251]: Runtime Journal (/run/log/journal/ec27cb98908cb84d41610fca11a560dc) is 8.0M, max 75.3M, 67.3M free.
Jul 1 23:59:03.232392 systemd-modules-load[252]: Inserted module 'overlay'
Jul 1 23:59:03.297452 kernel: Bridge firewalling registered
Jul 1 23:59:03.296416 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jul 1 23:59:03.308266 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 1 23:59:03.315716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 1 23:59:03.316789 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 1 23:59:03.328637 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 1 23:59:03.334638 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 1 23:59:03.341790 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 1 23:59:03.381822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 1 23:59:03.396496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 1 23:59:03.401362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 1 23:59:03.423718 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 1 23:59:03.456410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 1 23:59:03.473853 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 1 23:59:03.504662 systemd-resolved[280]: Positive Trust Anchors:
Jul 1 23:59:03.504700 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 1 23:59:03.504762 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 1 23:59:03.534221 dracut-cmdline[289]: dracut-dracut-053
Jul 1 23:59:03.540610 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 1 23:59:03.686338 kernel: SCSI subsystem initialized
Jul 1 23:59:03.696324 kernel: Loading iSCSI transport class v2.0-870.
Jul 1 23:59:03.707330 kernel: iscsi: registered transport (tcp)
Jul 1 23:59:03.730849 kernel: iscsi: registered transport (qla4xxx)
Jul 1 23:59:03.730924 kernel: QLogic iSCSI HBA Driver
Jul 1 23:59:03.793340 kernel: random: crng init done
Jul 1 23:59:03.793700 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jul 1 23:59:03.799377 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 1 23:59:03.804219 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 1 23:59:03.827474 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 1 23:59:03.841785 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 1 23:59:03.878343 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 1 23:59:03.878430 kernel: device-mapper: uevent: version 1.0.3
Jul 1 23:59:03.881335 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 1 23:59:03.949367 kernel: raid6: neonx8 gen() 6654 MB/s
Jul 1 23:59:03.966345 kernel: raid6: neonx4 gen() 6522 MB/s
Jul 1 23:59:03.983325 kernel: raid6: neonx2 gen() 5459 MB/s
Jul 1 23:59:04.000326 kernel: raid6: neonx1 gen() 3962 MB/s
Jul 1 23:59:04.017325 kernel: raid6: int64x8 gen() 3833 MB/s
Jul 1 23:59:04.034325 kernel: raid6: int64x4 gen() 3722 MB/s
Jul 1 23:59:04.051324 kernel: raid6: int64x2 gen() 3604 MB/s
Jul 1 23:59:04.068993 kernel: raid6: int64x1 gen() 2770 MB/s
Jul 1 23:59:04.069040 kernel: raid6: using algorithm neonx8 gen() 6654 MB/s
Jul 1 23:59:04.086992 kernel: raid6: .... xor() 4899 MB/s, rmw enabled
Jul 1 23:59:04.087031 kernel: raid6: using neon recovery algorithm
Jul 1 23:59:04.096048 kernel: xor: measuring software checksum speed
Jul 1 23:59:04.096104 kernel: 8regs : 11042 MB/sec
Jul 1 23:59:04.098323 kernel: 32regs : 11940 MB/sec
Jul 1 23:59:04.100359 kernel: arm64_neon : 9610 MB/sec
Jul 1 23:59:04.100392 kernel: xor: using function: 32regs (11940 MB/sec)
Jul 1 23:59:04.189353 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 1 23:59:04.211590 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 1 23:59:04.225635 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 1 23:59:04.264736 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jul 1 23:59:04.272897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 1 23:59:04.292842 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 1 23:59:04.326632 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Jul 1 23:59:04.390508 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 1 23:59:04.406766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 1 23:59:04.530398 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 1 23:59:04.551084 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 1 23:59:04.604418 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 1 23:59:04.615358 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 1 23:59:04.620930 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 1 23:59:04.626220 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 1 23:59:04.645754 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 1 23:59:04.686202 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 1 23:59:04.796641 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 1 23:59:04.796722 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 1 23:59:04.847767 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 1 23:59:04.848138 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 1 23:59:04.848464 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:0c:78:94:3d:0b
Jul 1 23:59:04.812449 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 1 23:59:04.862747 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 1 23:59:04.862813 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 1 23:59:04.812713 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 1 23:59:04.818623 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 1 23:59:04.821443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 1 23:59:04.821681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 23:59:04.824582 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 23:59:04.848187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 23:59:04.875522 (udev-worker)[542]: Network interface NamePolicy= disabled on kernel command line.
Jul 1 23:59:04.892465 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 1 23:59:04.899483 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 1 23:59:04.899564 kernel: GPT:9289727 != 16777215
Jul 1 23:59:04.902417 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 1 23:59:04.902486 kernel: GPT:9289727 != 16777215
Jul 1 23:59:04.902513 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 1 23:59:04.902538 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 1 23:59:04.916792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 23:59:04.929767 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 1 23:59:04.987446 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 1 23:59:05.030152 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 1 23:59:05.054535 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (545)
Jul 1 23:59:05.079362 kernel: BTRFS: device fsid 2e7aff7f-b51e-4094-8f16-54690a62fb17 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (518)
Jul 1 23:59:05.140647 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 1 23:59:05.191964 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 1 23:59:05.198421 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 1 23:59:05.220209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 1 23:59:05.234730 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 1 23:59:05.258413 disk-uuid[660]: Primary Header is updated.
Jul 1 23:59:05.258413 disk-uuid[660]: Secondary Entries is updated.
Jul 1 23:59:05.258413 disk-uuid[660]: Secondary Header is updated.
Jul 1 23:59:05.269355 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 1 23:59:05.276402 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 1 23:59:05.286357 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 1 23:59:06.286373 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 1 23:59:06.288031 disk-uuid[661]: The operation has completed successfully.
Jul 1 23:59:06.502113 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 1 23:59:06.502412 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 1 23:59:06.554652 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 1 23:59:06.568542 sh[1004]: Success
Jul 1 23:59:06.587349 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 1 23:59:06.710592 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 1 23:59:06.718534 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 1 23:59:06.734820 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 1 23:59:06.755660 kernel: BTRFS info (device dm-0): first mount of filesystem 2e7aff7f-b51e-4094-8f16-54690a62fb17
Jul 1 23:59:06.755734 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 1 23:59:06.755761 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 1 23:59:06.758536 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 1 23:59:06.758614 kernel: BTRFS info (device dm-0): using free space tree
Jul 1 23:59:06.839340 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 1 23:59:06.883115 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 1 23:59:06.887039 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 1 23:59:06.903733 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 1 23:59:06.912627 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 1 23:59:06.952324 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 1 23:59:06.952431 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 1 23:59:06.954088 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 1 23:59:06.958341 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 1 23:59:06.980136 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 1 23:59:06.983426 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 1 23:59:07.007666 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 1 23:59:07.021056 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 1 23:59:07.143375 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 1 23:59:07.170604 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 1 23:59:07.225221 systemd-networkd[1198]: lo: Link UP
Jul 1 23:59:07.225243 systemd-networkd[1198]: lo: Gained carrier
Jul 1 23:59:07.230121 systemd-networkd[1198]: Enumeration completed
Jul 1 23:59:07.230833 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 1 23:59:07.230840 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 1 23:59:07.238534 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 1 23:59:07.239064 systemd[1]: Reached target network.target - Network.
Jul 1 23:59:07.245460 systemd-networkd[1198]: eth0: Link UP
Jul 1 23:59:07.245468 systemd-networkd[1198]: eth0: Gained carrier
Jul 1 23:59:07.245486 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 1 23:59:07.287426 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.26.136/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 1 23:59:07.402732 ignition[1116]: Ignition 2.18.0
Jul 1 23:59:07.402773 ignition[1116]: Stage: fetch-offline
Jul 1 23:59:07.406789 ignition[1116]: no configs at "/usr/lib/ignition/base.d"
Jul 1 23:59:07.406847 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 1 23:59:07.413539 ignition[1116]: Ignition finished successfully
Jul 1 23:59:07.417327 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 1 23:59:07.430914 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 1 23:59:07.468559 ignition[1207]: Ignition 2.18.0
Jul 1 23:59:07.469086 ignition[1207]: Stage: fetch
Jul 1 23:59:07.469791 ignition[1207]: no configs at "/usr/lib/ignition/base.d"
Jul 1 23:59:07.469860 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 1 23:59:07.470016 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 1 23:59:07.482417 ignition[1207]: PUT result: OK
Jul 1 23:59:07.486193 ignition[1207]: parsed url from cmdline: ""
Jul 1 23:59:07.486225 ignition[1207]: no config URL provided
Jul 1 23:59:07.486244 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign"
Jul 1 23:59:07.486277 ignition[1207]: no config at "/usr/lib/ignition/user.ign"
Jul 1 23:59:07.486370 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 1 23:59:07.494212 ignition[1207]: PUT result: OK
Jul 1 23:59:07.498579 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 1 23:59:07.504470 ignition[1207]: GET result: OK
Jul 1 23:59:07.506213 ignition[1207]: parsing config with SHA512: c141e08746db27faabd88b440ec2aad7ea740508ec98f259f9f7412a90cea5a83bfd7ae241006f0ea37a40c21c612ba23bdbe36611ae577e82f2927764a7a767
Jul 1 23:59:07.513502 unknown[1207]: fetched base config from "system"
Jul 1 23:59:07.513530 unknown[1207]: fetched base config from "system"
Jul 1 23:59:07.515215 ignition[1207]: fetch: fetch complete
Jul 1 23:59:07.513544 unknown[1207]: fetched user config from "aws"
Jul 1 23:59:07.515230 ignition[1207]: fetch: fetch passed
Jul 1 23:59:07.523131 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 1 23:59:07.515893 ignition[1207]: Ignition finished successfully
Jul 1 23:59:07.547576 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 1 23:59:07.574010 ignition[1214]: Ignition 2.18.0
Jul 1 23:59:07.574581 ignition[1214]: Stage: kargs
Jul 1 23:59:07.575231 ignition[1214]: no configs at "/usr/lib/ignition/base.d"
Jul 1 23:59:07.575259 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 1 23:59:07.575459 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 1 23:59:07.578962 ignition[1214]: PUT result: OK
Jul 1 23:59:07.592784 ignition[1214]: kargs: kargs passed
Jul 1 23:59:07.594828 ignition[1214]: Ignition finished successfully
Jul 1 23:59:07.599396 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 1 23:59:07.615715 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 1 23:59:07.640644 ignition[1222]: Ignition 2.18.0
Jul 1 23:59:07.640672 ignition[1222]: Stage: disks
Jul 1 23:59:07.642513 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Jul 1 23:59:07.642543 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 1 23:59:07.642709 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 1 23:59:07.646346 ignition[1222]: PUT result: OK
Jul 1 23:59:07.656846 ignition[1222]: disks: disks passed
Jul 1 23:59:07.656991 ignition[1222]: Ignition finished successfully
Jul 1 23:59:07.663455 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 1 23:59:07.669165 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 1 23:59:07.671396 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 1 23:59:07.671788 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 1 23:59:07.672116 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 1 23:59:07.672834 systemd[1]: Reached target basic.target - Basic System.
Jul 1 23:59:07.689810 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 1 23:59:07.748061 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 1 23:59:07.761710 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 1 23:59:07.773526 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 1 23:59:07.873810 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 95038baa-e9f1-4207-86a5-38a4ce3cff7d r/w with ordered data mode. Quota mode: none.
Jul 1 23:59:07.874778 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 1 23:59:07.879194 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 1 23:59:07.898513 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 1 23:59:07.906507 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 1 23:59:07.910933 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 1 23:59:07.911032 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 1 23:59:07.911079 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 1 23:59:07.937010 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 1 23:59:07.952336 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250)
Jul 1 23:59:07.956366 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 1 23:59:07.956447 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 1 23:59:07.956476 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 1 23:59:07.957711 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 1 23:59:07.968316 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 1 23:59:07.970947 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 1 23:59:08.348089 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Jul 1 23:59:08.357793 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Jul 1 23:59:08.365986 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Jul 1 23:59:08.375333 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 1 23:59:08.564482 systemd-networkd[1198]: eth0: Gained IPv6LL
Jul 1 23:59:08.744259 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 1 23:59:08.761616 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 1 23:59:08.771670 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 1 23:59:08.789955 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 1 23:59:08.792586 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 1 23:59:08.834278 ignition[1364]: INFO : Ignition 2.18.0
Jul 1 23:59:08.836571 ignition[1364]: INFO : Stage: mount
Jul 1 23:59:08.839094 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 1 23:59:08.841668 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 1 23:59:08.841698 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 1 23:59:08.845339 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 1 23:59:08.852430 ignition[1364]: INFO : PUT result: OK
Jul 1 23:59:08.858478 ignition[1364]: INFO : mount: mount passed
Jul 1 23:59:08.860846 ignition[1364]: INFO : Ignition finished successfully
Jul 1 23:59:08.864798 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 1 23:59:08.878517 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 1 23:59:08.901649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 1 23:59:08.937324 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1376)
Jul 1 23:59:08.940561 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 1 23:59:08.940610 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 1 23:59:08.941733 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 1 23:59:08.946332 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 1 23:59:08.949583 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 1 23:59:08.984956 ignition[1393]: INFO : Ignition 2.18.0
Jul 1 23:59:08.988370 ignition[1393]: INFO : Stage: files
Jul 1 23:59:08.988370 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 1 23:59:08.988370 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 1 23:59:08.988370 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 1 23:59:08.999723 ignition[1393]: INFO : PUT result: OK
Jul 1 23:59:09.004815 ignition[1393]: DEBUG : files: compiled without relabeling support, skipping
Jul 1 23:59:09.007605 ignition[1393]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 1 23:59:09.007605 ignition[1393]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 1 23:59:09.048370 ignition[1393]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 1 23:59:09.051655 ignition[1393]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 1 23:59:09.054638 ignition[1393]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 1 23:59:09.052842 unknown[1393]: wrote ssh authorized keys file for user: core
Jul 1 23:59:09.065593 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 1 23:59:09.065593 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 1 23:59:09.126421 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 1 23:59:09.236662 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 1 23:59:09.243183 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jul 1 23:59:09.770839 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 1 23:59:10.185598 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 1 23:59:10.185598 ignition[1393]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 1 23:59:10.195545 ignition[1393]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 1 23:59:10.195545 ignition[1393]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 1 23:59:10.195545 ignition[1393]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 1 23:59:10.195545 ignition[1393]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 1 23:59:10.195545 ignition[1393]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 1 23:59:10.195545 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 1 23:59:10.195545 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 1 23:59:10.195545 ignition[1393]: INFO : files: files passed
Jul 1 23:59:10.195545 ignition[1393]: INFO : Ignition finished successfully
Jul 1 23:59:10.233437 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 1 23:59:10.254704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 1 23:59:10.262875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 1 23:59:10.282423 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 1 23:59:10.282668 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 1 23:59:10.302970 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 1 23:59:10.302970 initrd-setup-root-after-ignition[1422]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 1 23:59:10.311324 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 1 23:59:10.316141 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 1 23:59:10.321226 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 1 23:59:10.339659 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 1 23:59:10.394365 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 1 23:59:10.396738 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 1 23:59:10.400781 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 1 23:59:10.403876 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 1 23:59:10.408469 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 1 23:59:10.426749 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 1 23:59:10.456073 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 1 23:59:10.477199 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 1 23:59:10.505011 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 1 23:59:10.510826 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 1 23:59:10.513827 systemd[1]: Stopped target timers.target - Timer Units.
Jul 1 23:59:10.516110 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 1 23:59:10.516364 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 1 23:59:10.519631 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 1 23:59:10.522224 systemd[1]: Stopped target basic.target - Basic System.
Jul 1 23:59:10.524511 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 1 23:59:10.527186 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 1 23:59:10.529966 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 1 23:59:10.532684 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 1 23:59:10.535105 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 1 23:59:10.538013 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 1 23:59:10.540513 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 1 23:59:10.542964 systemd[1]: Stopped target swap.target - Swaps.
Jul 1 23:59:10.544944 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 1 23:59:10.545166 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 1 23:59:10.548072 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 1 23:59:10.550701 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 23:59:10.553568 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 1 23:59:10.555439 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 23:59:10.557919 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 1 23:59:10.558147 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 1 23:59:10.560620 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 1 23:59:10.560838 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 1 23:59:10.564063 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 1 23:59:10.564495 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 1 23:59:10.583692 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 1 23:59:10.636737 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 1 23:59:10.638615 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 1 23:59:10.639111 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 1 23:59:10.651632 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 1 23:59:10.651873 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 1 23:59:10.673750 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 1 23:59:10.674603 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 1 23:59:10.697266 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 1 23:59:10.701348 ignition[1446]: INFO : Ignition 2.18.0
Jul 1 23:59:10.701348 ignition[1446]: INFO : Stage: umount
Jul 1 23:59:10.705525 ignition[1446]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 1 23:59:10.705525 ignition[1446]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 1 23:59:10.705525 ignition[1446]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 1 23:59:10.705525 ignition[1446]: INFO : PUT result: OK
Jul 1 23:59:10.718518 ignition[1446]: INFO : umount: umount passed
Jul 1 23:59:10.720605 ignition[1446]: INFO : Ignition finished successfully
Jul 1 23:59:10.724983 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 1 23:59:10.725380 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 1 23:59:10.732448 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 1 23:59:10.732542 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 1 23:59:10.734994 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 1 23:59:10.735075 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 1 23:59:10.737579 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 1 23:59:10.737655 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 1 23:59:10.740135 systemd[1]: Stopped target network.target - Network.
Jul 1 23:59:10.742233 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 1 23:59:10.742337 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 1 23:59:10.745162 systemd[1]: Stopped target paths.target - Path Units.
Jul 1 23:59:10.747305 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 1 23:59:10.772670 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 23:59:10.775702 systemd[1]: Stopped target slices.target - Slice Units.
Jul 1 23:59:10.782764 systemd[1]: Stopped target sockets.target - Socket Units. Jul 1 23:59:10.784996 systemd[1]: iscsid.socket: Deactivated successfully. Jul 1 23:59:10.785075 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 1 23:59:10.787423 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 1 23:59:10.787494 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 1 23:59:10.789969 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 1 23:59:10.790060 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 1 23:59:10.806501 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 1 23:59:10.806596 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 1 23:59:10.809318 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 1 23:59:10.811531 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 1 23:59:10.830822 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 1 23:59:10.831049 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 1 23:59:10.832382 systemd-networkd[1198]: eth0: DHCPv6 lease lost Jul 1 23:59:10.837245 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 1 23:59:10.838434 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 1 23:59:10.850432 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 1 23:59:10.852953 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 1 23:59:10.857417 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 1 23:59:10.857530 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 1 23:59:10.877236 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 1 23:59:10.883793 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 1 23:59:10.883947 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 1 23:59:10.892586 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 1 23:59:10.892704 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 1 23:59:10.895499 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 1 23:59:10.895617 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 1 23:59:10.903446 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 23:59:10.906917 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 1 23:59:10.907108 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 1 23:59:10.935676 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 1 23:59:10.935838 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 1 23:59:10.944945 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 1 23:59:10.948274 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 23:59:10.958724 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 1 23:59:10.958951 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 1 23:59:10.964557 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 1 23:59:10.964697 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 1 23:59:10.975947 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 1 23:59:10.976046 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 23:59:10.978415 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 1 23:59:10.978507 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 1 23:59:10.981323 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jul 1 23:59:10.981417 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 1 23:59:10.996643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 1 23:59:10.996742 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 23:59:11.008462 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 1 23:59:11.013065 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 1 23:59:11.013220 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 23:59:11.025806 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 1 23:59:11.025927 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 1 23:59:11.028904 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 1 23:59:11.029016 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 23:59:11.032969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 23:59:11.033084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 23:59:11.063687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 1 23:59:11.063880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 1 23:59:11.067881 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 1 23:59:11.086728 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 1 23:59:11.106540 systemd[1]: Switching root. Jul 1 23:59:11.153772 systemd-journald[251]: Journal stopped Jul 1 23:59:14.988086 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Jul 1 23:59:14.988231 kernel: SELinux: policy capability network_peer_controls=1 Jul 1 23:59:14.988278 kernel: SELinux: policy capability open_perms=1 Jul 1 23:59:14.988350 kernel: SELinux: policy capability extended_socket_class=1 Jul 1 23:59:14.988395 kernel: SELinux: policy capability always_check_network=0 Jul 1 23:59:14.988428 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 1 23:59:14.988462 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 1 23:59:14.988494 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 1 23:59:14.988526 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 1 23:59:14.988558 kernel: audit: type=1403 audit(1719878353.068:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 1 23:59:14.988604 systemd[1]: Successfully loaded SELinux policy in 73.703ms. Jul 1 23:59:14.988653 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.911ms. Jul 1 23:59:14.988696 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 1 23:59:14.988731 systemd[1]: Detected virtualization amazon. Jul 1 23:59:14.988772 systemd[1]: Detected architecture arm64. Jul 1 23:59:14.988804 systemd[1]: Detected first boot. Jul 1 23:59:14.988838 systemd[1]: Initializing machine ID from VM UUID. Jul 1 23:59:14.988874 zram_generator::config[1489]: No configuration found. Jul 1 23:59:14.988911 systemd[1]: Populated /etc with preset unit settings. Jul 1 23:59:14.988943 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 1 23:59:14.988982 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 1 23:59:14.989017 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jul 1 23:59:14.989051 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 1 23:59:14.989082 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 1 23:59:14.989114 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 1 23:59:14.989168 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 1 23:59:14.989222 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 1 23:59:14.989256 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 1 23:59:14.991371 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 1 23:59:14.991458 systemd[1]: Created slice user.slice - User and Session Slice. Jul 1 23:59:14.991490 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 1 23:59:14.991527 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 23:59:14.991561 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 1 23:59:14.991592 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 1 23:59:14.991624 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 1 23:59:14.991664 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 1 23:59:14.991697 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 1 23:59:14.991728 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 23:59:14.991771 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 1 23:59:14.991802 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jul 1 23:59:14.991835 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 1 23:59:14.991867 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 1 23:59:14.991900 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 23:59:14.991933 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 1 23:59:14.991971 systemd[1]: Reached target slices.target - Slice Units. Jul 1 23:59:14.992011 systemd[1]: Reached target swap.target - Swaps. Jul 1 23:59:14.992046 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 1 23:59:14.992079 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 1 23:59:14.992111 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 1 23:59:14.992145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 1 23:59:14.992182 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 23:59:14.992218 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 1 23:59:14.992252 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 1 23:59:14.992284 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 1 23:59:14.992364 systemd[1]: Mounting media.mount - External Media Directory... Jul 1 23:59:14.992413 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 1 23:59:14.992451 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 1 23:59:14.992486 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 1 23:59:14.992523 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 1 23:59:14.992557 systemd[1]: Reached target machines.target - Containers. 
Jul 1 23:59:14.992590 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 1 23:59:14.992624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 23:59:14.992663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 1 23:59:14.992703 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 1 23:59:14.992735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 23:59:14.992770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 1 23:59:14.992805 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 23:59:14.992838 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 1 23:59:14.992871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 23:59:14.992903 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 1 23:59:14.992935 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 1 23:59:14.992973 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 1 23:59:14.993007 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 1 23:59:14.993054 systemd[1]: Stopped systemd-fsck-usr.service. Jul 1 23:59:14.993086 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 1 23:59:14.993119 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 1 23:59:14.993175 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 1 23:59:14.993224 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 1 23:59:14.993261 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 1 23:59:14.996374 systemd[1]: verity-setup.service: Deactivated successfully. Jul 1 23:59:14.996473 systemd[1]: Stopped verity-setup.service. Jul 1 23:59:14.996513 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 1 23:59:14.996546 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 1 23:59:14.996579 systemd[1]: Mounted media.mount - External Media Directory. Jul 1 23:59:14.996618 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 1 23:59:14.996654 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 1 23:59:14.996696 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 1 23:59:14.996728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 23:59:14.996758 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 1 23:59:14.996793 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 1 23:59:14.996825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 23:59:14.996860 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 23:59:14.996890 kernel: fuse: init (API version 7.39) Jul 1 23:59:14.996922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 1 23:59:14.996961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 23:59:14.996991 kernel: loop: module loaded Jul 1 23:59:14.997021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 1 23:59:14.997054 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 23:59:14.997092 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 23:59:14.997124 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 23:59:14.997190 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jul 1 23:59:14.997224 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 1 23:59:14.997259 kernel: ACPI: bus type drm_connector registered Jul 1 23:59:14.997289 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 1 23:59:15.000473 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 1 23:59:15.000515 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 23:59:15.000559 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 1 23:59:15.000596 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 1 23:59:15.000700 systemd-journald[1567]: Collecting audit messages is disabled. Jul 1 23:59:15.000768 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 1 23:59:15.000805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 23:59:15.000837 systemd-journald[1567]: Journal started Jul 1 23:59:15.000886 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec27cb98908cb84d41610fca11a560dc) is 8.0M, max 75.3M, 67.3M free. Jul 1 23:59:15.008815 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 1 23:59:14.304677 systemd[1]: Queued start job for default target multi-user.target. Jul 1 23:59:14.374748 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 1 23:59:14.375616 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 1 23:59:15.017399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 1 23:59:15.030488 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 1 23:59:15.036360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 23:59:15.044024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 23:59:15.053075 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 1 23:59:15.065323 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 1 23:59:15.075003 systemd[1]: Started systemd-journald.service - Journal Service. Jul 1 23:59:15.079223 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 1 23:59:15.081687 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 1 23:59:15.085062 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 1 23:59:15.085548 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 1 23:59:15.088859 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 1 23:59:15.093123 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 1 23:59:15.158121 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 1 23:59:15.165943 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 1 23:59:15.178586 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 1 23:59:15.190624 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 1 23:59:15.199703 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 1 23:59:15.214698 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 1 23:59:15.215108 kernel: loop0: detected capacity change from 0 to 113672 Jul 1 23:59:15.218229 kernel: block loop0: the capability attribute has been deprecated. 
Jul 1 23:59:15.229621 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 1 23:59:15.291662 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 1 23:59:15.294396 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 1 23:59:15.309851 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Jul 1 23:59:15.311811 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec27cb98908cb84d41610fca11a560dc is 48.501ms for 916 entries. Jul 1 23:59:15.311811 systemd-journald[1567]: System Journal (/var/log/journal/ec27cb98908cb84d41610fca11a560dc) is 8.0M, max 195.6M, 187.6M free. Jul 1 23:59:15.369907 systemd-journald[1567]: Received client request to flush runtime journal. Jul 1 23:59:15.369979 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 1 23:59:15.309886 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Jul 1 23:59:15.331781 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 23:59:15.335622 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 1 23:59:15.362404 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 1 23:59:15.378280 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 1 23:59:15.408338 kernel: loop1: detected capacity change from 0 to 51896 Jul 1 23:59:15.484618 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 1 23:59:15.498830 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 23:59:15.524606 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 23:59:15.539628 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jul 1 23:59:15.563439 kernel: loop2: detected capacity change from 0 to 59672 Jul 1 23:59:15.575776 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jul 1 23:59:15.576348 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jul 1 23:59:15.589438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 23:59:15.602943 udevadm[1641]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 1 23:59:15.668849 kernel: loop3: detected capacity change from 0 to 194096 Jul 1 23:59:15.712356 kernel: loop4: detected capacity change from 0 to 113672 Jul 1 23:59:15.735349 kernel: loop5: detected capacity change from 0 to 51896 Jul 1 23:59:15.754440 kernel: loop6: detected capacity change from 0 to 59672 Jul 1 23:59:15.780719 kernel: loop7: detected capacity change from 0 to 194096 Jul 1 23:59:15.801057 (sd-merge)[1646]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 1 23:59:15.804359 (sd-merge)[1646]: Merged extensions into '/usr'. Jul 1 23:59:15.814052 systemd[1]: Reloading requested from client PID 1595 ('systemd-sysext') (unit systemd-sysext.service)... Jul 1 23:59:15.814088 systemd[1]: Reloading... Jul 1 23:59:15.992446 zram_generator::config[1667]: No configuration found. Jul 1 23:59:16.467668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 23:59:16.635237 systemd[1]: Reloading finished in 819 ms. Jul 1 23:59:16.705460 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 1 23:59:16.720722 systemd[1]: Starting ensure-sysext.service... Jul 1 23:59:16.745720 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Jul 1 23:59:16.764568 ldconfig[1587]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 1 23:59:16.770974 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 1 23:59:16.781429 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 1 23:59:16.795940 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 23:59:16.800970 systemd[1]: Reloading requested from client PID 1721 ('systemctl') (unit ensure-sysext.service)... Jul 1 23:59:16.801004 systemd[1]: Reloading... Jul 1 23:59:16.864151 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 1 23:59:16.864972 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 1 23:59:16.869824 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 1 23:59:16.870712 systemd-tmpfiles[1722]: ACLs are not supported, ignoring. Jul 1 23:59:16.871073 systemd-tmpfiles[1722]: ACLs are not supported, ignoring. Jul 1 23:59:16.875620 systemd-udevd[1726]: Using default interface naming scheme 'v255'. Jul 1 23:59:16.888818 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 23:59:16.888841 systemd-tmpfiles[1722]: Skipping /boot Jul 1 23:59:16.921025 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 23:59:16.921054 systemd-tmpfiles[1722]: Skipping /boot Jul 1 23:59:17.047356 zram_generator::config[1749]: No configuration found. Jul 1 23:59:17.196342 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1785) Jul 1 23:59:17.219783 (udev-worker)[1773]: Network interface NamePolicy= disabled on kernel command line. 
Jul 1 23:59:17.414353 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 23:59:17.475355 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1773) Jul 1 23:59:17.562090 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 1 23:59:17.563096 systemd[1]: Reloading finished in 761 ms. Jul 1 23:59:17.590841 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 23:59:17.594848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 1 23:59:17.673738 systemd[1]: Finished ensure-sysext.service. Jul 1 23:59:17.716852 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 1 23:59:17.744914 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 1 23:59:17.748888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 23:59:17.769067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 23:59:17.783069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 1 23:59:17.791660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 23:59:17.823560 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 23:59:17.827369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 23:59:17.842661 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 1 23:59:17.854004 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 1 23:59:17.864649 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 1 23:59:17.868505 systemd[1]: Reached target time-set.target - System Time Set. Jul 1 23:59:17.875825 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 1 23:59:17.882128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 23:59:17.882752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 23:59:17.895123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 1 23:59:17.896454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 23:59:17.926474 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 1 23:59:17.937760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 23:59:17.944538 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 1 23:59:17.970216 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 23:59:17.971544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 23:59:17.993827 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 1 23:59:17.996179 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 1 23:59:18.053500 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 1 23:59:18.069441 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 1 23:59:18.079533 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 1 23:59:18.105407 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Jul 1 23:59:18.116684 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 1 23:59:18.119806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 23:59:18.131659 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 1 23:59:18.134312 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 1 23:59:18.136511 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 1 23:59:18.153341 augenrules[1953]: No rules Jul 1 23:59:18.165647 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 1 23:59:18.170455 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 1 23:59:18.188219 lvm[1948]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 1 23:59:18.222018 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 1 23:59:18.242997 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 1 23:59:18.247434 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 1 23:59:18.252673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 1 23:59:18.264796 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 1 23:59:18.290419 lvm[1966]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 1 23:59:18.300934 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 1 23:59:18.336220 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jul 1 23:59:18.447436 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 23:59:18.482861 systemd-resolved[1928]: Positive Trust Anchors: Jul 1 23:59:18.482916 systemd-resolved[1928]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 23:59:18.482980 systemd-resolved[1928]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 1 23:59:18.492465 systemd-resolved[1928]: Defaulting to hostname 'linux'. Jul 1 23:59:18.496452 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 23:59:18.500081 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 1 23:59:18.505571 systemd[1]: Reached target sysinit.target - System Initialization. Jul 1 23:59:18.509008 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 1 23:59:18.511959 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 1 23:59:18.515732 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 1 23:59:18.518954 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 1 23:59:18.522217 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 1 23:59:18.525369 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jul 1 23:59:18.525441 systemd[1]: Reached target paths.target - Path Units. Jul 1 23:59:18.528538 systemd[1]: Reached target timers.target - Timer Units. Jul 1 23:59:18.534203 systemd-networkd[1927]: lo: Link UP Jul 1 23:59:18.534212 systemd-networkd[1927]: lo: Gained carrier Jul 1 23:59:18.537111 systemd-networkd[1927]: Enumeration completed Jul 1 23:59:18.537990 systemd-networkd[1927]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 23:59:18.537998 systemd-networkd[1927]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 1 23:59:18.542082 systemd-networkd[1927]: eth0: Link UP Jul 1 23:59:18.542192 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 1 23:59:18.546003 systemd-networkd[1927]: eth0: Gained carrier Jul 1 23:59:18.546042 systemd-networkd[1927]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 23:59:18.550508 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 1 23:59:18.558443 systemd-networkd[1927]: eth0: DHCPv4 address 172.31.26.136/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 1 23:59:18.563991 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 1 23:59:18.567980 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 1 23:59:18.571762 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 1 23:59:18.575111 systemd[1]: Reached target network.target - Network. Jul 1 23:59:18.577769 systemd[1]: Reached target sockets.target - Socket Units. Jul 1 23:59:18.580501 systemd[1]: Reached target basic.target - Basic System. Jul 1 23:59:18.583842 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 1 23:59:18.583906 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 1 23:59:18.592635 systemd[1]: Starting containerd.service - containerd container runtime... Jul 1 23:59:18.604726 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 1 23:59:18.616959 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 1 23:59:18.632736 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 1 23:59:18.639788 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 1 23:59:18.642617 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 1 23:59:18.650775 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 1 23:59:18.666710 jq[1983]: false Jul 1 23:59:18.667819 systemd[1]: Started ntpd.service - Network Time Service. Jul 1 23:59:18.678606 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 1 23:59:18.687590 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 1 23:59:18.693654 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 1 23:59:18.705043 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 1 23:59:18.733743 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 1 23:59:18.745866 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 1 23:59:18.752028 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 1 23:59:18.755649 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jul 1 23:59:18.770575 systemd[1]: Starting update-engine.service - Update Engine... Jul 1 23:59:18.796676 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 1 23:59:18.812414 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 1 23:59:18.814462 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 1 23:59:18.817643 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 1 23:59:18.818071 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 1 23:59:18.867623 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 1 23:59:18.885524 jq[1998]: true Jul 1 23:59:18.894143 dbus-daemon[1982]: [system] SELinux support is enabled Jul 1 23:59:18.894570 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 1 23:59:18.901767 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 1 23:59:18.901836 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 1 23:59:18.907052 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 1 23:59:18.907110 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 1 23:59:18.961090 systemd[1]: motdgen.service: Deactivated successfully. 
Jul 1 23:59:18.961515 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1927 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 1 23:59:18.973373 extend-filesystems[1984]: Found loop4 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found loop5 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found loop6 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found loop7 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found nvme0n1 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found nvme0n1p1 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found nvme0n1p2 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found nvme0n1p3 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found usr Jul 1 23:59:18.973373 extend-filesystems[1984]: Found nvme0n1p4 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found nvme0n1p6 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found nvme0n1p7 Jul 1 23:59:18.973373 extend-filesystems[1984]: Found nvme0n1p9 Jul 1 23:59:18.963642 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 1 23:59:19.047166 coreos-metadata[1981]: Jul 01 23:59:19.022 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 1 23:59:19.047166 coreos-metadata[1981]: Jul 01 23:59:19.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 1 23:59:19.047166 coreos-metadata[1981]: Jul 01 23:59:19.046 INFO Fetch successful Jul 1 23:59:19.047166 coreos-metadata[1981]: Jul 01 23:59:19.046 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 1 23:59:19.055961 jq[2007]: true Jul 1 23:59:19.056212 extend-filesystems[1984]: Checking size of /dev/nvme0n1p9 Jul 1 23:59:19.006523 (ntainerd)[2020]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 1 23:59:19.060817 coreos-metadata[1981]: Jul 01 23:59:19.050 INFO Fetch successful Jul 1 23:59:19.060817 coreos-metadata[1981]: Jul 01 23:59:19.050 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 1 23:59:19.009095 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jul 1 23:59:19.061972 coreos-metadata[1981]: Jul 01 23:59:19.061 INFO Fetch successful Jul 1 23:59:19.070638 coreos-metadata[1981]: Jul 01 23:59:19.063 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 1 23:59:19.070638 coreos-metadata[1981]: Jul 01 23:59:19.069 INFO Fetch successful Jul 1 23:59:19.070638 coreos-metadata[1981]: Jul 01 23:59:19.069 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 1 23:59:19.070878 tar[2014]: linux-arm64/helm Jul 1 23:59:19.073338 update_engine[1996]: I0701 23:59:19.071254 1996 main.cc:92] Flatcar Update Engine starting Jul 1 23:59:19.075961 coreos-metadata[1981]: Jul 01 23:59:19.074 INFO Fetch failed with 404: resource not found Jul 1 23:59:19.075961 coreos-metadata[1981]: Jul 01 23:59:19.074 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 1 23:59:19.075961 coreos-metadata[1981]: Jul 01 23:59:19.075 INFO Fetch successful Jul 1 23:59:19.075961 coreos-metadata[1981]: Jul 01 23:59:19.075 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 1 23:59:19.080084 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 1 23:59:19.092574 coreos-metadata[1981]: Jul 01 23:59:19.081 INFO Fetch successful Jul 1 23:59:19.092574 coreos-metadata[1981]: Jul 01 23:59:19.081 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 1 23:59:19.092574 coreos-metadata[1981]: Jul 01 23:59:19.084 INFO Fetch successful Jul 1 23:59:19.092574 coreos-metadata[1981]: Jul 01 23:59:19.084 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 1 23:59:19.092574 coreos-metadata[1981]: Jul 01 23:59:19.090 INFO Fetch successful Jul 1 23:59:19.092574 coreos-metadata[1981]: Jul 01 23:59:19.090 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 1 23:59:19.092921 
ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 1 23:59:19.092921 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 1 23:59:19.092921 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: ---------------------------------------------------- Jul 1 23:59:19.092921 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, Jul 1 23:59:19.092921 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 1 23:59:19.092921 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: corporation. Support and training for ntp-4 are Jul 1 23:59:19.092921 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: available at https://www.nwtime.org/support Jul 1 23:59:19.092921 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: ---------------------------------------------------- Jul 1 23:59:19.080166 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 1 23:59:19.091678 systemd[1]: Started update-engine.service - Update Engine. Jul 1 23:59:19.080188 ntpd[1986]: ---------------------------------------------------- Jul 1 23:59:19.101980 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: proto: precision = 0.096 usec (-23) Jul 1 23:59:19.101980 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: basedate set to 2024-06-19 Jul 1 23:59:19.101980 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: gps base set to 2024-06-23 (week 2320) Jul 1 23:59:19.102369 update_engine[1996]: I0701 23:59:19.097537 1996 update_check_scheduler.cc:74] Next update check in 9m23s Jul 1 23:59:19.080208 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, Jul 1 23:59:19.080228 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 1 23:59:19.080247 ntpd[1986]: corporation. 
Support and training for ntp-4 are Jul 1 23:59:19.080267 ntpd[1986]: available at https://www.nwtime.org/support Jul 1 23:59:19.080288 ntpd[1986]: ---------------------------------------------------- Jul 1 23:59:19.100999 ntpd[1986]: proto: precision = 0.096 usec (-23) Jul 1 23:59:19.101528 ntpd[1986]: basedate set to 2024-06-19 Jul 1 23:59:19.101557 ntpd[1986]: gps base set to 2024-06-23 (week 2320) Jul 1 23:59:19.110214 coreos-metadata[1981]: Jul 01 23:59:19.105 INFO Fetch successful Jul 1 23:59:19.113888 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 1 23:59:19.131800 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 Jul 1 23:59:19.141622 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 Jul 1 23:59:19.141622 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 1 23:59:19.141622 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 Jul 1 23:59:19.131916 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 1 23:59:19.132231 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 Jul 1 23:59:19.144886 ntpd[1986]: Listen normally on 3 eth0 172.31.26.136:123 Jul 1 23:59:19.147179 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: Listen normally on 3 eth0 172.31.26.136:123 Jul 1 23:59:19.147179 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: Listen normally on 4 lo [::1]:123 Jul 1 23:59:19.147179 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: bind(21) AF_INET6 fe80::40c:78ff:fe94:3d0b%2#123 flags 0x11 failed: Cannot assign requested address Jul 1 23:59:19.147179 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: unable to create socket on eth0 (5) for fe80::40c:78ff:fe94:3d0b%2#123 Jul 1 23:59:19.147179 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: failed to init interface for address fe80::40c:78ff:fe94:3d0b%2 Jul 1 23:59:19.147179 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: Listening on routing socket on fd #21 for interface updates Jul 1 23:59:19.144998 ntpd[1986]: Listen normally on 4 lo [::1]:123 Jul 1 
23:59:19.145084 ntpd[1986]: bind(21) AF_INET6 fe80::40c:78ff:fe94:3d0b%2#123 flags 0x11 failed: Cannot assign requested address Jul 1 23:59:19.145147 ntpd[1986]: unable to create socket on eth0 (5) for fe80::40c:78ff:fe94:3d0b%2#123 Jul 1 23:59:19.145178 ntpd[1986]: failed to init interface for address fe80::40c:78ff:fe94:3d0b%2 Jul 1 23:59:19.145246 ntpd[1986]: Listening on routing socket on fd #21 for interface updates Jul 1 23:59:19.195471 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 1 23:59:19.195471 ntpd[1986]: 1 Jul 23:59:19 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 1 23:59:19.182009 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 1 23:59:19.195686 extend-filesystems[1984]: Resized partition /dev/nvme0n1p9 Jul 1 23:59:19.182086 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 1 23:59:19.211923 extend-filesystems[2043]: resize2fs 1.47.0 (5-Feb-2023) Jul 1 23:59:19.223047 systemd-logind[1992]: Watching system buttons on /dev/input/event0 (Power Button) Jul 1 23:59:19.223087 systemd-logind[1992]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 1 23:59:19.223570 systemd-logind[1992]: New seat seat0. Jul 1 23:59:19.230042 systemd[1]: Started systemd-logind.service - User Login Management. Jul 1 23:59:19.245169 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 1 23:59:19.255003 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 1 23:59:19.264327 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 1 23:59:19.266421 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jul 1 23:59:19.394353 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 1 23:59:19.462184 extend-filesystems[2043]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 1 23:59:19.462184 extend-filesystems[2043]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 1 23:59:19.462184 extend-filesystems[2043]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 1 23:59:19.472207 extend-filesystems[1984]: Resized filesystem in /dev/nvme0n1p9 Jul 1 23:59:19.485075 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1773) Jul 1 23:59:19.480048 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 1 23:59:19.481051 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 1 23:59:19.491041 bash[2066]: Updated "/home/core/.ssh/authorized_keys" Jul 1 23:59:19.492426 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 1 23:59:19.541658 systemd[1]: Starting sshkeys.service... Jul 1 23:59:19.618182 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 1 23:59:19.646494 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 1 23:59:19.701475 systemd-networkd[1927]: eth0: Gained IPv6LL Jul 1 23:59:19.721278 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 1 23:59:19.725804 systemd[1]: Reached target network-online.target - Network is Online. Jul 1 23:59:19.744921 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 1 23:59:19.757955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 23:59:19.768007 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 1 23:59:19.776686 locksmithd[2031]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 1 23:59:19.839087 containerd[2020]: time="2024-07-01T23:59:19.834832358Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 1 23:59:19.860876 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 1 23:59:19.867080 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2024 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 1 23:59:19.905573 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 1 23:59:19.921012 systemd[1]: Starting polkit.service - Authorization Manager... Jul 1 23:59:19.969642 polkitd[2138]: Started polkitd version 121 Jul 1 23:59:19.996781 polkitd[2138]: Loading rules from directory /etc/polkit-1/rules.d Jul 1 23:59:19.996953 polkitd[2138]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 1 23:59:20.000078 polkitd[2138]: Finished loading, compiling and executing 2 rules Jul 1 23:59:20.002763 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 1 23:59:20.003091 systemd[1]: Started polkit.service - Authorization Manager. Jul 1 23:59:20.006200 polkitd[2138]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.040288259Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.043525151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.048443435Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.048508067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.048955751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.049009199Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.050408639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.050624627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.050658683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.050824811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:20.071954 containerd[2020]: time="2024-07-01T23:59:20.051285563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 1 23:59:20.074586 containerd[2020]: time="2024-07-01T23:59:20.051414299Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 1 23:59:20.074586 containerd[2020]: time="2024-07-01T23:59:20.051440435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 1 23:59:20.074586 containerd[2020]: time="2024-07-01T23:59:20.051685991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 1 23:59:20.074586 containerd[2020]: time="2024-07-01T23:59:20.051722915Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 1 23:59:20.074586 containerd[2020]: time="2024-07-01T23:59:20.051900023Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 1 23:59:20.074586 containerd[2020]: time="2024-07-01T23:59:20.051935027Z" level=info msg="metadata content store policy set" policy=shared Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.079795307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.079885355Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.079940339Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080011883Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080051615Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080078855Z" level=info msg="NRI interface is disabled by configuration." Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080115035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080459195Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080506343Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080537783Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080609183Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080648423Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080687123Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 1 23:59:20.086355 containerd[2020]: time="2024-07-01T23:59:20.080718683Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.080750459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.080785295Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.080817719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.080850443Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.080882231Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.081193619Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.085277303Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.085396115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.085436243Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.085490039Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.085630991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.085666439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.085699079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.087061 containerd[2020]: time="2024-07-01T23:59:20.085729823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.087774 containerd[2020]: time="2024-07-01T23:59:20.085760291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.087774 containerd[2020]: time="2024-07-01T23:59:20.085791767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.087774 containerd[2020]: time="2024-07-01T23:59:20.085822031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.087774 containerd[2020]: time="2024-07-01T23:59:20.085852715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.087774 containerd[2020]: time="2024-07-01T23:59:20.085887467Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 1 23:59:20.102964 systemd-hostnamed[2024]: Hostname set to (transient) Jul 1 23:59:20.108949 containerd[2020]: time="2024-07-01T23:59:20.086273099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.108949 containerd[2020]: time="2024-07-01T23:59:20.102942347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jul 1 23:59:20.108949 containerd[2020]: time="2024-07-01T23:59:20.103037483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.108949 containerd[2020]: time="2024-07-01T23:59:20.103103123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.108949 containerd[2020]: time="2024-07-01T23:59:20.103185971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.108949 containerd[2020]: time="2024-07-01T23:59:20.103228763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.108949 containerd[2020]: time="2024-07-01T23:59:20.103325507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.108949 containerd[2020]: time="2024-07-01T23:59:20.103365827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 1 23:59:20.103434 systemd-resolved[1928]: System hostname changed to 'ip-172-31-26-136'. 
Jul 1 23:59:20.132911 coreos-metadata[2092]: Jul 01 23:59:20.132 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 1 23:59:20.135524 containerd[2020]: time="2024-07-01T23:59:20.117253163Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 1 23:59:20.135524 containerd[2020]: time="2024-07-01T23:59:20.125245631Z" level=info msg="Connect containerd service"
Jul 1 23:59:20.135524 containerd[2020]: time="2024-07-01T23:59:20.125462291Z" level=info msg="using legacy CRI server"
Jul 1 23:59:20.135524 containerd[2020]: time="2024-07-01T23:59:20.125535851Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 1 23:59:20.135524 containerd[2020]: time="2024-07-01T23:59:20.127631363Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 1 23:59:20.147651 coreos-metadata[2092]: Jul 01 23:59:20.144 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jul 1 23:59:20.147651 coreos-metadata[2092]: Jul 01 23:59:20.147 INFO Fetch successful
Jul 1 23:59:20.147651 coreos-metadata[2092]: Jul 01 23:59:20.147 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 1 23:59:20.149025 coreos-metadata[2092]: Jul 01 23:59:20.148 INFO Fetch successful
Jul 1 23:59:20.157533 containerd[2020]: time="2024-07-01T23:59:20.150595847Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 1 23:59:20.157533 containerd[2020]: time="2024-07-01T23:59:20.153572219Z" level=info msg="Start subscribing containerd event"
Jul 1 23:59:20.157533 containerd[2020]: time="2024-07-01T23:59:20.153684887Z" level=info msg="Start recovering state"
Jul 1 23:59:20.157533 containerd[2020]: time="2024-07-01T23:59:20.153828935Z" level=info msg="Start event monitor"
Jul 1 23:59:20.157533 containerd[2020]: time="2024-07-01T23:59:20.153856871Z" level=info msg="Start snapshots syncer"
Jul 1 23:59:20.157533 containerd[2020]: time="2024-07-01T23:59:20.153881075Z" level=info msg="Start cni network conf syncer for default"
Jul 1 23:59:20.173458 containerd[2020]: time="2024-07-01T23:59:20.153900707Z" level=info msg="Start streaming server"
Jul 1 23:59:20.173458 containerd[2020]: time="2024-07-01T23:59:20.154708979Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 1 23:59:20.173458 containerd[2020]: time="2024-07-01T23:59:20.159217739Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 1 23:59:20.173458 containerd[2020]: time="2024-07-01T23:59:20.159248843Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 1 23:59:20.173458 containerd[2020]: time="2024-07-01T23:59:20.159280415Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 1 23:59:20.173458 containerd[2020]: time="2024-07-01T23:59:20.159704351Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 1 23:59:20.173458 containerd[2020]: time="2024-07-01T23:59:20.159818795Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 1 23:59:20.173458 containerd[2020]: time="2024-07-01T23:59:20.159939707Z" level=info msg="containerd successfully booted in 0.344994s"
Jul 1 23:59:20.173918 sshd_keygen[2026]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 1 23:59:20.160106 systemd[1]: Started containerd.service - containerd container runtime.
Jul 1 23:59:20.172402 unknown[2092]: wrote ssh authorized keys file for user: core
Jul 1 23:59:20.218963 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 1 23:59:20.267362 update-ssh-keys[2189]: Updated "/home/core/.ssh/authorized_keys"
Jul 1 23:59:20.277863 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 1 23:59:20.295452 systemd[1]: Finished sshkeys.service.
Jul 1 23:59:20.316831 amazon-ssm-agent[2119]: Initializing new seelog logger
Jul 1 23:59:20.324444 amazon-ssm-agent[2119]: New Seelog Logger Creation Complete
Jul 1 23:59:20.324697 amazon-ssm-agent[2119]: 2024/07/01 23:59:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 1 23:59:20.331344 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 1 23:59:20.331344 amazon-ssm-agent[2119]: 2024/07/01 23:59:20 processing appconfig overrides
Jul 1 23:59:20.333489 amazon-ssm-agent[2119]: 2024/07/01 23:59:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 1 23:59:20.334063 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 1 23:59:20.338330 amazon-ssm-agent[2119]: 2024/07/01 23:59:20 processing appconfig overrides
Jul 1 23:59:20.338330 amazon-ssm-agent[2119]: 2024/07/01 23:59:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 1 23:59:20.338330 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 1 23:59:20.338330 amazon-ssm-agent[2119]: 2024/07/01 23:59:20 processing appconfig overrides
Jul 1 23:59:20.339411 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO Proxy environment variables:
Jul 1 23:59:20.351349 amazon-ssm-agent[2119]: 2024/07/01 23:59:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 1 23:59:20.351349 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 1 23:59:20.351547 amazon-ssm-agent[2119]: 2024/07/01 23:59:20 processing appconfig overrides
Jul 1 23:59:20.399642 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 1 23:59:20.418554 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 1 23:59:20.425763 systemd[1]: Started sshd@0-172.31.26.136:22-147.75.109.163:41806.service - OpenSSH per-connection server daemon (147.75.109.163:41806).
Jul 1 23:59:20.456366 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO https_proxy:
Jul 1 23:59:20.485561 systemd[1]: issuegen.service: Deactivated successfully.
Jul 1 23:59:20.486462 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 1 23:59:20.506173 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 1 23:59:20.553473 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO http_proxy:
Jul 1 23:59:20.585423 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 1 23:59:20.602636 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 1 23:59:20.622655 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 1 23:59:20.628950 systemd[1]: Reached target getty.target - Login Prompts.
Jul 1 23:59:20.653732 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO no_proxy:
Jul 1 23:59:20.755153 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO Checking if agent identity type OnPrem can be assumed
Jul 1 23:59:20.782423 sshd[2214]: Accepted publickey for core from 147.75.109.163 port 41806 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 1 23:59:20.787975 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 1 23:59:20.825968 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 1 23:59:20.840721 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 1 23:59:20.854576 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO Checking if agent identity type EC2 can be assumed
Jul 1 23:59:20.855958 systemd-logind[1992]: New session 1 of user core.
Jul 1 23:59:20.908237 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 1 23:59:20.929624 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 1 23:59:20.953342 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 1 23:59:20.965988 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO Agent will take identity from EC2
Jul 1 23:59:21.057582 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 1 23:59:21.158335 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 1 23:59:21.256351 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 1 23:59:21.357441 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jul 1 23:59:21.383799 systemd[2226]: Queued start job for default target default.target.
Jul 1 23:59:21.391615 systemd[2226]: Created slice app.slice - User Application Slice.
Jul 1 23:59:21.391694 systemd[2226]: Reached target paths.target - Paths.
Jul 1 23:59:21.391731 systemd[2226]: Reached target timers.target - Timers.
Jul 1 23:59:21.404569 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 1 23:59:21.447777 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 1 23:59:21.448051 systemd[2226]: Reached target sockets.target - Sockets.
Jul 1 23:59:21.448087 systemd[2226]: Reached target basic.target - Basic System.
Jul 1 23:59:21.448178 systemd[2226]: Reached target default.target - Main User Target.
Jul 1 23:59:21.448243 systemd[2226]: Startup finished in 467ms.
Jul 1 23:59:21.448978 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 1 23:59:21.453480 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jul 1 23:59:21.462624 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 1 23:59:21.553835 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO [amazon-ssm-agent] Starting Core Agent
Jul 1 23:59:21.599522 tar[2014]: linux-arm64/LICENSE
Jul 1 23:59:21.600920 tar[2014]: linux-arm64/README.md
Jul 1 23:59:21.637923 systemd[1]: Started sshd@1-172.31.26.136:22-147.75.109.163:41812.service - OpenSSH per-connection server daemon (147.75.109.163:41812).
Jul 1 23:59:21.652442 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 1 23:59:21.661341 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jul 1 23:59:21.761921 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO [Registrar] Starting registrar module
Jul 1 23:59:21.862401 amazon-ssm-agent[2119]: 2024-07-01 23:59:20 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jul 1 23:59:21.908203 sshd[2241]: Accepted publickey for core from 147.75.109.163 port 41812 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 1 23:59:21.913646 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 1 23:59:21.931472 systemd-logind[1992]: New session 2 of user core.
Jul 1 23:59:21.936666 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 1 23:59:22.004628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 23:59:22.008637 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 1 23:59:22.011996 systemd[1]: Startup finished in 1.256s (kernel) + 10.246s (initrd) + 9.015s (userspace) = 20.518s.
Jul 1 23:59:22.014779 amazon-ssm-agent[2119]: 2024-07-01 23:59:22 INFO [EC2Identity] EC2 registration was successful.
Jul 1 23:59:22.045861 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 23:59:22.061445 amazon-ssm-agent[2119]: 2024-07-01 23:59:22 INFO [CredentialRefresher] credentialRefresher has started
Jul 1 23:59:22.061445 amazon-ssm-agent[2119]: 2024-07-01 23:59:22 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 1 23:59:22.061445 amazon-ssm-agent[2119]: 2024-07-01 23:59:22 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 1 23:59:22.081010 ntpd[1986]: Listen normally on 6 eth0 [fe80::40c:78ff:fe94:3d0b%2]:123
Jul 1 23:59:22.081650 ntpd[1986]: 1 Jul 23:59:22 ntpd[1986]: Listen normally on 6 eth0 [fe80::40c:78ff:fe94:3d0b%2]:123
Jul 1 23:59:22.091954 sshd[2241]: pam_unix(sshd:session): session closed for user core
Jul 1 23:59:22.103544 systemd[1]: sshd@1-172.31.26.136:22-147.75.109.163:41812.service: Deactivated successfully.
Jul 1 23:59:22.108967 systemd[1]: session-2.scope: Deactivated successfully.
Jul 1 23:59:22.111884 systemd-logind[1992]: Session 2 logged out. Waiting for processes to exit.
Jul 1 23:59:22.116200 amazon-ssm-agent[2119]: 2024-07-01 23:59:22 INFO [CredentialRefresher] Next credential rotation will be in 31.916658515766667 minutes
Jul 1 23:59:22.132966 systemd[1]: Started sshd@2-172.31.26.136:22-147.75.109.163:41822.service - OpenSSH per-connection server daemon (147.75.109.163:41822).
Jul 1 23:59:22.137410 systemd-logind[1992]: Removed session 2.
Jul 1 23:59:22.319055 sshd[2259]: Accepted publickey for core from 147.75.109.163 port 41822 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 1 23:59:22.322133 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 1 23:59:22.333887 systemd-logind[1992]: New session 3 of user core.
Jul 1 23:59:22.343685 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 1 23:59:22.469634 sshd[2259]: pam_unix(sshd:session): session closed for user core
Jul 1 23:59:22.479382 systemd[1]: sshd@2-172.31.26.136:22-147.75.109.163:41822.service: Deactivated successfully.
Jul 1 23:59:22.483692 systemd[1]: session-3.scope: Deactivated successfully.
Jul 1 23:59:22.485702 systemd-logind[1992]: Session 3 logged out. Waiting for processes to exit.
Jul 1 23:59:22.488649 systemd-logind[1992]: Removed session 3.
Jul 1 23:59:22.506896 systemd[1]: Started sshd@3-172.31.26.136:22-147.75.109.163:41826.service - OpenSSH per-connection server daemon (147.75.109.163:41826).
Jul 1 23:59:22.693095 sshd[2270]: Accepted publickey for core from 147.75.109.163 port 41826 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 1 23:59:22.694853 sshd[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 1 23:59:22.705430 systemd-logind[1992]: New session 4 of user core.
Jul 1 23:59:22.711625 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 1 23:59:22.847610 sshd[2270]: pam_unix(sshd:session): session closed for user core
Jul 1 23:59:22.855966 systemd[1]: sshd@3-172.31.26.136:22-147.75.109.163:41826.service: Deactivated successfully.
Jul 1 23:59:22.856172 systemd-logind[1992]: Session 4 logged out. Waiting for processes to exit.
Jul 1 23:59:22.861846 systemd[1]: session-4.scope: Deactivated successfully.
Jul 1 23:59:22.866952 systemd-logind[1992]: Removed session 4.
Jul 1 23:59:22.883381 kubelet[2250]: E0701 23:59:22.880451 2250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 23:59:22.888957 systemd[1]: Started sshd@4-172.31.26.136:22-147.75.109.163:35436.service - OpenSSH per-connection server daemon (147.75.109.163:35436).
Jul 1 23:59:22.891271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 23:59:22.891669 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 23:59:22.892606 systemd[1]: kubelet.service: Consumed 1.442s CPU time.
Jul 1 23:59:23.078202 sshd[2280]: Accepted publickey for core from 147.75.109.163 port 35436 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 1 23:59:23.081839 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 1 23:59:23.094853 systemd-logind[1992]: New session 5 of user core.
Jul 1 23:59:23.101659 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 1 23:59:23.102720 amazon-ssm-agent[2119]: 2024-07-01 23:59:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 1 23:59:23.201585 amazon-ssm-agent[2119]: 2024-07-01 23:59:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2284) started
Jul 1 23:59:23.229216 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 1 23:59:23.230459 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 1 23:59:23.248916 sudo[2289]: pam_unix(sudo:session): session closed for user root
Jul 1 23:59:23.274060 sshd[2280]: pam_unix(sshd:session): session closed for user core
Jul 1 23:59:23.286066 systemd[1]: sshd@4-172.31.26.136:22-147.75.109.163:35436.service: Deactivated successfully.
Jul 1 23:59:23.294069 systemd[1]: session-5.scope: Deactivated successfully.
Jul 1 23:59:23.299104 systemd-logind[1992]: Session 5 logged out. Waiting for processes to exit.
Jul 1 23:59:23.301979 amazon-ssm-agent[2119]: 2024-07-01 23:59:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 1 23:59:23.328839 systemd[1]: Started sshd@5-172.31.26.136:22-147.75.109.163:35448.service - OpenSSH per-connection server daemon (147.75.109.163:35448).
Jul 1 23:59:23.332467 systemd-logind[1992]: Removed session 5.
Jul 1 23:59:23.523385 sshd[2296]: Accepted publickey for core from 147.75.109.163 port 35448 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 1 23:59:23.526593 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 1 23:59:23.534008 systemd-logind[1992]: New session 6 of user core.
Jul 1 23:59:23.542611 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 1 23:59:23.647810 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 1 23:59:23.648970 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 1 23:59:23.656631 sudo[2303]: pam_unix(sudo:session): session closed for user root
Jul 1 23:59:23.667079 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 1 23:59:23.667678 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 1 23:59:23.695824 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 1 23:59:23.699409 auditctl[2306]: No rules
Jul 1 23:59:23.700125 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 1 23:59:23.700569 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 1 23:59:23.705806 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 1 23:59:23.769538 augenrules[2324]: No rules
Jul 1 23:59:23.772424 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 1 23:59:23.776657 sudo[2302]: pam_unix(sudo:session): session closed for user root
Jul 1 23:59:23.800678 sshd[2296]: pam_unix(sshd:session): session closed for user core
Jul 1 23:59:23.807171 systemd-logind[1992]: Session 6 logged out. Waiting for processes to exit.
Jul 1 23:59:23.808866 systemd[1]: sshd@5-172.31.26.136:22-147.75.109.163:35448.service: Deactivated successfully.
Jul 1 23:59:23.812812 systemd[1]: session-6.scope: Deactivated successfully.
Jul 1 23:59:23.815048 systemd-logind[1992]: Removed session 6.
Jul 1 23:59:23.850844 systemd[1]: Started sshd@6-172.31.26.136:22-147.75.109.163:35450.service - OpenSSH per-connection server daemon (147.75.109.163:35450).
Jul 1 23:59:24.030727 sshd[2332]: Accepted publickey for core from 147.75.109.163 port 35450 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM
Jul 1 23:59:24.033708 sshd[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 1 23:59:24.044434 systemd-logind[1992]: New session 7 of user core.
Jul 1 23:59:24.053636 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 1 23:59:24.164968 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 1 23:59:24.165784 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 1 23:59:24.392842 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 1 23:59:24.397135 (dockerd)[2345]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 1 23:59:24.863370 dockerd[2345]: time="2024-07-01T23:59:24.863171983Z" level=info msg="Starting up"
Jul 1 23:59:26.002534 dockerd[2345]: time="2024-07-01T23:59:26.002224372Z" level=info msg="Loading containers: start."
Jul 1 23:59:25.644456 systemd-resolved[1928]: Clock change detected. Flushing caches.
Jul 1 23:59:25.664097 systemd-journald[1567]: Time jumped backwards, rotating.
Jul 1 23:59:25.794691 kernel: Initializing XFRM netlink socket
Jul 1 23:59:25.850998 (udev-worker)[2362]: Network interface NamePolicy= disabled on kernel command line.
Jul 1 23:59:25.942630 systemd-networkd[1927]: docker0: Link UP
Jul 1 23:59:25.971516 dockerd[2345]: time="2024-07-01T23:59:25.971460965Z" level=info msg="Loading containers: done."
Jul 1 23:59:26.173246 dockerd[2345]: time="2024-07-01T23:59:26.173150258Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 1 23:59:26.173519 dockerd[2345]: time="2024-07-01T23:59:26.173465150Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 1 23:59:26.173749 dockerd[2345]: time="2024-07-01T23:59:26.173712350Z" level=info msg="Daemon has completed initialization"
Jul 1 23:59:26.243438 dockerd[2345]: time="2024-07-01T23:59:26.241745546Z" level=info msg="API listen on /run/docker.sock"
Jul 1 23:59:26.250846 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 1 23:59:27.350741 containerd[2020]: time="2024-07-01T23:59:27.350229748Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul 1 23:59:28.048784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669880152.mount: Deactivated successfully.
Jul 1 23:59:31.077778 containerd[2020]: time="2024-07-01T23:59:31.077688402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:31.079954 containerd[2020]: time="2024-07-01T23:59:31.079872990Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940430"
Jul 1 23:59:31.081297 containerd[2020]: time="2024-07-01T23:59:31.081199411Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:31.087335 containerd[2020]: time="2024-07-01T23:59:31.087239551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:31.090148 containerd[2020]: time="2024-07-01T23:59:31.089815999Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 3.739517563s"
Jul 1 23:59:31.090148 containerd[2020]: time="2024-07-01T23:59:31.089891371Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\""
Jul 1 23:59:31.132232 containerd[2020]: time="2024-07-01T23:59:31.131812039Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul 1 23:59:32.560493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 1 23:59:32.570060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 23:59:33.314110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 23:59:33.326611 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 23:59:33.434699 kubelet[2554]: E0701 23:59:33.434431 2554 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 23:59:33.441150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 23:59:33.441499 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 23:59:34.113156 containerd[2020]: time="2024-07-01T23:59:34.113094646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:34.115724 containerd[2020]: time="2024-07-01T23:59:34.115647814Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881371"
Jul 1 23:59:34.116943 containerd[2020]: time="2024-07-01T23:59:34.116898874Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:34.123018 containerd[2020]: time="2024-07-01T23:59:34.122941318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:34.125688 containerd[2020]: time="2024-07-01T23:59:34.125551258Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 2.993670291s"
Jul 1 23:59:34.125858 containerd[2020]: time="2024-07-01T23:59:34.125642878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\""
Jul 1 23:59:34.167734 containerd[2020]: time="2024-07-01T23:59:34.167682298Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul 1 23:59:35.971736 containerd[2020]: time="2024-07-01T23:59:35.971418747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:35.973924 containerd[2020]: time="2024-07-01T23:59:35.973838967Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155688"
Jul 1 23:59:35.975471 containerd[2020]: time="2024-07-01T23:59:35.975351111Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:35.981841 containerd[2020]: time="2024-07-01T23:59:35.981765315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:35.984950 containerd[2020]: time="2024-07-01T23:59:35.984711339Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 1.816734237s"
Jul 1 23:59:35.984950 containerd[2020]: time="2024-07-01T23:59:35.984784647Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\""
Jul 1 23:59:36.026884 containerd[2020]: time="2024-07-01T23:59:36.026733599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul 1 23:59:38.111605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159979826.mount: Deactivated successfully.
Jul 1 23:59:38.711076 containerd[2020]: time="2024-07-01T23:59:38.711008968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:38.713311 containerd[2020]: time="2024-07-01T23:59:38.713238328Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634092"
Jul 1 23:59:38.716682 containerd[2020]: time="2024-07-01T23:59:38.714891304Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:38.719148 containerd[2020]: time="2024-07-01T23:59:38.719084140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:38.720809 containerd[2020]: time="2024-07-01T23:59:38.720740488Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 2.693939521s"
Jul 1 23:59:38.720809 containerd[2020]: time="2024-07-01T23:59:38.720798916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\""
Jul 1 23:59:38.764690 containerd[2020]: time="2024-07-01T23:59:38.764615501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 1 23:59:39.500338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723762814.mount: Deactivated successfully.
Jul 1 23:59:40.992095 containerd[2020]: time="2024-07-01T23:59:40.992027276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:40.994464 containerd[2020]: time="2024-07-01T23:59:40.994380500Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jul 1 23:59:40.995190 containerd[2020]: time="2024-07-01T23:59:40.995137556Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:41.003482 containerd[2020]: time="2024-07-01T23:59:41.003419224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:41.006185 containerd[2020]: time="2024-07-01T23:59:41.006116476Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.241415283s"
Jul 1 23:59:41.006365 containerd[2020]: time="2024-07-01T23:59:41.006329980Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jul 1 23:59:41.048983 containerd[2020]: time="2024-07-01T23:59:41.048932908Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 1 23:59:41.627425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964076282.mount: Deactivated successfully.
Jul 1 23:59:41.639003 containerd[2020]: time="2024-07-01T23:59:41.638912383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:41.642292 containerd[2020]: time="2024-07-01T23:59:41.642198631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Jul 1 23:59:41.644046 containerd[2020]: time="2024-07-01T23:59:41.643900723Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:41.650416 containerd[2020]: time="2024-07-01T23:59:41.648986719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 23:59:41.651304 containerd[2020]: time="2024-07-01T23:59:41.651253063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 602.082759ms"
Jul 1 23:59:41.651459 containerd[2020]: time="2024-07-01T23:59:41.651427195Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 1 23:59:41.691159 containerd[2020]: time="2024-07-01T23:59:41.691008883Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 1 23:59:42.405478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3926219465.mount: Deactivated successfully.
Jul 1 23:59:43.560524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 1 23:59:43.571026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 23:59:43.916025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 23:59:43.920677 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 1 23:59:43.998123 kubelet[2696]: E0701 23:59:43.998032 2696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 1 23:59:44.002789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 1 23:59:44.003243 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 1 23:59:47.562668 containerd[2020]: time="2024-07-01T23:59:47.562555392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:47.571787 containerd[2020]: time="2024-07-01T23:59:47.571698480Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jul 1 23:59:47.581273 containerd[2020]: time="2024-07-01T23:59:47.581196096Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:47.595514 containerd[2020]: time="2024-07-01T23:59:47.595423405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 23:59:47.599386 containerd[2020]: time="2024-07-01T23:59:47.598146505Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.906735898s" Jul 1 23:59:47.599719 containerd[2020]: time="2024-07-01T23:59:47.599542273Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jul 1 23:59:49.702623 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 1 23:59:53.048703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:53.064140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 23:59:53.115105 systemd[1]: Reloading requested from client PID 2778 ('systemctl') (unit session-7.scope)... 
Jul 1 23:59:53.115297 systemd[1]: Reloading... Jul 1 23:59:53.302703 zram_generator::config[2819]: No configuration found. Jul 1 23:59:53.527910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 23:59:53.697563 systemd[1]: Reloading finished in 581 ms. Jul 1 23:59:53.780105 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 1 23:59:53.780351 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 1 23:59:53.781796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:53.795297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 23:59:54.924175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 23:59:54.942231 (kubelet)[2876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 23:59:55.020204 kubelet[2876]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 23:59:55.020204 kubelet[2876]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 1 23:59:55.020204 kubelet[2876]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 1 23:59:55.022528 kubelet[2876]: I0701 23:59:55.022414 2876 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 23:59:56.378705 kubelet[2876]: I0701 23:59:56.378615 2876 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 1 23:59:56.378705 kubelet[2876]: I0701 23:59:56.378673 2876 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 23:59:56.379326 kubelet[2876]: I0701 23:59:56.379015 2876 server.go:927] "Client rotation is on, will bootstrap in background" Jul 1 23:59:56.410707 kubelet[2876]: E0701 23:59:56.410584 2876 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:56.411334 kubelet[2876]: I0701 23:59:56.411139 2876 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 23:59:56.423927 kubelet[2876]: I0701 23:59:56.423879 2876 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 23:59:56.426165 kubelet[2876]: I0701 23:59:56.426084 2876 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 23:59:56.426449 kubelet[2876]: I0701 23:59:56.426156 2876 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 1 23:59:56.426638 kubelet[2876]: I0701 23:59:56.426471 2876 topology_manager.go:138] "Creating topology manager with none policy" Jul 1 
23:59:56.426638 kubelet[2876]: I0701 23:59:56.426493 2876 container_manager_linux.go:301] "Creating device plugin manager" Jul 1 23:59:56.426794 kubelet[2876]: I0701 23:59:56.426779 2876 state_mem.go:36] "Initialized new in-memory state store" Jul 1 23:59:56.428270 kubelet[2876]: I0701 23:59:56.428198 2876 kubelet.go:400] "Attempting to sync node with API server" Jul 1 23:59:56.428270 kubelet[2876]: I0701 23:59:56.428245 2876 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 23:59:56.428446 kubelet[2876]: I0701 23:59:56.428372 2876 kubelet.go:312] "Adding apiserver pod source" Jul 1 23:59:56.428446 kubelet[2876]: I0701 23:59:56.428419 2876 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 23:59:56.430757 kubelet[2876]: W0701 23:59:56.430160 2876 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-136&limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:56.430757 kubelet[2876]: E0701 23:59:56.430243 2876 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-136&limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:56.430757 kubelet[2876]: W0701 23:59:56.430347 2876 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:56.430757 kubelet[2876]: E0701 23:59:56.430401 2876 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.26.136:6443: connect: connection refused Jul 1 23:59:56.430757 kubelet[2876]: I0701 23:59:56.430581 2876 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 1 23:59:56.431403 kubelet[2876]: I0701 23:59:56.431378 2876 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 1 23:59:56.431592 kubelet[2876]: W0701 23:59:56.431573 2876 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 1 23:59:56.435825 kubelet[2876]: I0701 23:59:56.435784 2876 server.go:1264] "Started kubelet" Jul 1 23:59:56.440002 kubelet[2876]: I0701 23:59:56.439965 2876 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 23:59:56.449138 kubelet[2876]: I0701 23:59:56.448672 2876 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 23:59:56.451091 kubelet[2876]: I0701 23:59:56.450794 2876 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 1 23:59:56.451091 kubelet[2876]: I0701 23:59:56.450827 2876 server.go:455] "Adding debug handlers to kubelet server" Jul 1 23:59:56.452989 kubelet[2876]: I0701 23:59:56.452953 2876 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 1 23:59:56.456311 kubelet[2876]: I0701 23:59:56.453026 2876 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 23:59:56.456311 kubelet[2876]: I0701 23:59:56.454912 2876 reconciler.go:26] "Reconciler: start to sync state" Jul 1 23:59:56.456499 kubelet[2876]: I0701 23:59:56.456345 2876 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 23:59:56.458357 kubelet[2876]: W0701 23:59:56.458100 2876 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.26.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:56.458357 kubelet[2876]: E0701 23:59:56.458205 2876 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:56.458357 kubelet[2876]: E0701 23:59:56.458128 2876 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.136:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.136:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-136.17de3c4a0c6bad68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-136,UID:ip-172-31-26-136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-136,},FirstTimestamp:2024-07-01 23:59:56.43573796 +0000 UTC m=+1.487398124,LastTimestamp:2024-07-01 23:59:56.43573796 +0000 UTC m=+1.487398124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-136,}" Jul 1 23:59:56.458357 kubelet[2876]: E0701 23:59:56.458322 2876 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-136?timeout=10s\": dial tcp 172.31.26.136:6443: connect: connection refused" interval="200ms" Jul 1 23:59:56.459305 kubelet[2876]: I0701 23:59:56.458942 2876 factory.go:221] Registration of the systemd container factory successfully Jul 1 23:59:56.459305 kubelet[2876]: I0701 23:59:56.459111 2876 factory.go:219] Registration of the crio container 
factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 23:59:56.463749 kubelet[2876]: E0701 23:59:56.463492 2876 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 23:59:56.465310 kubelet[2876]: I0701 23:59:56.465162 2876 factory.go:221] Registration of the containerd container factory successfully Jul 1 23:59:56.498051 kubelet[2876]: I0701 23:59:56.497976 2876 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 1 23:59:56.501345 kubelet[2876]: I0701 23:59:56.501275 2876 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 1 23:59:56.501488 kubelet[2876]: I0701 23:59:56.501398 2876 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 1 23:59:56.501488 kubelet[2876]: I0701 23:59:56.501429 2876 kubelet.go:2337] "Starting kubelet main sync loop" Jul 1 23:59:56.501585 kubelet[2876]: E0701 23:59:56.501495 2876 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 23:59:56.503175 kubelet[2876]: W0701 23:59:56.502501 2876 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:56.503175 kubelet[2876]: E0701 23:59:56.502584 2876 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:56.507430 kubelet[2876]: I0701 23:59:56.507394 2876 cpu_manager.go:214] "Starting 
CPU manager" policy="none" Jul 1 23:59:56.507619 kubelet[2876]: I0701 23:59:56.507596 2876 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 1 23:59:56.507773 kubelet[2876]: I0701 23:59:56.507754 2876 state_mem.go:36] "Initialized new in-memory state store" Jul 1 23:59:56.515945 kubelet[2876]: I0701 23:59:56.515882 2876 policy_none.go:49] "None policy: Start" Jul 1 23:59:56.517203 kubelet[2876]: I0701 23:59:56.517107 2876 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 1 23:59:56.517328 kubelet[2876]: I0701 23:59:56.517234 2876 state_mem.go:35] "Initializing new in-memory state store" Jul 1 23:59:56.549463 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 1 23:59:56.553147 kubelet[2876]: I0701 23:59:56.553096 2876 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-136" Jul 1 23:59:56.553620 kubelet[2876]: E0701 23:59:56.553568 2876 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.136:6443/api/v1/nodes\": dial tcp 172.31.26.136:6443: connect: connection refused" node="ip-172-31-26-136" Jul 1 23:59:56.567630 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 1 23:59:56.574590 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 1 23:59:56.585076 kubelet[2876]: I0701 23:59:56.584313 2876 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 1 23:59:56.585076 kubelet[2876]: I0701 23:59:56.584627 2876 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 23:59:56.585076 kubelet[2876]: I0701 23:59:56.584833 2876 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 23:59:56.589742 kubelet[2876]: E0701 23:59:56.589691 2876 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-136\" not found" Jul 1 23:59:56.602349 kubelet[2876]: I0701 23:59:56.602283 2876 topology_manager.go:215] "Topology Admit Handler" podUID="06250786a74470180f493877c911cce7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-136" Jul 1 23:59:56.604503 kubelet[2876]: I0701 23:59:56.604437 2876 topology_manager.go:215] "Topology Admit Handler" podUID="7e991c222dfb1ecd75ae308a7ae4363d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-136" Jul 1 23:59:56.607178 kubelet[2876]: I0701 23:59:56.606925 2876 topology_manager.go:215] "Topology Admit Handler" podUID="10075af84c62bab7a0082cbd83c6265e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-136" Jul 1 23:59:56.619804 systemd[1]: Created slice kubepods-burstable-pod06250786a74470180f493877c911cce7.slice - libcontainer container kubepods-burstable-pod06250786a74470180f493877c911cce7.slice. Jul 1 23:59:56.633731 systemd[1]: Created slice kubepods-burstable-pod7e991c222dfb1ecd75ae308a7ae4363d.slice - libcontainer container kubepods-burstable-pod7e991c222dfb1ecd75ae308a7ae4363d.slice. Jul 1 23:59:56.647502 systemd[1]: Created slice kubepods-burstable-pod10075af84c62bab7a0082cbd83c6265e.slice - libcontainer container kubepods-burstable-pod10075af84c62bab7a0082cbd83c6265e.slice. 
Jul 1 23:59:56.659614 kubelet[2876]: E0701 23:59:56.659554 2876 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-136?timeout=10s\": dial tcp 172.31.26.136:6443: connect: connection refused" interval="400ms" Jul 1 23:59:56.755566 kubelet[2876]: I0701 23:59:56.755502 2876 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-136" Jul 1 23:59:56.756056 kubelet[2876]: E0701 23:59:56.756011 2876 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.136:6443/api/v1/nodes\": dial tcp 172.31.26.136:6443: connect: connection refused" node="ip-172-31-26-136" Jul 1 23:59:56.757549 kubelet[2876]: I0701 23:59:56.757122 2876 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 1 23:59:56.757549 kubelet[2876]: I0701 23:59:56.757181 2876 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 1 23:59:56.757549 kubelet[2876]: I0701 23:59:56.757220 2876 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " 
pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 1 23:59:56.757549 kubelet[2876]: I0701 23:59:56.757259 2876 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06250786a74470180f493877c911cce7-ca-certs\") pod \"kube-apiserver-ip-172-31-26-136\" (UID: \"06250786a74470180f493877c911cce7\") " pod="kube-system/kube-apiserver-ip-172-31-26-136" Jul 1 23:59:56.757549 kubelet[2876]: I0701 23:59:56.757296 2876 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06250786a74470180f493877c911cce7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-136\" (UID: \"06250786a74470180f493877c911cce7\") " pod="kube-system/kube-apiserver-ip-172-31-26-136" Jul 1 23:59:56.757861 kubelet[2876]: I0701 23:59:56.757330 2876 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 1 23:59:56.757861 kubelet[2876]: I0701 23:59:56.757361 2876 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 1 23:59:56.757861 kubelet[2876]: I0701 23:59:56.757392 2876 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10075af84c62bab7a0082cbd83c6265e-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-136\" 
(UID: \"10075af84c62bab7a0082cbd83c6265e\") " pod="kube-system/kube-scheduler-ip-172-31-26-136" Jul 1 23:59:56.757861 kubelet[2876]: I0701 23:59:56.757426 2876 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06250786a74470180f493877c911cce7-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-136\" (UID: \"06250786a74470180f493877c911cce7\") " pod="kube-system/kube-apiserver-ip-172-31-26-136" Jul 1 23:59:56.929760 containerd[2020]: time="2024-07-01T23:59:56.929310479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-136,Uid:06250786a74470180f493877c911cce7,Namespace:kube-system,Attempt:0,}" Jul 1 23:59:56.944025 containerd[2020]: time="2024-07-01T23:59:56.943566203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-136,Uid:7e991c222dfb1ecd75ae308a7ae4363d,Namespace:kube-system,Attempt:0,}" Jul 1 23:59:56.952887 containerd[2020]: time="2024-07-01T23:59:56.952363751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-136,Uid:10075af84c62bab7a0082cbd83c6265e,Namespace:kube-system,Attempt:0,}" Jul 1 23:59:57.061026 kubelet[2876]: E0701 23:59:57.060945 2876 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-136?timeout=10s\": dial tcp 172.31.26.136:6443: connect: connection refused" interval="800ms" Jul 1 23:59:57.158547 kubelet[2876]: I0701 23:59:57.158487 2876 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-136" Jul 1 23:59:57.159067 kubelet[2876]: E0701 23:59:57.159021 2876 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.136:6443/api/v1/nodes\": dial tcp 172.31.26.136:6443: connect: connection refused" node="ip-172-31-26-136" Jul 1 23:59:57.324508 
kubelet[2876]: W0701 23:59:57.304408 2876 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:57.324508 kubelet[2876]: E0701 23:59:57.304500 2876 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:57.566826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733422178.mount: Deactivated successfully. Jul 1 23:59:57.577995 containerd[2020]: time="2024-07-01T23:59:57.577846630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 23:59:57.580425 containerd[2020]: time="2024-07-01T23:59:57.580260070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 1 23:59:57.582283 containerd[2020]: time="2024-07-01T23:59:57.581636518Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 23:59:57.583293 containerd[2020]: time="2024-07-01T23:59:57.583238770Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 23:59:57.586074 containerd[2020]: time="2024-07-01T23:59:57.586025290Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 1 23:59:57.588236 containerd[2020]: time="2024-07-01T23:59:57.587970370Z" level=info 
msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 1 23:59:57.588236 containerd[2020]: time="2024-07-01T23:59:57.588114910Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 23:59:57.600433 containerd[2020]: time="2024-07-01T23:59:57.600286498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 23:59:57.603637 containerd[2020]: time="2024-07-01T23:59:57.603355282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 659.618403ms" Jul 1 23:59:57.607104 containerd[2020]: time="2024-07-01T23:59:57.607017634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 654.504387ms" Jul 1 23:59:57.608683 containerd[2020]: time="2024-07-01T23:59:57.608472466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 679.013895ms" Jul 1 23:59:57.691466 kubelet[2876]: W0701 23:59:57.685865 2876 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-136&limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:57.691466 kubelet[2876]: E0701 23:59:57.685973 2876 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-136&limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:57.755101 kubelet[2876]: W0701 23:59:57.754909 2876 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:57.755101 kubelet[2876]: E0701 23:59:57.755032 2876 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:57.862719 kubelet[2876]: E0701 23:59:57.862486 2876 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-136?timeout=10s\": dial tcp 172.31.26.136:6443: connect: connection refused" interval="1.6s" Jul 1 23:59:57.919224 containerd[2020]: time="2024-07-01T23:59:57.918975804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 1 23:59:57.919553 containerd[2020]: time="2024-07-01T23:59:57.919341048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 1 23:59:57.920823 containerd[2020]: time="2024-07-01T23:59:57.919793628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 1 23:59:57.920823 containerd[2020]: time="2024-07-01T23:59:57.920569800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 1 23:59:57.926790 containerd[2020]: time="2024-07-01T23:59:57.926605152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 1 23:59:57.928306 containerd[2020]: time="2024-07-01T23:59:57.926757696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 1 23:59:57.928452 containerd[2020]: time="2024-07-01T23:59:57.928342320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 1 23:59:57.928510 containerd[2020]: time="2024-07-01T23:59:57.928438704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 1 23:59:57.930506 containerd[2020]: time="2024-07-01T23:59:57.930057984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 1 23:59:57.933213 containerd[2020]: time="2024-07-01T23:59:57.930176556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 1 23:59:57.933213 containerd[2020]: time="2024-07-01T23:59:57.932905992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 1 23:59:57.933213 containerd[2020]: time="2024-07-01T23:59:57.932984880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 1 23:59:57.962079 kubelet[2876]: I0701 23:59:57.961867 2876 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-136" Jul 1 23:59:57.962482 kubelet[2876]: E0701 23:59:57.962362 2876 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.136:6443/api/v1/nodes\": dial tcp 172.31.26.136:6443: connect: connection refused" node="ip-172-31-26-136" Jul 1 23:59:57.984006 systemd[1]: Started cri-containerd-3f25c8030ee44abfe0e76ac6ddf2ac8cffde0521555cb1f2917c9270d265dd01.scope - libcontainer container 3f25c8030ee44abfe0e76ac6ddf2ac8cffde0521555cb1f2917c9270d265dd01. Jul 1 23:59:57.992066 systemd[1]: Started cri-containerd-bd55a85c59a9980edf595aeea9992ecf470871b8e6c6996a74abb553d0ffcb22.scope - libcontainer container bd55a85c59a9980edf595aeea9992ecf470871b8e6c6996a74abb553d0ffcb22. Jul 1 23:59:58.010019 systemd[1]: Started cri-containerd-b911cbe6a3076cf87242c698ae16a845041994f5bbe84376f925cfc7b571ceb9.scope - libcontainer container b911cbe6a3076cf87242c698ae16a845041994f5bbe84376f925cfc7b571ceb9. 
Jul 1 23:59:58.089740 kubelet[2876]: W0701 23:59:58.089496 2876 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:58.089740 kubelet[2876]: E0701 23:59:58.089567 2876 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:58.120850 containerd[2020]: time="2024-07-01T23:59:58.120308493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-136,Uid:06250786a74470180f493877c911cce7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f25c8030ee44abfe0e76ac6ddf2ac8cffde0521555cb1f2917c9270d265dd01\"" Jul 1 23:59:58.129707 containerd[2020]: time="2024-07-01T23:59:58.129354909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-136,Uid:7e991c222dfb1ecd75ae308a7ae4363d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd55a85c59a9980edf595aeea9992ecf470871b8e6c6996a74abb553d0ffcb22\"" Jul 1 23:59:58.141492 containerd[2020]: time="2024-07-01T23:59:58.141391293Z" level=info msg="CreateContainer within sandbox \"bd55a85c59a9980edf595aeea9992ecf470871b8e6c6996a74abb553d0ffcb22\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 1 23:59:58.141966 containerd[2020]: time="2024-07-01T23:59:58.141899817Z" level=info msg="CreateContainer within sandbox \"3f25c8030ee44abfe0e76ac6ddf2ac8cffde0521555cb1f2917c9270d265dd01\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 1 23:59:58.158307 containerd[2020]: time="2024-07-01T23:59:58.158228349Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-136,Uid:10075af84c62bab7a0082cbd83c6265e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b911cbe6a3076cf87242c698ae16a845041994f5bbe84376f925cfc7b571ceb9\"" Jul 1 23:59:58.165514 containerd[2020]: time="2024-07-01T23:59:58.165222921Z" level=info msg="CreateContainer within sandbox \"b911cbe6a3076cf87242c698ae16a845041994f5bbe84376f925cfc7b571ceb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 1 23:59:58.187876 containerd[2020]: time="2024-07-01T23:59:58.187814241Z" level=info msg="CreateContainer within sandbox \"bd55a85c59a9980edf595aeea9992ecf470871b8e6c6996a74abb553d0ffcb22\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23ab9bd897739ad4ac2c7073b44a4bd868878bb66550202a9f1368e2fbe6d2f9\"" Jul 1 23:59:58.189227 containerd[2020]: time="2024-07-01T23:59:58.189142413Z" level=info msg="StartContainer for \"23ab9bd897739ad4ac2c7073b44a4bd868878bb66550202a9f1368e2fbe6d2f9\"" Jul 1 23:59:58.195699 containerd[2020]: time="2024-07-01T23:59:58.194970417Z" level=info msg="CreateContainer within sandbox \"3f25c8030ee44abfe0e76ac6ddf2ac8cffde0521555cb1f2917c9270d265dd01\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a263d28c8719e41fa772264fc764b549c0db0608069265331cf8dceb30d50004\"" Jul 1 23:59:58.197964 containerd[2020]: time="2024-07-01T23:59:58.197882301Z" level=info msg="StartContainer for \"a263d28c8719e41fa772264fc764b549c0db0608069265331cf8dceb30d50004\"" Jul 1 23:59:58.209504 containerd[2020]: time="2024-07-01T23:59:58.209440953Z" level=info msg="CreateContainer within sandbox \"b911cbe6a3076cf87242c698ae16a845041994f5bbe84376f925cfc7b571ceb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a8cf26c8d674ee2d414a27f1c934bb0fca9d0c5b2675280d3abad783363ff6f5\"" Jul 1 23:59:58.210959 containerd[2020]: time="2024-07-01T23:59:58.210900393Z" level=info msg="StartContainer for 
\"a8cf26c8d674ee2d414a27f1c934bb0fca9d0c5b2675280d3abad783363ff6f5\"" Jul 1 23:59:58.267197 systemd[1]: Started cri-containerd-23ab9bd897739ad4ac2c7073b44a4bd868878bb66550202a9f1368e2fbe6d2f9.scope - libcontainer container 23ab9bd897739ad4ac2c7073b44a4bd868878bb66550202a9f1368e2fbe6d2f9. Jul 1 23:59:58.298989 systemd[1]: Started cri-containerd-a8cf26c8d674ee2d414a27f1c934bb0fca9d0c5b2675280d3abad783363ff6f5.scope - libcontainer container a8cf26c8d674ee2d414a27f1c934bb0fca9d0c5b2675280d3abad783363ff6f5. Jul 1 23:59:58.313045 systemd[1]: Started cri-containerd-a263d28c8719e41fa772264fc764b549c0db0608069265331cf8dceb30d50004.scope - libcontainer container a263d28c8719e41fa772264fc764b549c0db0608069265331cf8dceb30d50004. Jul 1 23:59:58.466767 kubelet[2876]: E0701 23:59:58.439184 2876 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.136:6443: connect: connection refused Jul 1 23:59:58.467050 containerd[2020]: time="2024-07-01T23:59:58.466537259Z" level=info msg="StartContainer for \"23ab9bd897739ad4ac2c7073b44a4bd868878bb66550202a9f1368e2fbe6d2f9\" returns successfully" Jul 1 23:59:58.478904 containerd[2020]: time="2024-07-01T23:59:58.478821851Z" level=info msg="StartContainer for \"a263d28c8719e41fa772264fc764b549c0db0608069265331cf8dceb30d50004\" returns successfully" Jul 1 23:59:58.482876 containerd[2020]: time="2024-07-01T23:59:58.482794583Z" level=info msg="StartContainer for \"a8cf26c8d674ee2d414a27f1c934bb0fca9d0c5b2675280d3abad783363ff6f5\" returns successfully" Jul 1 23:59:59.566211 kubelet[2876]: I0701 23:59:59.565333 2876 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-136" Jul 2 00:00:02.143559 kubelet[2876]: E0702 00:00:02.142617 2876 nodelease.go:49] "Failed to get node when trying to set 
owner ref to the node lease" err="nodes \"ip-172-31-26-136\" not found" node="ip-172-31-26-136" Jul 2 00:00:02.234763 kubelet[2876]: E0702 00:00:02.234321 2876 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-136.17de3c4a0c6bad68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-136,UID:ip-172-31-26-136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-136,},FirstTimestamp:2024-07-01 23:59:56.43573796 +0000 UTC m=+1.487398124,LastTimestamp:2024-07-01 23:59:56.43573796 +0000 UTC m=+1.487398124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-136,}" Jul 2 00:00:02.300539 kubelet[2876]: I0702 00:00:02.300468 2876 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-136" Jul 2 00:00:02.330263 kubelet[2876]: E0702 00:00:02.330098 2876 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-136.17de3c4a0e12de29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-136,UID:ip-172-31-26-136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-26-136,},FirstTimestamp:2024-07-01 23:59:56.463472169 +0000 UTC m=+1.515132357,LastTimestamp:2024-07-01 23:59:56.463472169 +0000 UTC m=+1.515132357,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-136,}" Jul 2 00:00:02.433086 kubelet[2876]: I0702 00:00:02.432619 2876 apiserver.go:52] "Watching 
apiserver" Jul 2 00:00:02.450696 kubelet[2876]: E0702 00:00:02.450515 2876 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-136.17de3c4a10948045 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-136,UID:ip-172-31-26-136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-26-136 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-26-136,},FirstTimestamp:2024-07-01 23:59:56.505522245 +0000 UTC m=+1.557182409,LastTimestamp:2024-07-01 23:59:56.505522245 +0000 UTC m=+1.557182409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-136,}" Jul 2 00:00:02.456741 kubelet[2876]: I0702 00:00:02.456682 2876 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:00:03.098493 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 2 00:00:03.130218 systemd[1]: logrotate.service: Deactivated successfully. Jul 2 00:00:04.063746 update_engine[1996]: I0702 00:00:04.062707 1996 update_attempter.cc:509] Updating boot flags... Jul 2 00:00:04.209702 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3166) Jul 2 00:00:04.495008 systemd[1]: Reloading requested from client PID 3249 ('systemctl') (unit session-7.scope)... Jul 2 00:00:04.495042 systemd[1]: Reloading... Jul 2 00:00:04.783686 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3167) Jul 2 00:00:04.813693 zram_generator::config[3299]: No configuration found. 
Jul 2 00:00:05.242164 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:00:05.453791 systemd[1]: Reloading finished in 957 ms. Jul 2 00:00:05.658936 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:00:05.698229 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:00:05.698898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:00:05.698975 systemd[1]: kubelet.service: Consumed 2.252s CPU time, 112.0M memory peak, 0B memory swap peak. Jul 2 00:00:05.715910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:00:06.186046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:00:06.206318 (kubelet)[3434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:00:06.351834 kubelet[3434]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:00:06.351834 kubelet[3434]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:00:06.351834 kubelet[3434]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:00:06.351834 kubelet[3434]: I0702 00:00:06.350605 3434 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:00:06.365737 kubelet[3434]: I0702 00:00:06.364414 3434 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:00:06.365737 kubelet[3434]: I0702 00:00:06.364466 3434 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:00:06.366429 kubelet[3434]: I0702 00:00:06.366355 3434 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:00:06.371765 kubelet[3434]: I0702 00:00:06.371200 3434 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:00:06.375415 kubelet[3434]: I0702 00:00:06.375343 3434 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:00:06.391376 kubelet[3434]: I0702 00:00:06.391305 3434 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:00:06.392725 kubelet[3434]: I0702 00:00:06.392599 3434 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:00:06.393040 kubelet[3434]: I0702 00:00:06.392683 3434 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:00:06.393227 kubelet[3434]: I0702 00:00:06.393058 3434 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
00:00:06.393227 kubelet[3434]: I0702 00:00:06.393082 3434 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:00:06.393227 kubelet[3434]: I0702 00:00:06.393153 3434 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:00:06.393409 kubelet[3434]: I0702 00:00:06.393370 3434 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:00:06.393409 kubelet[3434]: I0702 00:00:06.393399 3434 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:00:06.393527 kubelet[3434]: I0702 00:00:06.393451 3434 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:00:06.393527 kubelet[3434]: I0702 00:00:06.393480 3434 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:00:06.399944 kubelet[3434]: I0702 00:00:06.399890 3434 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:00:06.400476 kubelet[3434]: I0702 00:00:06.400409 3434 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:00:06.404763 kubelet[3434]: I0702 00:00:06.402902 3434 server.go:1264] "Started kubelet" Jul 2 00:00:06.410750 kubelet[3434]: I0702 00:00:06.410371 3434 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:00:06.432842 kubelet[3434]: I0702 00:00:06.432747 3434 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:00:06.443808 kubelet[3434]: I0702 00:00:06.440632 3434 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:00:06.444012 kubelet[3434]: I0702 00:00:06.443977 3434 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:00:06.445345 kubelet[3434]: I0702 00:00:06.444605 3434 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:00:06.459694 kubelet[3434]: I0702 00:00:06.458908 3434 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:00:06.482363 kubelet[3434]: I0702 00:00:06.481198 3434 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:00:06.482549 kubelet[3434]: I0702 00:00:06.482518 3434 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:00:06.498427 kubelet[3434]: I0702 00:00:06.498365 3434 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:00:06.498596 kubelet[3434]: I0702 00:00:06.498538 3434 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:00:06.503244 kubelet[3434]: E0702 00:00:06.503153 3434 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:00:06.504416 kubelet[3434]: I0702 00:00:06.504219 3434 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:00:06.519177 kubelet[3434]: I0702 00:00:06.519017 3434 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:00:06.525885 kubelet[3434]: I0702 00:00:06.525819 3434 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:00:06.526009 kubelet[3434]: I0702 00:00:06.525927 3434 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:00:06.526009 kubelet[3434]: I0702 00:00:06.526000 3434 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:00:06.526714 kubelet[3434]: E0702 00:00:06.526116 3434 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:00:06.575739 kubelet[3434]: I0702 00:00:06.574351 3434 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-136" Jul 2 00:00:06.600970 kubelet[3434]: I0702 00:00:06.600091 3434 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-26-136" Jul 2 00:00:06.600970 kubelet[3434]: I0702 00:00:06.600268 3434 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-136" Jul 2 00:00:06.626577 kubelet[3434]: E0702 00:00:06.626519 3434 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:00:06.692438 kubelet[3434]: I0702 00:00:06.692392 3434 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:00:06.692670 kubelet[3434]: I0702 00:00:06.692619 3434 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:00:06.692811 kubelet[3434]: I0702 00:00:06.692790 3434 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:00:06.693345 kubelet[3434]: I0702 00:00:06.693305 3434 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:00:06.694784 kubelet[3434]: I0702 00:00:06.693492 3434 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:00:06.694784 kubelet[3434]: I0702 00:00:06.693552 3434 policy_none.go:49] "None policy: Start" Jul 2 00:00:06.699645 kubelet[3434]: I0702 00:00:06.698090 3434 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:00:06.699645 kubelet[3434]: I0702 
00:00:06.698146 3434 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:00:06.699645 kubelet[3434]: I0702 00:00:06.698457 3434 state_mem.go:75] "Updated machine memory state" Jul 2 00:00:06.714919 kubelet[3434]: I0702 00:00:06.714795 3434 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:00:06.719522 kubelet[3434]: I0702 00:00:06.717186 3434 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:00:06.727235 kubelet[3434]: I0702 00:00:06.725762 3434 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:00:06.827947 kubelet[3434]: I0702 00:00:06.827871 3434 topology_manager.go:215] "Topology Admit Handler" podUID="06250786a74470180f493877c911cce7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-136" Jul 2 00:00:06.828538 kubelet[3434]: I0702 00:00:06.828497 3434 topology_manager.go:215] "Topology Admit Handler" podUID="7e991c222dfb1ecd75ae308a7ae4363d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-136" Jul 2 00:00:06.829580 kubelet[3434]: I0702 00:00:06.829509 3434 topology_manager.go:215] "Topology Admit Handler" podUID="10075af84c62bab7a0082cbd83c6265e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-136" Jul 2 00:00:06.891861 kubelet[3434]: I0702 00:00:06.891805 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 2 00:00:06.892247 kubelet[3434]: I0702 00:00:06.892141 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 2 00:00:06.892924 kubelet[3434]: I0702 00:00:06.892497 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 2 00:00:06.892924 kubelet[3434]: I0702 00:00:06.892571 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06250786a74470180f493877c911cce7-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-136\" (UID: \"06250786a74470180f493877c911cce7\") " pod="kube-system/kube-apiserver-ip-172-31-26-136" Jul 2 00:00:06.892924 kubelet[3434]: I0702 00:00:06.892611 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 2 00:00:06.892924 kubelet[3434]: I0702 00:00:06.892648 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e991c222dfb1ecd75ae308a7ae4363d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-136\" (UID: \"7e991c222dfb1ecd75ae308a7ae4363d\") " pod="kube-system/kube-controller-manager-ip-172-31-26-136" Jul 2 00:00:06.892924 kubelet[3434]: I0702 00:00:06.892733 3434 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06250786a74470180f493877c911cce7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-136\" (UID: \"06250786a74470180f493877c911cce7\") " pod="kube-system/kube-apiserver-ip-172-31-26-136" Jul 2 00:00:06.893331 kubelet[3434]: I0702 00:00:06.892775 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10075af84c62bab7a0082cbd83c6265e-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-136\" (UID: \"10075af84c62bab7a0082cbd83c6265e\") " pod="kube-system/kube-scheduler-ip-172-31-26-136" Jul 2 00:00:06.893331 kubelet[3434]: I0702 00:00:06.892810 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06250786a74470180f493877c911cce7-ca-certs\") pod \"kube-apiserver-ip-172-31-26-136\" (UID: \"06250786a74470180f493877c911cce7\") " pod="kube-system/kube-apiserver-ip-172-31-26-136" Jul 2 00:00:07.396965 kubelet[3434]: I0702 00:00:07.396901 3434 apiserver.go:52] "Watching apiserver" Jul 2 00:00:07.482676 kubelet[3434]: I0702 00:00:07.482519 3434 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:00:07.653208 kubelet[3434]: I0702 00:00:07.653025 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-136" podStartSLOduration=1.653003192 podStartE2EDuration="1.653003192s" podCreationTimestamp="2024-07-02 00:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:07.652467248 +0000 UTC m=+1.434340964" watchObservedRunningTime="2024-07-02 00:00:07.653003192 +0000 UTC m=+1.434876872" Jul 2 00:00:07.696627 kubelet[3434]: I0702 
00:00:07.696305 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-136" podStartSLOduration=1.696282284 podStartE2EDuration="1.696282284s" podCreationTimestamp="2024-07-02 00:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:07.6731558 +0000 UTC m=+1.455029516" watchObservedRunningTime="2024-07-02 00:00:07.696282284 +0000 UTC m=+1.478155988" Jul 2 00:00:07.767351 kubelet[3434]: I0702 00:00:07.767250 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-136" podStartSLOduration=1.7672261169999999 podStartE2EDuration="1.767226117s" podCreationTimestamp="2024-07-02 00:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:07.701769728 +0000 UTC m=+1.483643432" watchObservedRunningTime="2024-07-02 00:00:07.767226117 +0000 UTC m=+1.549099821" Jul 2 00:00:11.718250 sudo[2335]: pam_unix(sudo:session): session closed for user root Jul 2 00:00:11.744080 sshd[2332]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:11.752253 systemd[1]: sshd@6-172.31.26.136:22-147.75.109.163:35450.service: Deactivated successfully. Jul 2 00:00:11.756564 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:00:11.757274 systemd[1]: session-7.scope: Consumed 9.060s CPU time, 134.6M memory peak, 0B memory swap peak. Jul 2 00:00:11.758504 systemd-logind[1992]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:00:11.762035 systemd-logind[1992]: Removed session 7. 
Jul 2 00:00:18.777308 kubelet[3434]: I0702 00:00:18.776748 3434 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:00:18.778213 containerd[2020]: time="2024-07-02T00:00:18.777317587Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:00:18.780209 kubelet[3434]: I0702 00:00:18.779027 3434 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:00:19.717540 kubelet[3434]: I0702 00:00:19.717249 3434 topology_manager.go:215] "Topology Admit Handler" podUID="cfa0c2dc-6247-4a39-a870-e6985bf63e02" podNamespace="kube-system" podName="kube-proxy-f44gj" Jul 2 00:00:19.755359 systemd[1]: Created slice kubepods-besteffort-podcfa0c2dc_6247_4a39_a870_e6985bf63e02.slice - libcontainer container kubepods-besteffort-podcfa0c2dc_6247_4a39_a870_e6985bf63e02.slice. Jul 2 00:00:19.777577 kubelet[3434]: I0702 00:00:19.777250 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdc6r\" (UniqueName: \"kubernetes.io/projected/cfa0c2dc-6247-4a39-a870-e6985bf63e02-kube-api-access-vdc6r\") pod \"kube-proxy-f44gj\" (UID: \"cfa0c2dc-6247-4a39-a870-e6985bf63e02\") " pod="kube-system/kube-proxy-f44gj" Jul 2 00:00:19.777577 kubelet[3434]: I0702 00:00:19.777341 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfa0c2dc-6247-4a39-a870-e6985bf63e02-xtables-lock\") pod \"kube-proxy-f44gj\" (UID: \"cfa0c2dc-6247-4a39-a870-e6985bf63e02\") " pod="kube-system/kube-proxy-f44gj" Jul 2 00:00:19.777577 kubelet[3434]: I0702 00:00:19.777392 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfa0c2dc-6247-4a39-a870-e6985bf63e02-lib-modules\") pod \"kube-proxy-f44gj\" (UID: 
\"cfa0c2dc-6247-4a39-a870-e6985bf63e02\") " pod="kube-system/kube-proxy-f44gj" Jul 2 00:00:19.777577 kubelet[3434]: I0702 00:00:19.777430 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cfa0c2dc-6247-4a39-a870-e6985bf63e02-kube-proxy\") pod \"kube-proxy-f44gj\" (UID: \"cfa0c2dc-6247-4a39-a870-e6985bf63e02\") " pod="kube-system/kube-proxy-f44gj" Jul 2 00:00:19.978161 kubelet[3434]: I0702 00:00:19.977996 3434 topology_manager.go:215] "Topology Admit Handler" podUID="02aaf882-2744-4d5d-93be-582bb60af93b" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-jv255" Jul 2 00:00:19.996586 systemd[1]: Created slice kubepods-besteffort-pod02aaf882_2744_4d5d_93be_582bb60af93b.slice - libcontainer container kubepods-besteffort-pod02aaf882_2744_4d5d_93be_582bb60af93b.slice. Jul 2 00:00:20.070769 containerd[2020]: time="2024-07-02T00:00:20.070706970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f44gj,Uid:cfa0c2dc-6247-4a39-a870-e6985bf63e02,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:20.079766 kubelet[3434]: I0702 00:00:20.079712 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02aaf882-2744-4d5d-93be-582bb60af93b-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-jv255\" (UID: \"02aaf882-2744-4d5d-93be-582bb60af93b\") " pod="tigera-operator/tigera-operator-76ff79f7fd-jv255" Jul 2 00:00:20.080873 kubelet[3434]: I0702 00:00:20.079780 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfvpx\" (UniqueName: \"kubernetes.io/projected/02aaf882-2744-4d5d-93be-582bb60af93b-kube-api-access-jfvpx\") pod \"tigera-operator-76ff79f7fd-jv255\" (UID: \"02aaf882-2744-4d5d-93be-582bb60af93b\") " pod="tigera-operator/tigera-operator-76ff79f7fd-jv255" Jul 2 00:00:20.118554 
containerd[2020]: time="2024-07-02T00:00:20.118367442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:20.118554 containerd[2020]: time="2024-07-02T00:00:20.118470330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:20.118984 containerd[2020]: time="2024-07-02T00:00:20.118518030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:20.118984 containerd[2020]: time="2024-07-02T00:00:20.118555290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:20.163009 systemd[1]: Started cri-containerd-dd02be1d70511e5e2210c7cc93e742de1ff4571fa0da9b7cd0f6c5585d87b64e.scope - libcontainer container dd02be1d70511e5e2210c7cc93e742de1ff4571fa0da9b7cd0f6c5585d87b64e. 
Jul 2 00:00:20.222474 containerd[2020]: time="2024-07-02T00:00:20.222084919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f44gj,Uid:cfa0c2dc-6247-4a39-a870-e6985bf63e02,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd02be1d70511e5e2210c7cc93e742de1ff4571fa0da9b7cd0f6c5585d87b64e\"" Jul 2 00:00:20.230934 containerd[2020]: time="2024-07-02T00:00:20.230574151Z" level=info msg="CreateContainer within sandbox \"dd02be1d70511e5e2210c7cc93e742de1ff4571fa0da9b7cd0f6c5585d87b64e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:00:20.263310 containerd[2020]: time="2024-07-02T00:00:20.263122591Z" level=info msg="CreateContainer within sandbox \"dd02be1d70511e5e2210c7cc93e742de1ff4571fa0da9b7cd0f6c5585d87b64e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e189ede5443e9b244cbc58ffce86334be6d5c6f5f096c44d6fdc4a8574d864ff\"" Jul 2 00:00:20.264294 containerd[2020]: time="2024-07-02T00:00:20.264191731Z" level=info msg="StartContainer for \"e189ede5443e9b244cbc58ffce86334be6d5c6f5f096c44d6fdc4a8574d864ff\"" Jul 2 00:00:20.307289 containerd[2020]: time="2024-07-02T00:00:20.306722335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-jv255,Uid:02aaf882-2744-4d5d-93be-582bb60af93b,Namespace:tigera-operator,Attempt:0,}" Jul 2 00:00:20.320019 systemd[1]: Started cri-containerd-e189ede5443e9b244cbc58ffce86334be6d5c6f5f096c44d6fdc4a8574d864ff.scope - libcontainer container e189ede5443e9b244cbc58ffce86334be6d5c6f5f096c44d6fdc4a8574d864ff. Jul 2 00:00:20.384989 containerd[2020]: time="2024-07-02T00:00:20.379554847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:20.384989 containerd[2020]: time="2024-07-02T00:00:20.382367467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:20.384989 containerd[2020]: time="2024-07-02T00:00:20.382408435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:20.384989 containerd[2020]: time="2024-07-02T00:00:20.383260567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:20.415741 containerd[2020]: time="2024-07-02T00:00:20.415569824Z" level=info msg="StartContainer for \"e189ede5443e9b244cbc58ffce86334be6d5c6f5f096c44d6fdc4a8574d864ff\" returns successfully" Jul 2 00:00:20.436046 systemd[1]: Started cri-containerd-466a3c5007b17d61d23c2fa91987238bb88619f5846034fbdef1b1a27181cb09.scope - libcontainer container 466a3c5007b17d61d23c2fa91987238bb88619f5846034fbdef1b1a27181cb09. Jul 2 00:00:20.546392 containerd[2020]: time="2024-07-02T00:00:20.546221516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-jv255,Uid:02aaf882-2744-4d5d-93be-582bb60af93b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"466a3c5007b17d61d23c2fa91987238bb88619f5846034fbdef1b1a27181cb09\"" Jul 2 00:00:20.553185 containerd[2020]: time="2024-07-02T00:00:20.552784088Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 00:00:22.719893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3801474980.mount: Deactivated successfully. 
Jul 2 00:00:23.434605 containerd[2020]: time="2024-07-02T00:00:23.434482367Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:23.437202 containerd[2020]: time="2024-07-02T00:00:23.437103911Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473602" Jul 2 00:00:23.439995 containerd[2020]: time="2024-07-02T00:00:23.439884707Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:23.445204 containerd[2020]: time="2024-07-02T00:00:23.445068443Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:23.447290 containerd[2020]: time="2024-07-02T00:00:23.447065627Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.894210931s" Jul 2 00:00:23.447290 containerd[2020]: time="2024-07-02T00:00:23.447138191Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 00:00:23.454729 containerd[2020]: time="2024-07-02T00:00:23.454630487Z" level=info msg="CreateContainer within sandbox \"466a3c5007b17d61d23c2fa91987238bb88619f5846034fbdef1b1a27181cb09\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 00:00:23.513846 containerd[2020]: time="2024-07-02T00:00:23.513765707Z" level=info msg="CreateContainer within sandbox 
\"466a3c5007b17d61d23c2fa91987238bb88619f5846034fbdef1b1a27181cb09\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9e14177cbbddbd5c66191b9a2235bb4156de315cb03e0def72ba9dd59871148b\"" Jul 2 00:00:23.514864 containerd[2020]: time="2024-07-02T00:00:23.514783511Z" level=info msg="StartContainer for \"9e14177cbbddbd5c66191b9a2235bb4156de315cb03e0def72ba9dd59871148b\"" Jul 2 00:00:23.567307 systemd[1]: Started cri-containerd-9e14177cbbddbd5c66191b9a2235bb4156de315cb03e0def72ba9dd59871148b.scope - libcontainer container 9e14177cbbddbd5c66191b9a2235bb4156de315cb03e0def72ba9dd59871148b. Jul 2 00:00:23.619622 containerd[2020]: time="2024-07-02T00:00:23.619504799Z" level=info msg="StartContainer for \"9e14177cbbddbd5c66191b9a2235bb4156de315cb03e0def72ba9dd59871148b\" returns successfully" Jul 2 00:00:23.674394 kubelet[3434]: I0702 00:00:23.673868 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f44gj" podStartSLOduration=4.673827624 podStartE2EDuration="4.673827624s" podCreationTimestamp="2024-07-02 00:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:20.662116425 +0000 UTC m=+14.443990153" watchObservedRunningTime="2024-07-02 00:00:23.673827624 +0000 UTC m=+17.455701316" Jul 2 00:00:23.674394 kubelet[3434]: I0702 00:00:23.674086 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-jv255" podStartSLOduration=1.775757289 podStartE2EDuration="4.674071668s" podCreationTimestamp="2024-07-02 00:00:19 +0000 UTC" firstStartedPulling="2024-07-02 00:00:20.550848524 +0000 UTC m=+14.332722216" lastFinishedPulling="2024-07-02 00:00:23.449162903 +0000 UTC m=+17.231036595" observedRunningTime="2024-07-02 00:00:23.673784784 +0000 UTC m=+17.455658500" watchObservedRunningTime="2024-07-02 00:00:23.674071668 +0000 UTC m=+17.455945396" 
Jul 2 00:00:28.729985 kubelet[3434]: I0702 00:00:28.729908 3434 topology_manager.go:215] "Topology Admit Handler" podUID="ee52cc17-d686-403c-a8ad-50bfb9eaf7ff" podNamespace="calico-system" podName="calico-typha-68595d949d-jj5lr" Jul 2 00:00:28.745528 kubelet[3434]: I0702 00:00:28.745324 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w74gq\" (UniqueName: \"kubernetes.io/projected/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-kube-api-access-w74gq\") pod \"calico-typha-68595d949d-jj5lr\" (UID: \"ee52cc17-d686-403c-a8ad-50bfb9eaf7ff\") " pod="calico-system/calico-typha-68595d949d-jj5lr" Jul 2 00:00:28.745528 kubelet[3434]: I0702 00:00:28.745399 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-tigera-ca-bundle\") pod \"calico-typha-68595d949d-jj5lr\" (UID: \"ee52cc17-d686-403c-a8ad-50bfb9eaf7ff\") " pod="calico-system/calico-typha-68595d949d-jj5lr" Jul 2 00:00:28.745528 kubelet[3434]: I0702 00:00:28.745443 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-typha-certs\") pod \"calico-typha-68595d949d-jj5lr\" (UID: \"ee52cc17-d686-403c-a8ad-50bfb9eaf7ff\") " pod="calico-system/calico-typha-68595d949d-jj5lr" Jul 2 00:00:28.752870 systemd[1]: Created slice kubepods-besteffort-podee52cc17_d686_403c_a8ad_50bfb9eaf7ff.slice - libcontainer container kubepods-besteffort-podee52cc17_d686_403c_a8ad_50bfb9eaf7ff.slice. 
Jul 2 00:00:28.964869 kubelet[3434]: I0702 00:00:28.964727 3434 topology_manager.go:215] "Topology Admit Handler" podUID="697cd1d3-3a80-4700-b5f2-c6db20390077" podNamespace="calico-system" podName="calico-node-4qtjq" Jul 2 00:00:28.987834 systemd[1]: Created slice kubepods-besteffort-pod697cd1d3_3a80_4700_b5f2_c6db20390077.slice - libcontainer container kubepods-besteffort-pod697cd1d3_3a80_4700_b5f2_c6db20390077.slice. Jul 2 00:00:29.047517 kubelet[3434]: I0702 00:00:29.047393 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/697cd1d3-3a80-4700-b5f2-c6db20390077-tigera-ca-bundle\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.047517 kubelet[3434]: I0702 00:00:29.047469 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-log-dir\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.047517 kubelet[3434]: I0702 00:00:29.047522 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-flexvol-driver-host\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.048040 kubelet[3434]: I0702 00:00:29.047566 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-bin-dir\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.048040 
kubelet[3434]: I0702 00:00:29.047623 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-policysync\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.048040 kubelet[3434]: I0702 00:00:29.047746 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-xtables-lock\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.048040 kubelet[3434]: I0702 00:00:29.047841 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-net-dir\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.048040 kubelet[3434]: I0702 00:00:29.047919 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/697cd1d3-3a80-4700-b5f2-c6db20390077-node-certs\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.048346 kubelet[3434]: I0702 00:00:29.047993 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-var-run-calico\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.048346 kubelet[3434]: I0702 00:00:29.048032 3434 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-var-lib-calico\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.048346 kubelet[3434]: I0702 00:00:29.048135 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfn7z\" (UniqueName: \"kubernetes.io/projected/697cd1d3-3a80-4700-b5f2-c6db20390077-kube-api-access-wfn7z\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.048346 kubelet[3434]: I0702 00:00:29.048180 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-lib-modules\") pod \"calico-node-4qtjq\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " pod="calico-system/calico-node-4qtjq" Jul 2 00:00:29.060130 containerd[2020]: time="2024-07-02T00:00:29.060025034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68595d949d-jj5lr,Uid:ee52cc17-d686-403c-a8ad-50bfb9eaf7ff,Namespace:calico-system,Attempt:0,}" Jul 2 00:00:29.120899 kubelet[3434]: I0702 00:00:29.119005 3434 topology_manager.go:215] "Topology Admit Handler" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" podNamespace="calico-system" podName="csi-node-driver-vdgpm" Jul 2 00:00:29.120899 kubelet[3434]: E0702 00:00:29.119697 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:29.149821 containerd[2020]: 
time="2024-07-02T00:00:29.148914087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:29.149821 containerd[2020]: time="2024-07-02T00:00:29.149026335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:29.149821 containerd[2020]: time="2024-07-02T00:00:29.149083791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:29.149821 containerd[2020]: time="2024-07-02T00:00:29.149118927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:29.151109 kubelet[3434]: I0702 00:00:29.149983 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b82ac8fb-8024-40b1-9a10-88793e57ca39-kubelet-dir\") pod \"csi-node-driver-vdgpm\" (UID: \"b82ac8fb-8024-40b1-9a10-88793e57ca39\") " pod="calico-system/csi-node-driver-vdgpm" Jul 2 00:00:29.151109 kubelet[3434]: I0702 00:00:29.150048 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b82ac8fb-8024-40b1-9a10-88793e57ca39-registration-dir\") pod \"csi-node-driver-vdgpm\" (UID: \"b82ac8fb-8024-40b1-9a10-88793e57ca39\") " pod="calico-system/csi-node-driver-vdgpm" Jul 2 00:00:29.151109 kubelet[3434]: I0702 00:00:29.150298 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b82ac8fb-8024-40b1-9a10-88793e57ca39-varrun\") pod \"csi-node-driver-vdgpm\" (UID: \"b82ac8fb-8024-40b1-9a10-88793e57ca39\") " pod="calico-system/csi-node-driver-vdgpm" Jul 2 00:00:29.151109 kubelet[3434]: I0702 
00:00:29.150341 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxpx4\" (UniqueName: \"kubernetes.io/projected/b82ac8fb-8024-40b1-9a10-88793e57ca39-kube-api-access-mxpx4\") pod \"csi-node-driver-vdgpm\" (UID: \"b82ac8fb-8024-40b1-9a10-88793e57ca39\") " pod="calico-system/csi-node-driver-vdgpm" Jul 2 00:00:29.151109 kubelet[3434]: I0702 00:00:29.150380 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b82ac8fb-8024-40b1-9a10-88793e57ca39-socket-dir\") pod \"csi-node-driver-vdgpm\" (UID: \"b82ac8fb-8024-40b1-9a10-88793e57ca39\") " pod="calico-system/csi-node-driver-vdgpm" Jul 2 00:00:29.161185 kubelet[3434]: E0702 00:00:29.159071 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.161185 kubelet[3434]: W0702 00:00:29.159240 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.161185 kubelet[3434]: E0702 00:00:29.159444 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.161185 kubelet[3434]: E0702 00:00:29.160725 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.161185 kubelet[3434]: W0702 00:00:29.160763 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.161185 kubelet[3434]: E0702 00:00:29.160800 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.166699 kubelet[3434]: E0702 00:00:29.164579 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.166699 kubelet[3434]: W0702 00:00:29.164621 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.166699 kubelet[3434]: E0702 00:00:29.164907 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.168848 kubelet[3434]: E0702 00:00:29.168803 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.171129 kubelet[3434]: W0702 00:00:29.169715 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.171688 kubelet[3434]: E0702 00:00:29.171412 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.177412 kubelet[3434]: E0702 00:00:29.174830 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.177412 kubelet[3434]: W0702 00:00:29.174869 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.177412 kubelet[3434]: E0702 00:00:29.174935 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.180081 kubelet[3434]: E0702 00:00:29.179038 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.181116 kubelet[3434]: W0702 00:00:29.180770 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.181116 kubelet[3434]: E0702 00:00:29.180931 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.184215 kubelet[3434]: E0702 00:00:29.184170 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.188319 kubelet[3434]: W0702 00:00:29.187446 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.188319 kubelet[3434]: E0702 00:00:29.187566 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.188319 kubelet[3434]: E0702 00:00:29.188030 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.188319 kubelet[3434]: W0702 00:00:29.188078 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.188319 kubelet[3434]: E0702 00:00:29.188178 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.189449 kubelet[3434]: E0702 00:00:29.189067 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.189449 kubelet[3434]: W0702 00:00:29.189106 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.190126 kubelet[3434]: E0702 00:00:29.190066 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.191884 kubelet[3434]: W0702 00:00:29.191373 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.192193 kubelet[3434]: E0702 00:00:29.191925 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.192193 kubelet[3434]: E0702 00:00:29.191970 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.193623 kubelet[3434]: E0702 00:00:29.193040 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.193623 kubelet[3434]: W0702 00:00:29.193078 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.195037 kubelet[3434]: E0702 00:00:29.194590 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.195407 kubelet[3434]: E0702 00:00:29.195367 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.197314 kubelet[3434]: W0702 00:00:29.195791 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.197314 kubelet[3434]: E0702 00:00:29.196749 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.198033 kubelet[3434]: E0702 00:00:29.197992 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.198253 kubelet[3434]: W0702 00:00:29.198211 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.198437 kubelet[3434]: E0702 00:00:29.198400 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.199988 kubelet[3434]: E0702 00:00:29.199929 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.199988 kubelet[3434]: W0702 00:00:29.199976 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.200253 kubelet[3434]: E0702 00:00:29.200027 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.203368 kubelet[3434]: E0702 00:00:29.203073 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.203368 kubelet[3434]: W0702 00:00:29.203118 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.203853 kubelet[3434]: E0702 00:00:29.203569 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.203853 kubelet[3434]: W0702 00:00:29.203633 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.204060 kubelet[3434]: E0702 00:00:29.203581 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.204060 kubelet[3434]: E0702 00:00:29.203965 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.205593 kubelet[3434]: E0702 00:00:29.205208 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.205593 kubelet[3434]: W0702 00:00:29.205251 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.206138 kubelet[3434]: E0702 00:00:29.205799 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.207725 kubelet[3434]: E0702 00:00:29.207064 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.207725 kubelet[3434]: W0702 00:00:29.207107 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.208176 kubelet[3434]: E0702 00:00:29.208070 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.208883 kubelet[3434]: E0702 00:00:29.208829 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.208883 kubelet[3434]: W0702 00:00:29.208872 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.210467 kubelet[3434]: E0702 00:00:29.208945 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.211837 kubelet[3434]: E0702 00:00:29.211271 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.211837 kubelet[3434]: W0702 00:00:29.211314 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.211837 kubelet[3434]: E0702 00:00:29.211351 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.217837 kubelet[3434]: E0702 00:00:29.217472 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.217837 kubelet[3434]: W0702 00:00:29.217828 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.218062 kubelet[3434]: E0702 00:00:29.217868 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.242720 kubelet[3434]: E0702 00:00:29.242051 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.242720 kubelet[3434]: W0702 00:00:29.242089 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.242720 kubelet[3434]: E0702 00:00:29.242124 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.246281 systemd[1]: Started cri-containerd-c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850.scope - libcontainer container c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850. 
Jul 2 00:00:29.252875 kubelet[3434]: E0702 00:00:29.252833 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.255718 kubelet[3434]: W0702 00:00:29.255178 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.255718 kubelet[3434]: E0702 00:00:29.255384 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.256523 kubelet[3434]: E0702 00:00:29.256381 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.256523 kubelet[3434]: W0702 00:00:29.256442 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.257068 kubelet[3434]: E0702 00:00:29.256478 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.257506 kubelet[3434]: E0702 00:00:29.257461 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.257632 kubelet[3434]: W0702 00:00:29.257526 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.257632 kubelet[3434]: E0702 00:00:29.257575 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.258206 kubelet[3434]: E0702 00:00:29.258165 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.258206 kubelet[3434]: W0702 00:00:29.258202 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.258826 kubelet[3434]: E0702 00:00:29.258418 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.258826 kubelet[3434]: E0702 00:00:29.258743 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.258826 kubelet[3434]: W0702 00:00:29.258769 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.258826 kubelet[3434]: E0702 00:00:29.258812 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.259472 kubelet[3434]: E0702 00:00:29.259404 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.259472 kubelet[3434]: W0702 00:00:29.259469 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.260016 kubelet[3434]: E0702 00:00:29.259579 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.260308 kubelet[3434]: E0702 00:00:29.260270 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.260415 kubelet[3434]: W0702 00:00:29.260309 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.260415 kubelet[3434]: E0702 00:00:29.260378 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.261363 kubelet[3434]: E0702 00:00:29.261282 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.261363 kubelet[3434]: W0702 00:00:29.261346 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.261728 kubelet[3434]: E0702 00:00:29.261607 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.262030 kubelet[3434]: E0702 00:00:29.261987 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.262030 kubelet[3434]: W0702 00:00:29.262022 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.262301 kubelet[3434]: E0702 00:00:29.262239 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.262758 kubelet[3434]: E0702 00:00:29.262712 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.262758 kubelet[3434]: W0702 00:00:29.262746 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.263125 kubelet[3434]: E0702 00:00:29.263004 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.267243 kubelet[3434]: E0702 00:00:29.267179 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.267243 kubelet[3434]: W0702 00:00:29.267222 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.267576 kubelet[3434]: E0702 00:00:29.267496 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.268916 kubelet[3434]: E0702 00:00:29.268862 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.268916 kubelet[3434]: W0702 00:00:29.268905 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.269385 kubelet[3434]: E0702 00:00:29.269074 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.271894 kubelet[3434]: E0702 00:00:29.271822 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.271894 kubelet[3434]: W0702 00:00:29.271867 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.272336 kubelet[3434]: E0702 00:00:29.272027 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.273889 kubelet[3434]: E0702 00:00:29.273829 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.273889 kubelet[3434]: W0702 00:00:29.273876 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.274267 kubelet[3434]: E0702 00:00:29.274047 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.275912 kubelet[3434]: E0702 00:00:29.275839 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.276061 kubelet[3434]: W0702 00:00:29.275903 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.277874 kubelet[3434]: E0702 00:00:29.277806 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.279866 kubelet[3434]: E0702 00:00:29.279804 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.279866 kubelet[3434]: W0702 00:00:29.279851 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.280330 kubelet[3434]: E0702 00:00:29.280019 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.282041 kubelet[3434]: E0702 00:00:29.281982 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.282041 kubelet[3434]: W0702 00:00:29.282026 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.282329 kubelet[3434]: E0702 00:00:29.282185 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.283769 kubelet[3434]: E0702 00:00:29.283714 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.283769 kubelet[3434]: W0702 00:00:29.283754 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.284129 kubelet[3434]: E0702 00:00:29.283904 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.285625 kubelet[3434]: E0702 00:00:29.285562 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.286134 kubelet[3434]: W0702 00:00:29.286080 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.286394 kubelet[3434]: E0702 00:00:29.286315 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.290382 kubelet[3434]: E0702 00:00:29.290032 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.290382 kubelet[3434]: W0702 00:00:29.290075 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.291951 kubelet[3434]: E0702 00:00:29.290707 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.292200 kubelet[3434]: E0702 00:00:29.291992 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.292200 kubelet[3434]: W0702 00:00:29.292022 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.292200 kubelet[3434]: E0702 00:00:29.292135 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.294897 kubelet[3434]: E0702 00:00:29.294842 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.294897 kubelet[3434]: W0702 00:00:29.294885 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.295340 kubelet[3434]: E0702 00:00:29.295053 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.297867 kubelet[3434]: E0702 00:00:29.297798 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.297867 kubelet[3434]: W0702 00:00:29.297857 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.299330 kubelet[3434]: E0702 00:00:29.298243 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.299330 kubelet[3434]: E0702 00:00:29.298439 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.299330 kubelet[3434]: W0702 00:00:29.298460 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.299330 kubelet[3434]: E0702 00:00:29.298611 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.300776 kubelet[3434]: E0702 00:00:29.300720 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.300776 kubelet[3434]: W0702 00:00:29.300764 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.301159 kubelet[3434]: E0702 00:00:29.300815 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.304597 kubelet[3434]: E0702 00:00:29.303941 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.304597 kubelet[3434]: W0702 00:00:29.303980 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.304597 kubelet[3434]: E0702 00:00:29.304016 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.314180 kubelet[3434]: E0702 00:00:29.314028 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.314180 kubelet[3434]: W0702 00:00:29.314070 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.314180 kubelet[3434]: E0702 00:00:29.314106 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.375074 kubelet[3434]: E0702 00:00:29.374998 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.375933 kubelet[3434]: W0702 00:00:29.375045 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.375933 kubelet[3434]: E0702 00:00:29.375461 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:00:29.384986 kubelet[3434]: E0702 00:00:29.384925 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:00:29.385159 kubelet[3434]: W0702 00:00:29.384990 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:00:29.385159 kubelet[3434]: E0702 00:00:29.385029 3434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:00:29.394315 containerd[2020]: time="2024-07-02T00:00:29.393977716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68595d949d-jj5lr,Uid:ee52cc17-d686-403c-a8ad-50bfb9eaf7ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\"" Jul 2 00:00:29.401040 containerd[2020]: time="2024-07-02T00:00:29.400830112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:00:29.596829 containerd[2020]: time="2024-07-02T00:00:29.596084441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4qtjq,Uid:697cd1d3-3a80-4700-b5f2-c6db20390077,Namespace:calico-system,Attempt:0,}" Jul 2 00:00:29.656759 containerd[2020]: time="2024-07-02T00:00:29.656473721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:29.656759 containerd[2020]: time="2024-07-02T00:00:29.656608361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:29.657010 containerd[2020]: time="2024-07-02T00:00:29.656755565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:29.657010 containerd[2020]: time="2024-07-02T00:00:29.656841137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:29.702893 systemd[1]: Started cri-containerd-ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea.scope - libcontainer container ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea. 
Jul 2 00:00:29.772883 containerd[2020]: time="2024-07-02T00:00:29.772785330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4qtjq,Uid:697cd1d3-3a80-4700-b5f2-c6db20390077,Namespace:calico-system,Attempt:0,} returns sandbox id \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\"" Jul 2 00:00:30.534713 kubelet[3434]: E0702 00:00:30.534454 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:32.399162 containerd[2020]: time="2024-07-02T00:00:32.399034735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:32.402873 containerd[2020]: time="2024-07-02T00:00:32.402449995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 00:00:32.403609 containerd[2020]: time="2024-07-02T00:00:32.403514947Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:32.413183 containerd[2020]: time="2024-07-02T00:00:32.413087251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:32.417234 containerd[2020]: time="2024-07-02T00:00:32.417020311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 3.016086627s" Jul 2 00:00:32.417234 containerd[2020]: time="2024-07-02T00:00:32.417092023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 00:00:32.420904 containerd[2020]: time="2024-07-02T00:00:32.420329311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:00:32.468855 containerd[2020]: time="2024-07-02T00:00:32.468390799Z" level=info msg="CreateContainer within sandbox \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:00:32.509640 containerd[2020]: time="2024-07-02T00:00:32.509572832Z" level=info msg="CreateContainer within sandbox \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\"" Jul 2 00:00:32.511071 containerd[2020]: time="2024-07-02T00:00:32.510835520Z" level=info msg="StartContainer for \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\"" Jul 2 00:00:32.529590 kubelet[3434]: E0702 00:00:32.529234 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:32.624445 systemd[1]: Started cri-containerd-f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea.scope - libcontainer container f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea. 
Jul 2 00:00:32.738886 containerd[2020]: time="2024-07-02T00:00:32.738376845Z" level=info msg="StartContainer for \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\" returns successfully" Jul 2 00:00:33.723829 containerd[2020]: time="2024-07-02T00:00:33.723708490Z" level=info msg="StopContainer for \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\" with timeout 300 (s)" Jul 2 00:00:33.727922 containerd[2020]: time="2024-07-02T00:00:33.726249874Z" level=info msg="Stop container \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\" with signal terminated" Jul 2 00:00:33.781887 systemd[1]: cri-containerd-f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea.scope: Deactivated successfully. Jul 2 00:00:33.814031 kubelet[3434]: I0702 00:00:33.812396 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68595d949d-jj5lr" podStartSLOduration=2.791574271 podStartE2EDuration="5.812372506s" podCreationTimestamp="2024-07-02 00:00:28 +0000 UTC" firstStartedPulling="2024-07-02 00:00:29.39861136 +0000 UTC m=+23.180485040" lastFinishedPulling="2024-07-02 00:00:32.419409583 +0000 UTC m=+26.201283275" observedRunningTime="2024-07-02 00:00:33.75749899 +0000 UTC m=+27.539372682" watchObservedRunningTime="2024-07-02 00:00:33.812372506 +0000 UTC m=+27.594246198" Jul 2 00:00:33.925549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea-rootfs.mount: Deactivated successfully. 
Jul 2 00:00:33.947002 containerd[2020]: time="2024-07-02T00:00:33.946095359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:33.951277 containerd[2020]: time="2024-07-02T00:00:33.951182459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 00:00:33.958154 containerd[2020]: time="2024-07-02T00:00:33.958038491Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:34.015792 containerd[2020]: time="2024-07-02T00:00:34.014244271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:34.016329 containerd[2020]: time="2024-07-02T00:00:34.016258159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.595859344s" Jul 2 00:00:34.016518 containerd[2020]: time="2024-07-02T00:00:34.016481239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 00:00:34.023562 containerd[2020]: time="2024-07-02T00:00:34.023488363Z" level=info msg="CreateContainer within sandbox \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 
00:00:34.434432 containerd[2020]: time="2024-07-02T00:00:34.434344137Z" level=info msg="CreateContainer within sandbox \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87\"" Jul 2 00:00:34.435587 containerd[2020]: time="2024-07-02T00:00:34.435488013Z" level=info msg="StartContainer for \"6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87\"" Jul 2 00:00:34.526617 systemd[1]: Started cri-containerd-6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87.scope - libcontainer container 6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87. Jul 2 00:00:34.530720 kubelet[3434]: E0702 00:00:34.528965 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:34.772915 containerd[2020]: time="2024-07-02T00:00:34.772599479Z" level=info msg="StartContainer for \"6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87\" returns successfully" Jul 2 00:00:34.806560 systemd[1]: cri-containerd-6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87.scope: Deactivated successfully. Jul 2 00:00:34.904750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87-rootfs.mount: Deactivated successfully. 
Jul 2 00:00:35.166419 containerd[2020]: time="2024-07-02T00:00:35.166088121Z" level=info msg="shim disconnected" id=f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea namespace=k8s.io Jul 2 00:00:35.166419 containerd[2020]: time="2024-07-02T00:00:35.166348881Z" level=warning msg="cleaning up after shim disconnected" id=f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea namespace=k8s.io Jul 2 00:00:35.166419 containerd[2020]: time="2024-07-02T00:00:35.166380837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:35.167200 containerd[2020]: time="2024-07-02T00:00:35.166212237Z" level=info msg="shim disconnected" id=6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87 namespace=k8s.io Jul 2 00:00:35.167860 containerd[2020]: time="2024-07-02T00:00:35.166963833Z" level=warning msg="cleaning up after shim disconnected" id=6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87 namespace=k8s.io Jul 2 00:00:35.168551 containerd[2020]: time="2024-07-02T00:00:35.168107481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:35.223303 containerd[2020]: time="2024-07-02T00:00:35.221910669Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:00:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:00:35.227548 containerd[2020]: time="2024-07-02T00:00:35.227093109Z" level=info msg="StopContainer for \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\" returns successfully" Jul 2 00:00:35.230706 containerd[2020]: time="2024-07-02T00:00:35.230335917Z" level=info msg="StopPodSandbox for \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\"" Jul 2 00:00:35.230706 containerd[2020]: time="2024-07-02T00:00:35.230412933Z" level=info msg="Container to stop \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:00:35.240297 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850-shm.mount: Deactivated successfully. Jul 2 00:00:35.264759 systemd[1]: cri-containerd-c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850.scope: Deactivated successfully. Jul 2 00:00:35.341917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850-rootfs.mount: Deactivated successfully. Jul 2 00:00:35.345732 containerd[2020]: time="2024-07-02T00:00:35.342643666Z" level=info msg="shim disconnected" id=c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850 namespace=k8s.io Jul 2 00:00:35.345732 containerd[2020]: time="2024-07-02T00:00:35.345170914Z" level=warning msg="cleaning up after shim disconnected" id=c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850 namespace=k8s.io Jul 2 00:00:35.345732 containerd[2020]: time="2024-07-02T00:00:35.345242218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:35.378692 containerd[2020]: time="2024-07-02T00:00:35.378577354Z" level=info msg="TearDown network for sandbox \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\" successfully" Jul 2 00:00:35.379328 containerd[2020]: time="2024-07-02T00:00:35.379208002Z" level=info msg="StopPodSandbox for \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\" returns successfully" Jul 2 00:00:35.434735 kubelet[3434]: I0702 00:00:35.433363 3434 topology_manager.go:215] "Topology Admit Handler" podUID="7d480317-246b-4d44-b1d1-da3100d96755" podNamespace="calico-system" podName="calico-typha-867cb8566-xxxq8" Jul 2 00:00:35.434735 kubelet[3434]: E0702 00:00:35.433466 3434 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee52cc17-d686-403c-a8ad-50bfb9eaf7ff" containerName="calico-typha" Jul 2 00:00:35.434735 
kubelet[3434]: I0702 00:00:35.433519 3434 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee52cc17-d686-403c-a8ad-50bfb9eaf7ff" containerName="calico-typha" Jul 2 00:00:35.457417 systemd[1]: Created slice kubepods-besteffort-pod7d480317_246b_4d44_b1d1_da3100d96755.slice - libcontainer container kubepods-besteffort-pod7d480317_246b_4d44_b1d1_da3100d96755.slice. Jul 2 00:00:35.542459 kubelet[3434]: I0702 00:00:35.541966 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-tigera-ca-bundle\") pod \"ee52cc17-d686-403c-a8ad-50bfb9eaf7ff\" (UID: \"ee52cc17-d686-403c-a8ad-50bfb9eaf7ff\") " Jul 2 00:00:35.542459 kubelet[3434]: I0702 00:00:35.542061 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w74gq\" (UniqueName: \"kubernetes.io/projected/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-kube-api-access-w74gq\") pod \"ee52cc17-d686-403c-a8ad-50bfb9eaf7ff\" (UID: \"ee52cc17-d686-403c-a8ad-50bfb9eaf7ff\") " Jul 2 00:00:35.542459 kubelet[3434]: I0702 00:00:35.542107 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-typha-certs\") pod \"ee52cc17-d686-403c-a8ad-50bfb9eaf7ff\" (UID: \"ee52cc17-d686-403c-a8ad-50bfb9eaf7ff\") " Jul 2 00:00:35.542459 kubelet[3434]: I0702 00:00:35.542218 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7d480317-246b-4d44-b1d1-da3100d96755-typha-certs\") pod \"calico-typha-867cb8566-xxxq8\" (UID: \"7d480317-246b-4d44-b1d1-da3100d96755\") " pod="calico-system/calico-typha-867cb8566-xxxq8" Jul 2 00:00:35.542459 kubelet[3434]: I0702 00:00:35.542269 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mxd6n\" (UniqueName: \"kubernetes.io/projected/7d480317-246b-4d44-b1d1-da3100d96755-kube-api-access-mxd6n\") pod \"calico-typha-867cb8566-xxxq8\" (UID: \"7d480317-246b-4d44-b1d1-da3100d96755\") " pod="calico-system/calico-typha-867cb8566-xxxq8" Jul 2 00:00:35.542904 kubelet[3434]: I0702 00:00:35.542313 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d480317-246b-4d44-b1d1-da3100d96755-tigera-ca-bundle\") pod \"calico-typha-867cb8566-xxxq8\" (UID: \"7d480317-246b-4d44-b1d1-da3100d96755\") " pod="calico-system/calico-typha-867cb8566-xxxq8" Jul 2 00:00:35.556802 systemd[1]: var-lib-kubelet-pods-ee52cc17\x2dd686\x2d403c\x2da8ad\x2d50bfb9eaf7ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw74gq.mount: Deactivated successfully. Jul 2 00:00:35.559992 kubelet[3434]: I0702 00:00:35.557841 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-kube-api-access-w74gq" (OuterVolumeSpecName: "kube-api-access-w74gq") pod "ee52cc17-d686-403c-a8ad-50bfb9eaf7ff" (UID: "ee52cc17-d686-403c-a8ad-50bfb9eaf7ff"). InnerVolumeSpecName "kube-api-access-w74gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:00:35.564225 kubelet[3434]: I0702 00:00:35.564108 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "ee52cc17-d686-403c-a8ad-50bfb9eaf7ff" (UID: "ee52cc17-d686-403c-a8ad-50bfb9eaf7ff"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:00:35.569296 kubelet[3434]: I0702 00:00:35.569215 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ee52cc17-d686-403c-a8ad-50bfb9eaf7ff" (UID: "ee52cc17-d686-403c-a8ad-50bfb9eaf7ff"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:00:35.571406 systemd[1]: var-lib-kubelet-pods-ee52cc17\x2dd686\x2d403c\x2da8ad\x2d50bfb9eaf7ff-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jul 2 00:00:35.573993 systemd[1]: var-lib-kubelet-pods-ee52cc17\x2dd686\x2d403c\x2da8ad\x2d50bfb9eaf7ff-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jul 2 00:00:35.643717 kubelet[3434]: I0702 00:00:35.643440 3434 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-tigera-ca-bundle\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:35.644561 kubelet[3434]: I0702 00:00:35.644493 3434 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w74gq\" (UniqueName: \"kubernetes.io/projected/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-kube-api-access-w74gq\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:35.644803 kubelet[3434]: I0702 00:00:35.644588 3434 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff-typha-certs\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:35.767877 containerd[2020]: time="2024-07-02T00:00:35.767113272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867cb8566-xxxq8,Uid:7d480317-246b-4d44-b1d1-da3100d96755,Namespace:calico-system,Attempt:0,}" Jul 2 00:00:35.787280 kubelet[3434]: 
I0702 00:00:35.787219 3434 scope.go:117] "RemoveContainer" containerID="f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea" Jul 2 00:00:35.796349 containerd[2020]: time="2024-07-02T00:00:35.795477600Z" level=info msg="RemoveContainer for \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\"" Jul 2 00:00:35.808971 containerd[2020]: time="2024-07-02T00:00:35.808442076Z" level=info msg="StopPodSandbox for \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\"" Jul 2 00:00:35.808971 containerd[2020]: time="2024-07-02T00:00:35.808538940Z" level=info msg="Container to stop \"6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:00:35.820393 containerd[2020]: time="2024-07-02T00:00:35.816980508Z" level=info msg="RemoveContainer for \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\" returns successfully" Jul 2 00:00:35.821393 kubelet[3434]: I0702 00:00:35.821307 3434 scope.go:117] "RemoveContainer" containerID="f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea" Jul 2 00:00:35.823420 systemd[1]: Removed slice kubepods-besteffort-podee52cc17_d686_403c_a8ad_50bfb9eaf7ff.slice - libcontainer container kubepods-besteffort-podee52cc17_d686_403c_a8ad_50bfb9eaf7ff.slice. 
Jul 2 00:00:35.832122 containerd[2020]: time="2024-07-02T00:00:35.828586932Z" level=error msg="ContainerStatus for \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\": not found" Jul 2 00:00:35.834507 kubelet[3434]: E0702 00:00:35.833055 3434 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\": not found" containerID="f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea" Jul 2 00:00:35.834975 kubelet[3434]: I0702 00:00:35.834849 3434 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea"} err="failed to get container status \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\": rpc error: code = NotFound desc = an error occurred when try to find container \"f883fd97cfd4160a3b8135ebd3d9018cb3035aec86eafba605da531647755bea\": not found" Jul 2 00:00:35.872530 systemd[1]: cri-containerd-ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea.scope: Deactivated successfully. Jul 2 00:00:35.900788 containerd[2020]: time="2024-07-02T00:00:35.899883024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:35.900788 containerd[2020]: time="2024-07-02T00:00:35.900036600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:35.900788 containerd[2020]: time="2024-07-02T00:00:35.900086412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:35.900788 containerd[2020]: time="2024-07-02T00:00:35.900122736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:35.986012 systemd[1]: Started cri-containerd-6b82eeba8cf4f6418b933a9cbeff0157cd13d3878d1bd57b2ebdd74221925458.scope - libcontainer container 6b82eeba8cf4f6418b933a9cbeff0157cd13d3878d1bd57b2ebdd74221925458. Jul 2 00:00:36.031031 containerd[2020]: time="2024-07-02T00:00:36.030578529Z" level=info msg="shim disconnected" id=ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea namespace=k8s.io Jul 2 00:00:36.031031 containerd[2020]: time="2024-07-02T00:00:36.030730953Z" level=warning msg="cleaning up after shim disconnected" id=ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea namespace=k8s.io Jul 2 00:00:36.031031 containerd[2020]: time="2024-07-02T00:00:36.030917205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:36.081632 containerd[2020]: time="2024-07-02T00:00:36.081538653Z" level=info msg="TearDown network for sandbox \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\" successfully" Jul 2 00:00:36.082644 containerd[2020]: time="2024-07-02T00:00:36.082364541Z" level=info msg="StopPodSandbox for \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\" returns successfully" Jul 2 00:00:36.166496 containerd[2020]: time="2024-07-02T00:00:36.166417258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867cb8566-xxxq8,Uid:7d480317-246b-4d44-b1d1-da3100d96755,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b82eeba8cf4f6418b933a9cbeff0157cd13d3878d1bd57b2ebdd74221925458\"" Jul 2 00:00:36.193560 containerd[2020]: time="2024-07-02T00:00:36.193487374Z" level=info msg="CreateContainer within sandbox \"6b82eeba8cf4f6418b933a9cbeff0157cd13d3878d1bd57b2ebdd74221925458\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:00:36.222120 containerd[2020]: time="2024-07-02T00:00:36.222004654Z" level=info msg="CreateContainer within sandbox \"6b82eeba8cf4f6418b933a9cbeff0157cd13d3878d1bd57b2ebdd74221925458\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b87f90535b7b2056d0f03013332b99d44ed7118c7dffa8ad077419a2c4cd455\"" Jul 2 00:00:36.225423 containerd[2020]: time="2024-07-02T00:00:36.224952286Z" level=info msg="StartContainer for \"2b87f90535b7b2056d0f03013332b99d44ed7118c7dffa8ad077419a2c4cd455\"" Jul 2 00:00:36.248996 kubelet[3434]: I0702 00:00:36.248940 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-log-dir\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.250163 kubelet[3434]: I0702 00:00:36.249032 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:00:36.250163 kubelet[3434]: I0702 00:00:36.249503 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-policysync" (OuterVolumeSpecName: "policysync") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:00:36.250163 kubelet[3434]: I0702 00:00:36.249322 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-policysync\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.250458 kubelet[3434]: I0702 00:00:36.250212 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:00:36.251880 kubelet[3434]: I0702 00:00:36.250126 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-xtables-lock\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.251880 kubelet[3434]: I0702 00:00:36.250784 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-net-dir\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.251880 kubelet[3434]: I0702 00:00:36.250896 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:00:36.251880 kubelet[3434]: I0702 00:00:36.251796 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-var-lib-calico\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.252277 kubelet[3434]: I0702 00:00:36.251939 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:00:36.254952 kubelet[3434]: I0702 00:00:36.252462 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-var-run-calico\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.254952 kubelet[3434]: I0702 00:00:36.252530 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:00:36.254952 kubelet[3434]: I0702 00:00:36.252564 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/697cd1d3-3a80-4700-b5f2-c6db20390077-node-certs\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.254952 kubelet[3434]: I0702 00:00:36.252638 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfn7z\" (UniqueName: \"kubernetes.io/projected/697cd1d3-3a80-4700-b5f2-c6db20390077-kube-api-access-wfn7z\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.254952 kubelet[3434]: I0702 00:00:36.252724 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/697cd1d3-3a80-4700-b5f2-c6db20390077-tigera-ca-bundle\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.254952 kubelet[3434]: I0702 00:00:36.252771 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-bin-dir\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.255484 kubelet[3434]: I0702 00:00:36.252805 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-lib-modules\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.255484 kubelet[3434]: I0702 00:00:36.252847 3434 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-flexvol-driver-host\") pod \"697cd1d3-3a80-4700-b5f2-c6db20390077\" (UID: \"697cd1d3-3a80-4700-b5f2-c6db20390077\") " Jul 2 00:00:36.255484 kubelet[3434]: I0702 00:00:36.252924 3434 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-log-dir\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.255484 kubelet[3434]: I0702 00:00:36.253948 3434 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-policysync\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.255484 kubelet[3434]: I0702 00:00:36.253996 3434 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-xtables-lock\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.255484 kubelet[3434]: I0702 00:00:36.254017 3434 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-net-dir\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.255484 kubelet[3434]: I0702 00:00:36.254038 3434 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-var-lib-calico\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.257124 kubelet[3434]: I0702 00:00:36.254135 3434 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-var-run-calico\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.257124 kubelet[3434]: I0702 00:00:36.254198 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:00:36.257124 kubelet[3434]: I0702 00:00:36.255785 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:00:36.258805 kubelet[3434]: I0702 00:00:36.256625 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697cd1d3-3a80-4700-b5f2-c6db20390077-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:00:36.259160 kubelet[3434]: I0702 00:00:36.259100 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:00:36.273939 kubelet[3434]: I0702 00:00:36.273858 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697cd1d3-3a80-4700-b5f2-c6db20390077-kube-api-access-wfn7z" (OuterVolumeSpecName: "kube-api-access-wfn7z") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). 
InnerVolumeSpecName "kube-api-access-wfn7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:00:36.278731 kubelet[3434]: I0702 00:00:36.278636 3434 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697cd1d3-3a80-4700-b5f2-c6db20390077-node-certs" (OuterVolumeSpecName: "node-certs") pod "697cd1d3-3a80-4700-b5f2-c6db20390077" (UID: "697cd1d3-3a80-4700-b5f2-c6db20390077"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:00:36.309345 systemd[1]: Started cri-containerd-2b87f90535b7b2056d0f03013332b99d44ed7118c7dffa8ad077419a2c4cd455.scope - libcontainer container 2b87f90535b7b2056d0f03013332b99d44ed7118c7dffa8ad077419a2c4cd455. Jul 2 00:00:36.355418 kubelet[3434]: I0702 00:00:36.354767 3434 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-cni-bin-dir\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.356424 kubelet[3434]: I0702 00:00:36.356202 3434 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-lib-modules\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.356424 kubelet[3434]: I0702 00:00:36.356253 3434 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/697cd1d3-3a80-4700-b5f2-c6db20390077-flexvol-driver-host\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.356424 kubelet[3434]: I0702 00:00:36.356294 3434 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/697cd1d3-3a80-4700-b5f2-c6db20390077-node-certs\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.356424 kubelet[3434]: I0702 00:00:36.356315 3434 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wfn7z\" (UniqueName: 
\"kubernetes.io/projected/697cd1d3-3a80-4700-b5f2-c6db20390077-kube-api-access-wfn7z\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.356424 kubelet[3434]: I0702 00:00:36.356336 3434 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/697cd1d3-3a80-4700-b5f2-c6db20390077-tigera-ca-bundle\") on node \"ip-172-31-26-136\" DevicePath \"\"" Jul 2 00:00:36.457648 containerd[2020]: time="2024-07-02T00:00:36.457560827Z" level=info msg="StartContainer for \"2b87f90535b7b2056d0f03013332b99d44ed7118c7dffa8ad077419a2c4cd455\" returns successfully" Jul 2 00:00:36.528705 kubelet[3434]: E0702 00:00:36.527808 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:36.538916 kubelet[3434]: I0702 00:00:36.538864 3434 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee52cc17-d686-403c-a8ad-50bfb9eaf7ff" path="/var/lib/kubelet/pods/ee52cc17-d686-403c-a8ad-50bfb9eaf7ff/volumes" Jul 2 00:00:36.573277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea-rootfs.mount: Deactivated successfully. Jul 2 00:00:36.573480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea-shm.mount: Deactivated successfully. Jul 2 00:00:36.573619 systemd[1]: var-lib-kubelet-pods-697cd1d3\x2d3a80\x2d4700\x2db5f2\x2dc6db20390077-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwfn7z.mount: Deactivated successfully. 
Jul 2 00:00:36.573845 systemd[1]: var-lib-kubelet-pods-697cd1d3\x2d3a80\x2d4700\x2db5f2\x2dc6db20390077-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jul 2 00:00:36.600252 systemd[1]: Removed slice kubepods-besteffort-pod697cd1d3_3a80_4700_b5f2_c6db20390077.slice - libcontainer container kubepods-besteffort-pod697cd1d3_3a80_4700_b5f2_c6db20390077.slice. Jul 2 00:00:36.823982 kubelet[3434]: I0702 00:00:36.823902 3434 scope.go:117] "RemoveContainer" containerID="6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87" Jul 2 00:00:36.839689 containerd[2020]: time="2024-07-02T00:00:36.837563509Z" level=info msg="RemoveContainer for \"6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87\"" Jul 2 00:00:36.849739 containerd[2020]: time="2024-07-02T00:00:36.848028733Z" level=info msg="RemoveContainer for \"6f12b28dbd827bfdaa8f3dfb2e50e6d2cbacd29d6c45e7e5cde81110d379fd87\" returns successfully" Jul 2 00:00:36.998916 kubelet[3434]: I0702 00:00:36.998830 3434 topology_manager.go:215] "Topology Admit Handler" podUID="b59eb856-52d0-4479-91eb-da20bd2cbaf3" podNamespace="calico-system" podName="calico-node-kz55f" Jul 2 00:00:36.999774 kubelet[3434]: E0702 00:00:36.999413 3434 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="697cd1d3-3a80-4700-b5f2-c6db20390077" containerName="flexvol-driver" Jul 2 00:00:36.999774 kubelet[3434]: I0702 00:00:36.999684 3434 memory_manager.go:354] "RemoveStaleState removing state" podUID="697cd1d3-3a80-4700-b5f2-c6db20390077" containerName="flexvol-driver" Jul 2 00:00:37.024779 systemd[1]: Created slice kubepods-besteffort-podb59eb856_52d0_4479_91eb_da20bd2cbaf3.slice - libcontainer container kubepods-besteffort-podb59eb856_52d0_4479_91eb_da20bd2cbaf3.slice. 
Jul 2 00:00:37.050511 kubelet[3434]: W0702 00:00:37.050110 3434 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-26-136" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-136' and this object Jul 2 00:00:37.050511 kubelet[3434]: E0702 00:00:37.050182 3434 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-26-136" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-136' and this object Jul 2 00:00:37.050946 kubelet[3434]: W0702 00:00:37.050578 3434 reflector.go:547] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-26-136" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-136' and this object Jul 2 00:00:37.050946 kubelet[3434]: E0702 00:00:37.050636 3434 reflector.go:150] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-26-136" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-136' and this object Jul 2 00:00:37.064699 kubelet[3434]: I0702 00:00:37.063174 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b59eb856-52d0-4479-91eb-da20bd2cbaf3-cni-bin-dir\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.064699 kubelet[3434]: I0702 00:00:37.063252 3434 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b59eb856-52d0-4479-91eb-da20bd2cbaf3-flexvol-driver-host\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.064699 kubelet[3434]: I0702 00:00:37.063298 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b59eb856-52d0-4479-91eb-da20bd2cbaf3-tigera-ca-bundle\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.064699 kubelet[3434]: I0702 00:00:37.063340 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b59eb856-52d0-4479-91eb-da20bd2cbaf3-cni-net-dir\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.064699 kubelet[3434]: I0702 00:00:37.063380 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b59eb856-52d0-4479-91eb-da20bd2cbaf3-policysync\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.065093 kubelet[3434]: I0702 00:00:37.063442 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b59eb856-52d0-4479-91eb-da20bd2cbaf3-cni-log-dir\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.065093 kubelet[3434]: I0702 00:00:37.063481 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b59eb856-52d0-4479-91eb-da20bd2cbaf3-var-lib-calico\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.065093 kubelet[3434]: I0702 00:00:37.063520 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b59eb856-52d0-4479-91eb-da20bd2cbaf3-xtables-lock\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.065093 kubelet[3434]: I0702 00:00:37.063568 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b59eb856-52d0-4479-91eb-da20bd2cbaf3-node-certs\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.065093 kubelet[3434]: I0702 00:00:37.063613 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b59eb856-52d0-4479-91eb-da20bd2cbaf3-var-run-calico\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.065350 kubelet[3434]: I0702 00:00:37.063700 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b59eb856-52d0-4479-91eb-da20bd2cbaf3-lib-modules\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.065350 kubelet[3434]: I0702 00:00:37.063813 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp2ws\" (UniqueName: 
\"kubernetes.io/projected/b59eb856-52d0-4479-91eb-da20bd2cbaf3-kube-api-access-lp2ws\") pod \"calico-node-kz55f\" (UID: \"b59eb856-52d0-4479-91eb-da20bd2cbaf3\") " pod="calico-system/calico-node-kz55f" Jul 2 00:00:37.133805 kubelet[3434]: I0702 00:00:37.131947 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-867cb8566-xxxq8" podStartSLOduration=6.131920583 podStartE2EDuration="6.131920583s" podCreationTimestamp="2024-07-02 00:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:00:37.079801702 +0000 UTC m=+30.861675418" watchObservedRunningTime="2024-07-02 00:00:37.131920583 +0000 UTC m=+30.913794275" Jul 2 00:00:38.165617 kubelet[3434]: E0702 00:00:38.165448 3434 secret.go:194] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 00:00:38.165617 kubelet[3434]: E0702 00:00:38.165609 3434 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b59eb856-52d0-4479-91eb-da20bd2cbaf3-node-certs podName:b59eb856-52d0-4479-91eb-da20bd2cbaf3 nodeName:}" failed. No retries permitted until 2024-07-02 00:00:38.6655721 +0000 UTC m=+32.447445792 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/b59eb856-52d0-4479-91eb-da20bd2cbaf3-node-certs") pod "calico-node-kz55f" (UID: "b59eb856-52d0-4479-91eb-da20bd2cbaf3") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:00:38.527683 kubelet[3434]: E0702 00:00:38.526916 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:38.532544 kubelet[3434]: I0702 00:00:38.532486 3434 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697cd1d3-3a80-4700-b5f2-c6db20390077" path="/var/lib/kubelet/pods/697cd1d3-3a80-4700-b5f2-c6db20390077/volumes" Jul 2 00:00:38.837458 containerd[2020]: time="2024-07-02T00:00:38.837277419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kz55f,Uid:b59eb856-52d0-4479-91eb-da20bd2cbaf3,Namespace:calico-system,Attempt:0,}" Jul 2 00:00:38.882627 containerd[2020]: time="2024-07-02T00:00:38.882407439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:00:38.882833 containerd[2020]: time="2024-07-02T00:00:38.882623163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:38.882833 containerd[2020]: time="2024-07-02T00:00:38.882759351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:00:38.883082 containerd[2020]: time="2024-07-02T00:00:38.882838551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:00:38.928052 systemd[1]: Started cri-containerd-177e8837a7dc9a0fb261a46b3dea8e21d7ac193c16c6878354b4fbb449081ab5.scope - libcontainer container 177e8837a7dc9a0fb261a46b3dea8e21d7ac193c16c6878354b4fbb449081ab5. Jul 2 00:00:38.983798 containerd[2020]: time="2024-07-02T00:00:38.983709160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kz55f,Uid:b59eb856-52d0-4479-91eb-da20bd2cbaf3,Namespace:calico-system,Attempt:0,} returns sandbox id \"177e8837a7dc9a0fb261a46b3dea8e21d7ac193c16c6878354b4fbb449081ab5\"" Jul 2 00:00:38.990635 containerd[2020]: time="2024-07-02T00:00:38.989739136Z" level=info msg="CreateContainer within sandbox \"177e8837a7dc9a0fb261a46b3dea8e21d7ac193c16c6878354b4fbb449081ab5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:00:39.018378 containerd[2020]: time="2024-07-02T00:00:39.018265152Z" level=info msg="CreateContainer within sandbox \"177e8837a7dc9a0fb261a46b3dea8e21d7ac193c16c6878354b4fbb449081ab5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"76f96eecb2abda5d0c50edd4f6419a9fcb3c23b89df8e80a72a717596f15e88e\"" Jul 2 00:00:39.019848 containerd[2020]: time="2024-07-02T00:00:39.019748076Z" level=info msg="StartContainer for \"76f96eecb2abda5d0c50edd4f6419a9fcb3c23b89df8e80a72a717596f15e88e\"" Jul 2 00:00:39.077119 systemd[1]: Started cri-containerd-76f96eecb2abda5d0c50edd4f6419a9fcb3c23b89df8e80a72a717596f15e88e.scope - libcontainer container 76f96eecb2abda5d0c50edd4f6419a9fcb3c23b89df8e80a72a717596f15e88e. Jul 2 00:00:39.143744 containerd[2020]: time="2024-07-02T00:00:39.142491277Z" level=info msg="StartContainer for \"76f96eecb2abda5d0c50edd4f6419a9fcb3c23b89df8e80a72a717596f15e88e\" returns successfully" Jul 2 00:00:39.184036 systemd[1]: cri-containerd-76f96eecb2abda5d0c50edd4f6419a9fcb3c23b89df8e80a72a717596f15e88e.scope: Deactivated successfully. 
Jul 2 00:00:39.243595 containerd[2020]: time="2024-07-02T00:00:39.243503713Z" level=info msg="shim disconnected" id=76f96eecb2abda5d0c50edd4f6419a9fcb3c23b89df8e80a72a717596f15e88e namespace=k8s.io Jul 2 00:00:39.243595 containerd[2020]: time="2024-07-02T00:00:39.243581281Z" level=warning msg="cleaning up after shim disconnected" id=76f96eecb2abda5d0c50edd4f6419a9fcb3c23b89df8e80a72a717596f15e88e namespace=k8s.io Jul 2 00:00:39.243942 containerd[2020]: time="2024-07-02T00:00:39.243605797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:39.876730 containerd[2020]: time="2024-07-02T00:00:39.876635680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:00:40.528041 kubelet[3434]: E0702 00:00:40.527159 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:42.528724 kubelet[3434]: E0702 00:00:42.528585 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:44.026362 containerd[2020]: time="2024-07-02T00:00:44.026265113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:44.028447 containerd[2020]: time="2024-07-02T00:00:44.028369805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 00:00:44.030106 containerd[2020]: time="2024-07-02T00:00:44.029969309Z" level=info msg="ImageCreate event 
name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:44.035421 containerd[2020]: time="2024-07-02T00:00:44.035215097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:44.039319 containerd[2020]: time="2024-07-02T00:00:44.038163773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 4.161418377s" Jul 2 00:00:44.039319 containerd[2020]: time="2024-07-02T00:00:44.038274329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 00:00:44.046367 containerd[2020]: time="2024-07-02T00:00:44.046252013Z" level=info msg="CreateContainer within sandbox \"177e8837a7dc9a0fb261a46b3dea8e21d7ac193c16c6878354b4fbb449081ab5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:00:44.073612 containerd[2020]: time="2024-07-02T00:00:44.073509449Z" level=info msg="CreateContainer within sandbox \"177e8837a7dc9a0fb261a46b3dea8e21d7ac193c16c6878354b4fbb449081ab5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bd15eb966af581827ceb28903216c907c4060dd2de31ec62d0ccd2e00acc1570\"" Jul 2 00:00:44.076085 containerd[2020]: time="2024-07-02T00:00:44.074426189Z" level=info msg="StartContainer for \"bd15eb966af581827ceb28903216c907c4060dd2de31ec62d0ccd2e00acc1570\"" Jul 2 00:00:44.150347 systemd[1]: Started 
cri-containerd-bd15eb966af581827ceb28903216c907c4060dd2de31ec62d0ccd2e00acc1570.scope - libcontainer container bd15eb966af581827ceb28903216c907c4060dd2de31ec62d0ccd2e00acc1570. Jul 2 00:00:44.214975 containerd[2020]: time="2024-07-02T00:00:44.214520982Z" level=info msg="StartContainer for \"bd15eb966af581827ceb28903216c907c4060dd2de31ec62d0ccd2e00acc1570\" returns successfully" Jul 2 00:00:44.528337 kubelet[3434]: E0702 00:00:44.528189 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:45.314241 systemd[1]: Started sshd@7-172.31.26.136:22-147.75.109.163:39678.service - OpenSSH per-connection server daemon (147.75.109.163:39678). Jul 2 00:00:45.506287 sshd[4435]: Accepted publickey for core from 147.75.109.163 port 39678 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:45.509401 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:45.519262 systemd-logind[1992]: New session 8 of user core. Jul 2 00:00:45.526982 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:00:45.816028 containerd[2020]: time="2024-07-02T00:00:45.815925790Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:00:45.822279 systemd[1]: cri-containerd-bd15eb966af581827ceb28903216c907c4060dd2de31ec62d0ccd2e00acc1570.scope: Deactivated successfully. 
Jul 2 00:00:45.905002 sshd[4435]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:45.922554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd15eb966af581827ceb28903216c907c4060dd2de31ec62d0ccd2e00acc1570-rootfs.mount: Deactivated successfully. Jul 2 00:00:45.924645 systemd[1]: sshd@7-172.31.26.136:22-147.75.109.163:39678.service: Deactivated successfully. Jul 2 00:00:45.932268 kubelet[3434]: I0702 00:00:45.930462 3434 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:00:45.936218 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:00:45.946477 systemd-logind[1992]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:00:45.951178 systemd-logind[1992]: Removed session 8. Jul 2 00:00:45.990769 kubelet[3434]: I0702 00:00:45.990636 3434 topology_manager.go:215] "Topology Admit Handler" podUID="2359edd6-6282-47da-88b1-1e71aa5d1c63" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b48tn" Jul 2 00:00:45.995870 kubelet[3434]: I0702 00:00:45.995804 3434 topology_manager.go:215] "Topology Admit Handler" podUID="db41f039-97b5-4424-9900-efa54584157e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2lfdt" Jul 2 00:00:46.016602 systemd[1]: Created slice kubepods-burstable-pod2359edd6_6282_47da_88b1_1e71aa5d1c63.slice - libcontainer container kubepods-burstable-pod2359edd6_6282_47da_88b1_1e71aa5d1c63.slice. 
Jul 2 00:00:46.030951 kubelet[3434]: W0702 00:00:46.030475 3434 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-26-136" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-136' and this object Jul 2 00:00:46.030951 kubelet[3434]: E0702 00:00:46.030539 3434 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-26-136" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-136' and this object Jul 2 00:00:46.043219 systemd[1]: Created slice kubepods-burstable-poddb41f039_97b5_4424_9900_efa54584157e.slice - libcontainer container kubepods-burstable-poddb41f039_97b5_4424_9900_efa54584157e.slice. Jul 2 00:00:46.075104 kubelet[3434]: I0702 00:00:46.074708 3434 topology_manager.go:215] "Topology Admit Handler" podUID="d4985aed-3f25-4daa-b2aa-005e064a14f0" podNamespace="calico-system" podName="calico-kube-controllers-6b5597fb45-9mjn4" Jul 2 00:00:46.098129 systemd[1]: Created slice kubepods-besteffort-podd4985aed_3f25_4daa_b2aa_005e064a14f0.slice - libcontainer container kubepods-besteffort-podd4985aed_3f25_4daa_b2aa_005e064a14f0.slice. 
Jul 2 00:00:46.141557 kubelet[3434]: I0702 00:00:46.141293 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4985aed-3f25-4daa-b2aa-005e064a14f0-tigera-ca-bundle\") pod \"calico-kube-controllers-6b5597fb45-9mjn4\" (UID: \"d4985aed-3f25-4daa-b2aa-005e064a14f0\") " pod="calico-system/calico-kube-controllers-6b5597fb45-9mjn4" Jul 2 00:00:46.143916 kubelet[3434]: I0702 00:00:46.143862 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b78v\" (UniqueName: \"kubernetes.io/projected/2359edd6-6282-47da-88b1-1e71aa5d1c63-kube-api-access-2b78v\") pod \"coredns-7db6d8ff4d-b48tn\" (UID: \"2359edd6-6282-47da-88b1-1e71aa5d1c63\") " pod="kube-system/coredns-7db6d8ff4d-b48tn" Jul 2 00:00:46.144427 kubelet[3434]: I0702 00:00:46.144339 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db41f039-97b5-4424-9900-efa54584157e-config-volume\") pod \"coredns-7db6d8ff4d-2lfdt\" (UID: \"db41f039-97b5-4424-9900-efa54584157e\") " pod="kube-system/coredns-7db6d8ff4d-2lfdt" Jul 2 00:00:46.144751 kubelet[3434]: I0702 00:00:46.144693 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hznms\" (UniqueName: \"kubernetes.io/projected/d4985aed-3f25-4daa-b2aa-005e064a14f0-kube-api-access-hznms\") pod \"calico-kube-controllers-6b5597fb45-9mjn4\" (UID: \"d4985aed-3f25-4daa-b2aa-005e064a14f0\") " pod="calico-system/calico-kube-controllers-6b5597fb45-9mjn4" Jul 2 00:00:46.145071 kubelet[3434]: I0702 00:00:46.144928 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6rlm\" (UniqueName: \"kubernetes.io/projected/db41f039-97b5-4424-9900-efa54584157e-kube-api-access-m6rlm\") pod 
\"coredns-7db6d8ff4d-2lfdt\" (UID: \"db41f039-97b5-4424-9900-efa54584157e\") " pod="kube-system/coredns-7db6d8ff4d-2lfdt" Jul 2 00:00:46.145071 kubelet[3434]: I0702 00:00:46.144987 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2359edd6-6282-47da-88b1-1e71aa5d1c63-config-volume\") pod \"coredns-7db6d8ff4d-b48tn\" (UID: \"2359edd6-6282-47da-88b1-1e71aa5d1c63\") " pod="kube-system/coredns-7db6d8ff4d-b48tn" Jul 2 00:00:46.414362 containerd[2020]: time="2024-07-02T00:00:46.413648001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5597fb45-9mjn4,Uid:d4985aed-3f25-4daa-b2aa-005e064a14f0,Namespace:calico-system,Attempt:0,}" Jul 2 00:00:46.548475 systemd[1]: Created slice kubepods-besteffort-podb82ac8fb_8024_40b1_9a10_88793e57ca39.slice - libcontainer container kubepods-besteffort-podb82ac8fb_8024_40b1_9a10_88793e57ca39.slice. Jul 2 00:00:46.554428 containerd[2020]: time="2024-07-02T00:00:46.554358141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdgpm,Uid:b82ac8fb-8024-40b1-9a10-88793e57ca39,Namespace:calico-system,Attempt:0,}" Jul 2 00:00:46.923175 containerd[2020]: time="2024-07-02T00:00:46.920696771Z" level=info msg="shim disconnected" id=bd15eb966af581827ceb28903216c907c4060dd2de31ec62d0ccd2e00acc1570 namespace=k8s.io Jul 2 00:00:46.923175 containerd[2020]: time="2024-07-02T00:00:46.920787503Z" level=warning msg="cleaning up after shim disconnected" id=bd15eb966af581827ceb28903216c907c4060dd2de31ec62d0ccd2e00acc1570 namespace=k8s.io Jul 2 00:00:46.923175 containerd[2020]: time="2024-07-02T00:00:46.920811371Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:00:47.087800 containerd[2020]: time="2024-07-02T00:00:47.087543500Z" level=error msg="Failed to destroy network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:47.092786 containerd[2020]: time="2024-07-02T00:00:47.092613992Z" level=error msg="encountered an error cleaning up failed sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:47.093600 containerd[2020]: time="2024-07-02T00:00:47.092825000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdgpm,Uid:b82ac8fb-8024-40b1-9a10-88793e57ca39,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:47.095806 kubelet[3434]: E0702 00:00:47.093136 3434 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:47.095806 kubelet[3434]: E0702 00:00:47.093228 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vdgpm" Jul 2 00:00:47.095806 kubelet[3434]: E0702 00:00:47.093263 3434 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vdgpm" Jul 2 00:00:47.095164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316-shm.mount: Deactivated successfully. Jul 2 00:00:47.096826 kubelet[3434]: E0702 00:00:47.093332 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vdgpm_calico-system(b82ac8fb-8024-40b1-9a10-88793e57ca39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vdgpm_calico-system(b82ac8fb-8024-40b1-9a10-88793e57ca39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:47.102768 containerd[2020]: time="2024-07-02T00:00:47.102677480Z" level=error msg="Failed to destroy network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:47.104344 
containerd[2020]: time="2024-07-02T00:00:47.104263328Z" level=error msg="encountered an error cleaning up failed sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:47.104458 containerd[2020]: time="2024-07-02T00:00:47.104399456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5597fb45-9mjn4,Uid:d4985aed-3f25-4daa-b2aa-005e064a14f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:47.106064 kubelet[3434]: E0702 00:00:47.105880 3434 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:47.106064 kubelet[3434]: E0702 00:00:47.105966 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b5597fb45-9mjn4" Jul 2 00:00:47.106064 kubelet[3434]: E0702 00:00:47.106004 
3434 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b5597fb45-9mjn4" Jul 2 00:00:47.107958 kubelet[3434]: E0702 00:00:47.107727 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b5597fb45-9mjn4_calico-system(d4985aed-3f25-4daa-b2aa-005e064a14f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b5597fb45-9mjn4_calico-system(d4985aed-3f25-4daa-b2aa-005e064a14f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b5597fb45-9mjn4" podUID="d4985aed-3f25-4daa-b2aa-005e064a14f0" Jul 2 00:00:47.109916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848-shm.mount: Deactivated successfully. Jul 2 00:00:47.247956 kubelet[3434]: E0702 00:00:47.246471 3434 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:00:47.247956 kubelet[3434]: E0702 00:00:47.246607 3434 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2359edd6-6282-47da-88b1-1e71aa5d1c63-config-volume podName:2359edd6-6282-47da-88b1-1e71aa5d1c63 nodeName:}" failed. 
No retries permitted until 2024-07-02 00:00:47.746579729 +0000 UTC m=+41.528453421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2359edd6-6282-47da-88b1-1e71aa5d1c63-config-volume") pod "coredns-7db6d8ff4d-b48tn" (UID: "2359edd6-6282-47da-88b1-1e71aa5d1c63") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:00:47.247956 kubelet[3434]: E0702 00:00:47.246769 3434 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:00:47.247956 kubelet[3434]: E0702 00:00:47.246846 3434 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db41f039-97b5-4424-9900-efa54584157e-config-volume podName:db41f039-97b5-4424-9900-efa54584157e nodeName:}" failed. No retries permitted until 2024-07-02 00:00:47.746821421 +0000 UTC m=+41.528695101 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/db41f039-97b5-4424-9900-efa54584157e-config-volume") pod "coredns-7db6d8ff4d-2lfdt" (UID: "db41f039-97b5-4424-9900-efa54584157e") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:00:47.833550 containerd[2020]: time="2024-07-02T00:00:47.833373408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b48tn,Uid:2359edd6-6282-47da-88b1-1e71aa5d1c63,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:47.850848 containerd[2020]: time="2024-07-02T00:00:47.850327596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lfdt,Uid:db41f039-97b5-4424-9900-efa54584157e,Namespace:kube-system,Attempt:0,}" Jul 2 00:00:47.931871 kubelet[3434]: I0702 00:00:47.931240 3434 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:00:47.935346 
containerd[2020]: time="2024-07-02T00:00:47.935086560Z" level=info msg="StopPodSandbox for \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\"" Jul 2 00:00:47.937461 containerd[2020]: time="2024-07-02T00:00:47.936933876Z" level=info msg="Ensure that sandbox c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316 in task-service has been cleanup successfully" Jul 2 00:00:47.939282 kubelet[3434]: I0702 00:00:47.939197 3434 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:00:47.943469 containerd[2020]: time="2024-07-02T00:00:47.941168328Z" level=info msg="StopPodSandbox for \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\"" Jul 2 00:00:47.948456 containerd[2020]: time="2024-07-02T00:00:47.948348156Z" level=info msg="Ensure that sandbox 6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848 in task-service has been cleanup successfully" Jul 2 00:00:47.960900 containerd[2020]: time="2024-07-02T00:00:47.960138792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:00:48.105304 containerd[2020]: time="2024-07-02T00:00:48.104873757Z" level=error msg="StopPodSandbox for \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\" failed" error="failed to destroy network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:48.105601 kubelet[3434]: E0702 00:00:48.105225 3434 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:00:48.105601 kubelet[3434]: E0702 00:00:48.105313 3434 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848"} Jul 2 00:00:48.105601 kubelet[3434]: E0702 00:00:48.105373 3434 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4985aed-3f25-4daa-b2aa-005e064a14f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:00:48.105601 kubelet[3434]: E0702 00:00:48.105414 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4985aed-3f25-4daa-b2aa-005e064a14f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b5597fb45-9mjn4" podUID="d4985aed-3f25-4daa-b2aa-005e064a14f0" Jul 2 00:00:48.119376 containerd[2020]: time="2024-07-02T00:00:48.118299333Z" level=error msg="Failed to destroy network for sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 2 00:00:48.125126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700-shm.mount: Deactivated successfully. Jul 2 00:00:48.128646 containerd[2020]: time="2024-07-02T00:00:48.128449665Z" level=error msg="encountered an error cleaning up failed sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:48.129398 containerd[2020]: time="2024-07-02T00:00:48.128867073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b48tn,Uid:2359edd6-6282-47da-88b1-1e71aa5d1c63,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:48.129925 kubelet[3434]: E0702 00:00:48.129852 3434 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:48.130107 kubelet[3434]: E0702 00:00:48.129947 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b48tn" Jul 2 00:00:48.130107 kubelet[3434]: E0702 00:00:48.129987 3434 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b48tn" Jul 2 00:00:48.133005 kubelet[3434]: E0702 00:00:48.132539 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b48tn_kube-system(2359edd6-6282-47da-88b1-1e71aa5d1c63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b48tn_kube-system(2359edd6-6282-47da-88b1-1e71aa5d1c63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b48tn" podUID="2359edd6-6282-47da-88b1-1e71aa5d1c63" Jul 2 00:00:48.140840 containerd[2020]: time="2024-07-02T00:00:48.140506833Z" level=error msg="StopPodSandbox for \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\" failed" error="failed to destroy network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:48.141101 kubelet[3434]: E0702 00:00:48.140990 3434 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:00:48.141101 kubelet[3434]: E0702 00:00:48.141082 3434 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316"} Jul 2 00:00:48.141241 kubelet[3434]: E0702 00:00:48.141143 3434 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b82ac8fb-8024-40b1-9a10-88793e57ca39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:00:48.141241 kubelet[3434]: E0702 00:00:48.141184 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b82ac8fb-8024-40b1-9a10-88793e57ca39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vdgpm" podUID="b82ac8fb-8024-40b1-9a10-88793e57ca39" Jul 2 00:00:48.157499 containerd[2020]: time="2024-07-02T00:00:48.157355625Z" level=error msg="Failed to destroy network 
for sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:48.158201 containerd[2020]: time="2024-07-02T00:00:48.158087661Z" level=error msg="encountered an error cleaning up failed sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:48.158849 containerd[2020]: time="2024-07-02T00:00:48.158191965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lfdt,Uid:db41f039-97b5-4424-9900-efa54584157e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:48.160635 kubelet[3434]: E0702 00:00:48.160523 3434 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:48.160635 kubelet[3434]: E0702 00:00:48.160629 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2lfdt" Jul 2 00:00:48.160885 kubelet[3434]: E0702 00:00:48.160690 3434 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2lfdt" Jul 2 00:00:48.163746 kubelet[3434]: E0702 00:00:48.160800 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2lfdt_kube-system(db41f039-97b5-4424-9900-efa54584157e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2lfdt_kube-system(db41f039-97b5-4424-9900-efa54584157e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2lfdt" podUID="db41f039-97b5-4424-9900-efa54584157e" Jul 2 00:00:48.165614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab-shm.mount: Deactivated successfully. 
Jul 2 00:00:48.961986 kubelet[3434]: I0702 00:00:48.961910 3434 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:00:48.963975 containerd[2020]: time="2024-07-02T00:00:48.963619345Z" level=info msg="StopPodSandbox for \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\"" Jul 2 00:00:48.964976 containerd[2020]: time="2024-07-02T00:00:48.964072585Z" level=info msg="Ensure that sandbox 203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab in task-service has been cleanup successfully" Jul 2 00:00:48.969746 kubelet[3434]: I0702 00:00:48.969332 3434 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:00:48.976746 containerd[2020]: time="2024-07-02T00:00:48.975264757Z" level=info msg="StopPodSandbox for \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\"" Jul 2 00:00:48.976746 containerd[2020]: time="2024-07-02T00:00:48.975643705Z" level=info msg="Ensure that sandbox a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700 in task-service has been cleanup successfully" Jul 2 00:00:49.065532 containerd[2020]: time="2024-07-02T00:00:49.065269258Z" level=error msg="StopPodSandbox for \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\" failed" error="failed to destroy network for sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:49.066385 kubelet[3434]: E0702 00:00:49.066304 3434 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:00:49.066514 kubelet[3434]: E0702 00:00:49.066449 3434 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab"} Jul 2 00:00:49.066591 kubelet[3434]: E0702 00:00:49.066528 3434 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db41f039-97b5-4424-9900-efa54584157e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:00:49.066591 kubelet[3434]: E0702 00:00:49.066572 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db41f039-97b5-4424-9900-efa54584157e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2lfdt" podUID="db41f039-97b5-4424-9900-efa54584157e" Jul 2 00:00:49.081525 containerd[2020]: time="2024-07-02T00:00:49.081224830Z" level=error msg="StopPodSandbox for \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\" failed" error="failed to destroy network for sandbox 
\"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:00:49.082024 kubelet[3434]: E0702 00:00:49.081619 3434 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:00:49.082024 kubelet[3434]: E0702 00:00:49.081727 3434 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700"} Jul 2 00:00:49.082024 kubelet[3434]: E0702 00:00:49.081786 3434 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2359edd6-6282-47da-88b1-1e71aa5d1c63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:00:49.082024 kubelet[3434]: E0702 00:00:49.081827 3434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2359edd6-6282-47da-88b1-1e71aa5d1c63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b48tn" podUID="2359edd6-6282-47da-88b1-1e71aa5d1c63" Jul 2 00:00:50.949873 systemd[1]: Started sshd@8-172.31.26.136:22-147.75.109.163:39686.service - OpenSSH per-connection server daemon (147.75.109.163:39686). Jul 2 00:00:51.154002 sshd[4674]: Accepted publickey for core from 147.75.109.163 port 39686 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:51.158505 sshd[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:51.173417 systemd-logind[1992]: New session 9 of user core. Jul 2 00:00:51.181988 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:00:51.492382 sshd[4674]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:51.502043 systemd[1]: sshd@8-172.31.26.136:22-147.75.109.163:39686.service: Deactivated successfully. Jul 2 00:00:51.507599 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:00:51.514958 systemd-logind[1992]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:00:51.518290 systemd-logind[1992]: Removed session 9. Jul 2 00:00:54.489428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617526631.mount: Deactivated successfully. 
Jul 2 00:00:54.608091 containerd[2020]: time="2024-07-02T00:00:54.608008877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:54.609694 containerd[2020]: time="2024-07-02T00:00:54.609594461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 00:00:54.611205 containerd[2020]: time="2024-07-02T00:00:54.611115113Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:54.615245 containerd[2020]: time="2024-07-02T00:00:54.615152321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:00:54.617073 containerd[2020]: time="2024-07-02T00:00:54.616830593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 6.654289437s" Jul 2 00:00:54.617073 containerd[2020]: time="2024-07-02T00:00:54.616941749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 00:00:54.644020 containerd[2020]: time="2024-07-02T00:00:54.643951530Z" level=info msg="CreateContainer within sandbox \"177e8837a7dc9a0fb261a46b3dea8e21d7ac193c16c6878354b4fbb449081ab5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:00:54.681423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166838805.mount: Deactivated 
successfully. Jul 2 00:00:54.686421 containerd[2020]: time="2024-07-02T00:00:54.686291742Z" level=info msg="CreateContainer within sandbox \"177e8837a7dc9a0fb261a46b3dea8e21d7ac193c16c6878354b4fbb449081ab5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ec118ac6ff38770c520801f52fa15583e3b9f0092575974d2c43ae2a52b43900\"" Jul 2 00:00:54.686421 containerd[2020]: time="2024-07-02T00:00:54.687208710Z" level=info msg="StartContainer for \"ec118ac6ff38770c520801f52fa15583e3b9f0092575974d2c43ae2a52b43900\"" Jul 2 00:00:54.736004 systemd[1]: Started cri-containerd-ec118ac6ff38770c520801f52fa15583e3b9f0092575974d2c43ae2a52b43900.scope - libcontainer container ec118ac6ff38770c520801f52fa15583e3b9f0092575974d2c43ae2a52b43900. Jul 2 00:00:54.832644 containerd[2020]: time="2024-07-02T00:00:54.832465951Z" level=info msg="StartContainer for \"ec118ac6ff38770c520801f52fa15583e3b9f0092575974d2c43ae2a52b43900\" returns successfully" Jul 2 00:00:54.991773 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:00:54.993481 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 00:00:55.061812 kubelet[3434]: I0702 00:00:55.061258 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kz55f" podStartSLOduration=4.317848215 podStartE2EDuration="19.061230772s" podCreationTimestamp="2024-07-02 00:00:36 +0000 UTC" firstStartedPulling="2024-07-02 00:00:39.874986148 +0000 UTC m=+33.656859840" lastFinishedPulling="2024-07-02 00:00:54.618368705 +0000 UTC m=+48.400242397" observedRunningTime="2024-07-02 00:00:55.058097308 +0000 UTC m=+48.839971000" watchObservedRunningTime="2024-07-02 00:00:55.061230772 +0000 UTC m=+48.843104488" Jul 2 00:00:56.268985 kubelet[3434]: I0702 00:00:56.268888 3434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:00:56.545543 systemd[1]: Started sshd@9-172.31.26.136:22-147.75.109.163:49072.service - OpenSSH per-connection server daemon (147.75.109.163:49072). Jul 2 00:00:56.740304 sshd[4801]: Accepted publickey for core from 147.75.109.163 port 49072 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:56.749873 sshd[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:56.767153 systemd-logind[1992]: New session 10 of user core. Jul 2 00:00:56.773164 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:00:57.169694 sshd[4801]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:57.180582 systemd[1]: sshd@9-172.31.26.136:22-147.75.109.163:49072.service: Deactivated successfully. Jul 2 00:00:57.194155 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:00:57.198588 systemd-logind[1992]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:00:57.228435 systemd[1]: Started sshd@10-172.31.26.136:22-147.75.109.163:49086.service - OpenSSH per-connection server daemon (147.75.109.163:49086). Jul 2 00:00:57.233691 systemd-logind[1992]: Removed session 10. 
Jul 2 00:00:57.461172 sshd[4903]: Accepted publickey for core from 147.75.109.163 port 49086 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:57.465946 sshd[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:57.486024 systemd-logind[1992]: New session 11 of user core. Jul 2 00:00:57.496976 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:00:57.984107 sshd[4903]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:57.998112 systemd[1]: sshd@10-172.31.26.136:22-147.75.109.163:49086.service: Deactivated successfully. Jul 2 00:00:58.008730 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:00:58.015729 systemd-logind[1992]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:00:58.046408 systemd[1]: Started sshd@11-172.31.26.136:22-147.75.109.163:49092.service - OpenSSH per-connection server daemon (147.75.109.163:49092). Jul 2 00:00:58.052991 systemd-logind[1992]: Removed session 11. Jul 2 00:00:58.108281 systemd-networkd[1927]: vxlan.calico: Link UP Jul 2 00:00:58.108300 systemd-networkd[1927]: vxlan.calico: Gained carrier Jul 2 00:00:58.124932 (udev-worker)[4979]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:00:58.146103 (udev-worker)[4977]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:00:58.147293 (udev-worker)[4985]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:00:58.298723 sshd[4966]: Accepted publickey for core from 147.75.109.163 port 49092 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:58.298966 sshd[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:58.318060 systemd-logind[1992]: New session 12 of user core. Jul 2 00:00:58.326863 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 2 00:00:58.605643 sshd[4966]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:58.611637 systemd-logind[1992]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:00:58.612461 systemd[1]: sshd@11-172.31.26.136:22-147.75.109.163:49092.service: Deactivated successfully. Jul 2 00:00:58.618628 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:00:58.624358 systemd-logind[1992]: Removed session 12. Jul 2 00:00:59.296572 systemd-networkd[1927]: vxlan.calico: Gained IPv6LL Jul 2 00:01:00.535772 containerd[2020]: time="2024-07-02T00:01:00.535241123Z" level=info msg="StopPodSandbox for \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\"" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.672 [INFO][5043] k8s.go 608: Cleaning up netns ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.672 [INFO][5043] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" iface="eth0" netns="/var/run/netns/cni-61a03ba3-457e-c45d-938e-a3ac325658fd" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.674 [INFO][5043] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" iface="eth0" netns="/var/run/netns/cni-61a03ba3-457e-c45d-938e-a3ac325658fd" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.674 [INFO][5043] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" iface="eth0" netns="/var/run/netns/cni-61a03ba3-457e-c45d-938e-a3ac325658fd" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.674 [INFO][5043] k8s.go 615: Releasing IP address(es) ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.674 [INFO][5043] utils.go 188: Calico CNI releasing IP address ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.722 [INFO][5049] ipam_plugin.go 411: Releasing address using handleID ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" HandleID="k8s-pod-network.6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.722 [INFO][5049] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.722 [INFO][5049] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.739 [WARNING][5049] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" HandleID="k8s-pod-network.6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.739 [INFO][5049] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" HandleID="k8s-pod-network.6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.741 [INFO][5049] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:00.747450 containerd[2020]: 2024-07-02 00:01:00.744 [INFO][5043] k8s.go 621: Teardown processing complete. ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:00.749205 containerd[2020]: time="2024-07-02T00:01:00.749130588Z" level=info msg="TearDown network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\" successfully" Jul 2 00:01:00.749205 containerd[2020]: time="2024-07-02T00:01:00.749188200Z" level=info msg="StopPodSandbox for \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\" returns successfully" Jul 2 00:01:00.753771 containerd[2020]: time="2024-07-02T00:01:00.753073332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5597fb45-9mjn4,Uid:d4985aed-3f25-4daa-b2aa-005e064a14f0,Namespace:calico-system,Attempt:1,}" Jul 2 00:01:00.755768 systemd[1]: run-netns-cni\x2d61a03ba3\x2d457e\x2dc45d\x2d938e\x2da3ac325658fd.mount: Deactivated successfully. 
Jul 2 00:01:00.985595 systemd-networkd[1927]: cali963985f347a: Link UP Jul 2 00:01:00.989158 systemd-networkd[1927]: cali963985f347a: Gained carrier Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.855 [INFO][5056] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0 calico-kube-controllers-6b5597fb45- calico-system d4985aed-3f25-4daa-b2aa-005e064a14f0 914 0 2024-07-02 00:00:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b5597fb45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-136 calico-kube-controllers-6b5597fb45-9mjn4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali963985f347a [] []}} ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Namespace="calico-system" Pod="calico-kube-controllers-6b5597fb45-9mjn4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.855 [INFO][5056] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Namespace="calico-system" Pod="calico-kube-controllers-6b5597fb45-9mjn4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.904 [INFO][5067] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" HandleID="k8s-pod-network.53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:01.051460 
containerd[2020]: 2024-07-02 00:01:00.921 [INFO][5067] ipam_plugin.go 264: Auto assigning IP ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" HandleID="k8s-pod-network.53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028ca70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-136", "pod":"calico-kube-controllers-6b5597fb45-9mjn4", "timestamp":"2024-07-02 00:01:00.904718977 +0000 UTC"}, Hostname:"ip-172-31-26-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.921 [INFO][5067] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.921 [INFO][5067] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.921 [INFO][5067] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-136' Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.924 [INFO][5067] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" host="ip-172-31-26-136" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.930 [INFO][5067] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-136" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.938 [INFO][5067] ipam.go 489: Trying affinity for 192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.941 [INFO][5067] ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.945 [INFO][5067] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.945 [INFO][5067] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" host="ip-172-31-26-136" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.947 [INFO][5067] ipam.go 1685: Creating new handle: k8s-pod-network.53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.954 [INFO][5067] ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" host="ip-172-31-26-136" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.970 [INFO][5067] ipam.go 1216: Successfully claimed IPs: [192.168.122.193/26] block=192.168.122.192/26 
handle="k8s-pod-network.53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" host="ip-172-31-26-136" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.970 [INFO][5067] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.193/26] handle="k8s-pod-network.53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" host="ip-172-31-26-136" Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.970 [INFO][5067] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:01.051460 containerd[2020]: 2024-07-02 00:01:00.971 [INFO][5067] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.122.193/26] IPv6=[] ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" HandleID="k8s-pod-network.53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:01.056848 containerd[2020]: 2024-07-02 00:01:00.976 [INFO][5056] k8s.go 386: Populated endpoint ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Namespace="calico-system" Pod="calico-kube-controllers-6b5597fb45-9mjn4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0", GenerateName:"calico-kube-controllers-6b5597fb45-", Namespace:"calico-system", SelfLink:"", UID:"d4985aed-3f25-4daa-b2aa-005e064a14f0", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5597fb45", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"", Pod:"calico-kube-controllers-6b5597fb45-9mjn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali963985f347a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:01.056848 containerd[2020]: 2024-07-02 00:01:00.976 [INFO][5056] k8s.go 387: Calico CNI using IPs: [192.168.122.193/32] ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Namespace="calico-system" Pod="calico-kube-controllers-6b5597fb45-9mjn4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:01.056848 containerd[2020]: 2024-07-02 00:01:00.977 [INFO][5056] dataplane_linux.go 68: Setting the host side veth name to cali963985f347a ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Namespace="calico-system" Pod="calico-kube-controllers-6b5597fb45-9mjn4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:01.056848 containerd[2020]: 2024-07-02 00:01:00.991 [INFO][5056] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Namespace="calico-system" Pod="calico-kube-controllers-6b5597fb45-9mjn4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:01.056848 containerd[2020]: 2024-07-02 00:01:00.991 [INFO][5056] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Namespace="calico-system" Pod="calico-kube-controllers-6b5597fb45-9mjn4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0", GenerateName:"calico-kube-controllers-6b5597fb45-", Namespace:"calico-system", SelfLink:"", UID:"d4985aed-3f25-4daa-b2aa-005e064a14f0", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5597fb45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c", Pod:"calico-kube-controllers-6b5597fb45-9mjn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali963985f347a", MAC:"66:92:6e:13:fa:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:01.056848 containerd[2020]: 2024-07-02 00:01:01.044 [INFO][5056] k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c" Namespace="calico-system" Pod="calico-kube-controllers-6b5597fb45-9mjn4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:01.126389 containerd[2020]: time="2024-07-02T00:01:01.126217714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:01.129132 containerd[2020]: time="2024-07-02T00:01:01.127323226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:01.129898 containerd[2020]: time="2024-07-02T00:01:01.129791386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:01.130317 containerd[2020]: time="2024-07-02T00:01:01.130144198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:01.207514 systemd[1]: Started cri-containerd-53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c.scope - libcontainer container 53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c. 
Jul 2 00:01:01.284768 containerd[2020]: time="2024-07-02T00:01:01.284459939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b5597fb45-9mjn4,Uid:d4985aed-3f25-4daa-b2aa-005e064a14f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c\"" Jul 2 00:01:01.288726 containerd[2020]: time="2024-07-02T00:01:01.288567695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:01:01.528568 containerd[2020]: time="2024-07-02T00:01:01.527959848Z" level=info msg="StopPodSandbox for \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\"" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.647 [INFO][5141] k8s.go 608: Cleaning up netns ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.648 [INFO][5141] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" iface="eth0" netns="/var/run/netns/cni-2e8d745a-0192-412b-f790-5983c8caf5dc" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.648 [INFO][5141] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" iface="eth0" netns="/var/run/netns/cni-2e8d745a-0192-412b-f790-5983c8caf5dc" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.649 [INFO][5141] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" iface="eth0" netns="/var/run/netns/cni-2e8d745a-0192-412b-f790-5983c8caf5dc" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.651 [INFO][5141] k8s.go 615: Releasing IP address(es) ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.651 [INFO][5141] utils.go 188: Calico CNI releasing IP address ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.700 [INFO][5148] ipam_plugin.go 411: Releasing address using handleID ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" HandleID="k8s-pod-network.203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.700 [INFO][5148] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.700 [INFO][5148] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.714 [WARNING][5148] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" HandleID="k8s-pod-network.203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.715 [INFO][5148] ipam_plugin.go 439: Releasing address using workloadID ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" HandleID="k8s-pod-network.203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.718 [INFO][5148] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:01.724485 containerd[2020]: 2024-07-02 00:01:01.721 [INFO][5141] k8s.go 621: Teardown processing complete. ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:01.730491 containerd[2020]: time="2024-07-02T00:01:01.725539453Z" level=info msg="TearDown network for sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\" successfully" Jul 2 00:01:01.730491 containerd[2020]: time="2024-07-02T00:01:01.725584141Z" level=info msg="StopPodSandbox for \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\" returns successfully" Jul 2 00:01:01.730491 containerd[2020]: time="2024-07-02T00:01:01.729019093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lfdt,Uid:db41f039-97b5-4424-9900-efa54584157e,Namespace:kube-system,Attempt:1,}" Jul 2 00:01:01.757567 systemd[1]: run-netns-cni\x2d2e8d745a\x2d0192\x2d412b\x2df790\x2d5983c8caf5dc.mount: Deactivated successfully. 
Jul 2 00:01:01.980570 systemd-networkd[1927]: cali2252ded041a: Link UP Jul 2 00:01:01.985700 systemd-networkd[1927]: cali2252ded041a: Gained carrier Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.854 [INFO][5158] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0 coredns-7db6d8ff4d- kube-system db41f039-97b5-4424-9900-efa54584157e 921 0 2024-07-02 00:00:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-136 coredns-7db6d8ff4d-2lfdt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2252ded041a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lfdt" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.854 [INFO][5158] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lfdt" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.910 [INFO][5165] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" HandleID="k8s-pod-network.257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.928 [INFO][5165] ipam_plugin.go 264: Auto assigning IP ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" 
HandleID="k8s-pod-network.257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000362290), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-136", "pod":"coredns-7db6d8ff4d-2lfdt", "timestamp":"2024-07-02 00:01:01.910378958 +0000 UTC"}, Hostname:"ip-172-31-26-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.928 [INFO][5165] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.928 [INFO][5165] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.928 [INFO][5165] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-136' Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.931 [INFO][5165] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" host="ip-172-31-26-136" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.939 [INFO][5165] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-136" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.947 [INFO][5165] ipam.go 489: Trying affinity for 192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.950 [INFO][5165] ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.953 [INFO][5165] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:02.010278 containerd[2020]: 
2024-07-02 00:01:01.954 [INFO][5165] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" host="ip-172-31-26-136" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.956 [INFO][5165] ipam.go 1685: Creating new handle: k8s-pod-network.257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.962 [INFO][5165] ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" host="ip-172-31-26-136" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.970 [INFO][5165] ipam.go 1216: Successfully claimed IPs: [192.168.122.194/26] block=192.168.122.192/26 handle="k8s-pod-network.257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" host="ip-172-31-26-136" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.970 [INFO][5165] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.194/26] handle="k8s-pod-network.257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" host="ip-172-31-26-136" Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.970 [INFO][5165] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:01:02.010278 containerd[2020]: 2024-07-02 00:01:01.970 [INFO][5165] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.122.194/26] IPv6=[] ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" HandleID="k8s-pod-network.257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:02.011529 containerd[2020]: 2024-07-02 00:01:01.973 [INFO][5158] k8s.go 386: Populated endpoint ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lfdt" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"db41f039-97b5-4424-9900-efa54584157e", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"", Pod:"coredns-7db6d8ff4d-2lfdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2252ded041a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:02.011529 containerd[2020]: 2024-07-02 00:01:01.974 [INFO][5158] k8s.go 387: Calico CNI using IPs: [192.168.122.194/32] ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lfdt" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:02.011529 containerd[2020]: 2024-07-02 00:01:01.974 [INFO][5158] dataplane_linux.go 68: Setting the host side veth name to cali2252ded041a ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lfdt" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:02.011529 containerd[2020]: 2024-07-02 00:01:01.979 [INFO][5158] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lfdt" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:02.011529 containerd[2020]: 2024-07-02 00:01:01.979 [INFO][5158] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lfdt" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"db41f039-97b5-4424-9900-efa54584157e", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a", Pod:"coredns-7db6d8ff4d-2lfdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2252ded041a", MAC:"d6:05:b7:c6:41:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:02.011529 containerd[2020]: 2024-07-02 00:01:02.000 [INFO][5158] k8s.go 500: Wrote updated endpoint to datastore ContainerID="257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-2lfdt" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:02.088792 containerd[2020]: time="2024-07-02T00:01:02.087948251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:02.088792 containerd[2020]: time="2024-07-02T00:01:02.088070507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:02.088792 containerd[2020]: time="2024-07-02T00:01:02.088117727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:02.088792 containerd[2020]: time="2024-07-02T00:01:02.088152335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:02.159463 systemd[1]: run-containerd-runc-k8s.io-257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a-runc.AWQwe3.mount: Deactivated successfully. Jul 2 00:01:02.173150 systemd[1]: Started cri-containerd-257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a.scope - libcontainer container 257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a. 
Jul 2 00:01:02.270251 containerd[2020]: time="2024-07-02T00:01:02.270066143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lfdt,Uid:db41f039-97b5-4424-9900-efa54584157e,Namespace:kube-system,Attempt:1,} returns sandbox id \"257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a\"" Jul 2 00:01:02.283275 containerd[2020]: time="2024-07-02T00:01:02.283212768Z" level=info msg="CreateContainer within sandbox \"257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:01:02.313136 containerd[2020]: time="2024-07-02T00:01:02.312902940Z" level=info msg="CreateContainer within sandbox \"257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e9703d91386d92e4d19235545ca5a450dfd54d743279f33eac03ebb48129cd9\"" Jul 2 00:01:02.315977 containerd[2020]: time="2024-07-02T00:01:02.315880752Z" level=info msg="StartContainer for \"4e9703d91386d92e4d19235545ca5a450dfd54d743279f33eac03ebb48129cd9\"" Jul 2 00:01:02.382997 systemd[1]: Started cri-containerd-4e9703d91386d92e4d19235545ca5a450dfd54d743279f33eac03ebb48129cd9.scope - libcontainer container 4e9703d91386d92e4d19235545ca5a450dfd54d743279f33eac03ebb48129cd9. 
Jul 2 00:01:02.530778 containerd[2020]: time="2024-07-02T00:01:02.529224685Z" level=info msg="StopPodSandbox for \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\"" Jul 2 00:01:02.534681 containerd[2020]: time="2024-07-02T00:01:02.533929093Z" level=info msg="StopPodSandbox for \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\"" Jul 2 00:01:02.550277 containerd[2020]: time="2024-07-02T00:01:02.549968677Z" level=info msg="StartContainer for \"4e9703d91386d92e4d19235545ca5a450dfd54d743279f33eac03ebb48129cd9\" returns successfully" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.777 [INFO][5286] k8s.go 608: Cleaning up netns ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.778 [INFO][5286] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" iface="eth0" netns="/var/run/netns/cni-851f1d80-a7f5-6b1d-ff9c-23a26549b919" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.778 [INFO][5286] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" iface="eth0" netns="/var/run/netns/cni-851f1d80-a7f5-6b1d-ff9c-23a26549b919" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.778 [INFO][5286] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" iface="eth0" netns="/var/run/netns/cni-851f1d80-a7f5-6b1d-ff9c-23a26549b919" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.778 [INFO][5286] k8s.go 615: Releasing IP address(es) ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.778 [INFO][5286] utils.go 188: Calico CNI releasing IP address ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.879 [INFO][5301] ipam_plugin.go 411: Releasing address using handleID ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" HandleID="k8s-pod-network.a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.880 [INFO][5301] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.880 [INFO][5301] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.914 [WARNING][5301] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" HandleID="k8s-pod-network.a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.914 [INFO][5301] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" HandleID="k8s-pod-network.a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.918 [INFO][5301] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:02.924830 containerd[2020]: 2024-07-02 00:01:02.921 [INFO][5286] k8s.go 621: Teardown processing complete. ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:02.930543 containerd[2020]: time="2024-07-02T00:01:02.926832423Z" level=info msg="TearDown network for sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\" successfully" Jul 2 00:01:02.930543 containerd[2020]: time="2024-07-02T00:01:02.926882475Z" level=info msg="StopPodSandbox for \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\" returns successfully" Jul 2 00:01:02.931800 containerd[2020]: time="2024-07-02T00:01:02.931032411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b48tn,Uid:2359edd6-6282-47da-88b1-1e71aa5d1c63,Namespace:kube-system,Attempt:1,}" Jul 2 00:01:02.933710 systemd[1]: run-netns-cni\x2d851f1d80\x2da7f5\x2d6b1d\x2dff9c\x2d23a26549b919.mount: Deactivated successfully. 
Jul 2 00:01:02.944233 systemd-networkd[1927]: cali963985f347a: Gained IPv6LL Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.800 [INFO][5285] k8s.go 608: Cleaning up netns ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.801 [INFO][5285] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" iface="eth0" netns="/var/run/netns/cni-68208f27-e6f1-8517-6276-4f0b0f336754" Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.801 [INFO][5285] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" iface="eth0" netns="/var/run/netns/cni-68208f27-e6f1-8517-6276-4f0b0f336754" Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.802 [INFO][5285] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" iface="eth0" netns="/var/run/netns/cni-68208f27-e6f1-8517-6276-4f0b0f336754" Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.803 [INFO][5285] k8s.go 615: Releasing IP address(es) ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.803 [INFO][5285] utils.go 188: Calico CNI releasing IP address ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.894 [INFO][5305] ipam_plugin.go 411: Releasing address using handleID ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" HandleID="k8s-pod-network.c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.894 [INFO][5305] 
ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.918 [INFO][5305] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.953 [WARNING][5305] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" HandleID="k8s-pod-network.c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.953 [INFO][5305] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" HandleID="k8s-pod-network.c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.961 [INFO][5305] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:02.977907 containerd[2020]: 2024-07-02 00:01:02.964 [INFO][5285] k8s.go 621: Teardown processing complete. 
ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:02.981599 containerd[2020]: time="2024-07-02T00:01:02.979127619Z" level=info msg="TearDown network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\" successfully" Jul 2 00:01:02.981599 containerd[2020]: time="2024-07-02T00:01:02.979183671Z" level=info msg="StopPodSandbox for \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\" returns successfully" Jul 2 00:01:02.981599 containerd[2020]: time="2024-07-02T00:01:02.980162499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdgpm,Uid:b82ac8fb-8024-40b1-9a10-88793e57ca39,Namespace:calico-system,Attempt:1,}" Jul 2 00:01:02.992120 systemd[1]: run-netns-cni\x2d68208f27\x2de6f1\x2d8517\x2d6276\x2d4f0b0f336754.mount: Deactivated successfully. Jul 2 00:01:03.235122 kubelet[3434]: I0702 00:01:03.234571 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2lfdt" podStartSLOduration=44.234551532 podStartE2EDuration="44.234551532s" podCreationTimestamp="2024-07-02 00:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:01:03.145391592 +0000 UTC m=+56.927265332" watchObservedRunningTime="2024-07-02 00:01:03.234551532 +0000 UTC m=+57.016425236" Jul 2 00:01:03.519407 systemd-networkd[1927]: calibbb640cbdeb: Link UP Jul 2 00:01:03.522738 systemd-networkd[1927]: calibbb640cbdeb: Gained carrier Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.222 [INFO][5315] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0 coredns-7db6d8ff4d- kube-system 2359edd6-6282-47da-88b1-1e71aa5d1c63 932 0 2024-07-02 00:00:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-136 coredns-7db6d8ff4d-b48tn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibbb640cbdeb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b48tn" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.222 [INFO][5315] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b48tn" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.351 [INFO][5341] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" HandleID="k8s-pod-network.fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.412 [INFO][5341] ipam_plugin.go 264: Auto assigning IP ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" HandleID="k8s-pod-network.fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002bc9c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-136", "pod":"coredns-7db6d8ff4d-b48tn", "timestamp":"2024-07-02 00:01:03.350646205 +0000 UTC"}, Hostname:"ip-172-31-26-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.413 [INFO][5341] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.413 [INFO][5341] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.413 [INFO][5341] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-136' Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.425 [INFO][5341] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" host="ip-172-31-26-136" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.445 [INFO][5341] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-136" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.462 [INFO][5341] ipam.go 489: Trying affinity for 192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.470 [INFO][5341] ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.477 [INFO][5341] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.477 [INFO][5341] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" host="ip-172-31-26-136" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.480 [INFO][5341] ipam.go 1685: Creating new handle: k8s-pod-network.fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.486 [INFO][5341] ipam.go 1203: Writing block in order to claim IPs 
block=192.168.122.192/26 handle="k8s-pod-network.fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" host="ip-172-31-26-136" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.496 [INFO][5341] ipam.go 1216: Successfully claimed IPs: [192.168.122.195/26] block=192.168.122.192/26 handle="k8s-pod-network.fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" host="ip-172-31-26-136" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.496 [INFO][5341] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.195/26] handle="k8s-pod-network.fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" host="ip-172-31-26-136" Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.497 [INFO][5341] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:03.575014 containerd[2020]: 2024-07-02 00:01:03.497 [INFO][5341] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.122.195/26] IPv6=[] ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" HandleID="k8s-pod-network.fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:03.581408 containerd[2020]: 2024-07-02 00:01:03.501 [INFO][5315] k8s.go 386: Populated endpoint ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b48tn" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2359edd6-6282-47da-88b1-1e71aa5d1c63", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 19, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"", Pod:"coredns-7db6d8ff4d-b48tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbb640cbdeb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:03.581408 containerd[2020]: 2024-07-02 00:01:03.503 [INFO][5315] k8s.go 387: Calico CNI using IPs: [192.168.122.195/32] ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b48tn" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:03.581408 containerd[2020]: 2024-07-02 00:01:03.503 [INFO][5315] dataplane_linux.go 68: Setting the host side veth name to calibbb640cbdeb ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b48tn" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 
00:01:03.581408 containerd[2020]: 2024-07-02 00:01:03.524 [INFO][5315] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b48tn" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:03.581408 containerd[2020]: 2024-07-02 00:01:03.525 [INFO][5315] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b48tn" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2359edd6-6282-47da-88b1-1e71aa5d1c63", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f", Pod:"coredns-7db6d8ff4d-b48tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbb640cbdeb", 
MAC:"72:cd:f6:46:96:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:03.581408 containerd[2020]: 2024-07-02 00:01:03.568 [INFO][5315] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b48tn" WorkloadEndpoint="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:03.669749 systemd[1]: Started sshd@12-172.31.26.136:22-147.75.109.163:51344.service - OpenSSH per-connection server daemon (147.75.109.163:51344). Jul 2 00:01:03.697472 systemd-networkd[1927]: cali83a3e0081eb: Link UP Jul 2 00:01:03.717325 containerd[2020]: time="2024-07-02T00:01:03.712134723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:03.721048 containerd[2020]: time="2024-07-02T00:01:03.719356155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:03.721048 containerd[2020]: time="2024-07-02T00:01:03.719425563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:03.721048 containerd[2020]: time="2024-07-02T00:01:03.719451531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:03.731625 systemd-networkd[1927]: cali83a3e0081eb: Gained carrier Jul 2 00:01:03.821005 systemd[1]: run-containerd-runc-k8s.io-fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f-runc.K1ZL0c.mount: Deactivated successfully. Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.242 [INFO][5325] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0 csi-node-driver- calico-system b82ac8fb-8024-40b1-9a10-88793e57ca39 933 0 2024-07-02 00:00:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-26-136 csi-node-driver-vdgpm eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali83a3e0081eb [] []}} ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Namespace="calico-system" Pod="csi-node-driver-vdgpm" WorkloadEndpoint="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.242 [INFO][5325] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Namespace="calico-system" Pod="csi-node-driver-vdgpm" WorkloadEndpoint="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.427 [INFO][5345] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" HandleID="k8s-pod-network.1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:03.834127 
containerd[2020]: 2024-07-02 00:01:03.467 [INFO][5345] ipam_plugin.go 264: Auto assigning IP ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" HandleID="k8s-pod-network.1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033e2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-136", "pod":"csi-node-driver-vdgpm", "timestamp":"2024-07-02 00:01:03.427203337 +0000 UTC"}, Hostname:"ip-172-31-26-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.467 [INFO][5345] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.497 [INFO][5345] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.497 [INFO][5345] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-136' Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.509 [INFO][5345] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" host="ip-172-31-26-136" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.534 [INFO][5345] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-136" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.561 [INFO][5345] ipam.go 489: Trying affinity for 192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.577 [INFO][5345] ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.600 [INFO][5345] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.600 [INFO][5345] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" host="ip-172-31-26-136" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.614 [INFO][5345] ipam.go 1685: Creating new handle: k8s-pod-network.1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.632 [INFO][5345] ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" host="ip-172-31-26-136" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.665 [INFO][5345] ipam.go 1216: Successfully claimed IPs: [192.168.122.196/26] block=192.168.122.192/26 
handle="k8s-pod-network.1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" host="ip-172-31-26-136" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.666 [INFO][5345] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.196/26] handle="k8s-pod-network.1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" host="ip-172-31-26-136" Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.666 [INFO][5345] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:03.834127 containerd[2020]: 2024-07-02 00:01:03.666 [INFO][5345] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.122.196/26] IPv6=[] ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" HandleID="k8s-pod-network.1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:03.836872 containerd[2020]: 2024-07-02 00:01:03.681 [INFO][5325] k8s.go 386: Populated endpoint ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Namespace="calico-system" Pod="csi-node-driver-vdgpm" WorkloadEndpoint="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b82ac8fb-8024-40b1-9a10-88793e57ca39", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"", Pod:"csi-node-driver-vdgpm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83a3e0081eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:03.836872 containerd[2020]: 2024-07-02 00:01:03.682 [INFO][5325] k8s.go 387: Calico CNI using IPs: [192.168.122.196/32] ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Namespace="calico-system" Pod="csi-node-driver-vdgpm" WorkloadEndpoint="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:03.836872 containerd[2020]: 2024-07-02 00:01:03.682 [INFO][5325] dataplane_linux.go 68: Setting the host side veth name to cali83a3e0081eb ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Namespace="calico-system" Pod="csi-node-driver-vdgpm" WorkloadEndpoint="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:03.836872 containerd[2020]: 2024-07-02 00:01:03.701 [INFO][5325] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Namespace="calico-system" Pod="csi-node-driver-vdgpm" WorkloadEndpoint="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:03.836872 containerd[2020]: 2024-07-02 00:01:03.709 [INFO][5325] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Namespace="calico-system" Pod="csi-node-driver-vdgpm" 
WorkloadEndpoint="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b82ac8fb-8024-40b1-9a10-88793e57ca39", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a", Pod:"csi-node-driver-vdgpm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83a3e0081eb", MAC:"26:88:a2:ad:aa:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:03.836872 containerd[2020]: 2024-07-02 00:01:03.796 [INFO][5325] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a" Namespace="calico-system" Pod="csi-node-driver-vdgpm" WorkloadEndpoint="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:03.849412 systemd[1]: Started 
cri-containerd-fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f.scope - libcontainer container fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f. Jul 2 00:01:03.944174 sshd[5384]: Accepted publickey for core from 147.75.109.163 port 51344 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:03.965958 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:03.984938 containerd[2020]: time="2024-07-02T00:01:03.980208484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:03.984938 containerd[2020]: time="2024-07-02T00:01:03.980322340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:03.984938 containerd[2020]: time="2024-07-02T00:01:03.980361448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:03.984938 containerd[2020]: time="2024-07-02T00:01:03.980403376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:03.992798 systemd-logind[1992]: New session 13 of user core. Jul 2 00:01:04.017156 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:01:04.032294 systemd-networkd[1927]: cali2252ded041a: Gained IPv6LL Jul 2 00:01:04.083171 systemd[1]: Started cri-containerd-1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a.scope - libcontainer container 1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a. 
Jul 2 00:01:04.132621 containerd[2020]: time="2024-07-02T00:01:04.132548329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b48tn,Uid:2359edd6-6282-47da-88b1-1e71aa5d1c63,Namespace:kube-system,Attempt:1,} returns sandbox id \"fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f\"" Jul 2 00:01:04.148759 containerd[2020]: time="2024-07-02T00:01:04.147746053Z" level=info msg="CreateContainer within sandbox \"fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:01:04.280371 containerd[2020]: time="2024-07-02T00:01:04.280309645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdgpm,Uid:b82ac8fb-8024-40b1-9a10-88793e57ca39,Namespace:calico-system,Attempt:1,} returns sandbox id \"1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a\"" Jul 2 00:01:04.284326 containerd[2020]: time="2024-07-02T00:01:04.283810789Z" level=info msg="CreateContainer within sandbox \"fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"720e840e8d54e8533e1ca10fd16c57e9641e37d24f0c78e1d5ead4606d4a2a98\"" Jul 2 00:01:04.287142 containerd[2020]: time="2024-07-02T00:01:04.285865357Z" level=info msg="StartContainer for \"720e840e8d54e8533e1ca10fd16c57e9641e37d24f0c78e1d5ead4606d4a2a98\"" Jul 2 00:01:04.410413 sshd[5384]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:04.425736 systemd[1]: Started cri-containerd-720e840e8d54e8533e1ca10fd16c57e9641e37d24f0c78e1d5ead4606d4a2a98.scope - libcontainer container 720e840e8d54e8533e1ca10fd16c57e9641e37d24f0c78e1d5ead4606d4a2a98. Jul 2 00:01:04.428444 systemd[1]: sshd@12-172.31.26.136:22-147.75.109.163:51344.service: Deactivated successfully. Jul 2 00:01:04.438297 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:01:04.444953 systemd-logind[1992]: Session 13 logged out. 
Waiting for processes to exit. Jul 2 00:01:04.453013 systemd-logind[1992]: Removed session 13. Jul 2 00:01:04.556040 containerd[2020]: time="2024-07-02T00:01:04.555875523Z" level=info msg="StartContainer for \"720e840e8d54e8533e1ca10fd16c57e9641e37d24f0c78e1d5ead4606d4a2a98\" returns successfully" Jul 2 00:01:04.762221 systemd[1]: run-containerd-runc-k8s.io-1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a-runc.5wsmNN.mount: Deactivated successfully. Jul 2 00:01:05.120414 systemd-networkd[1927]: cali83a3e0081eb: Gained IPv6LL Jul 2 00:01:05.311894 systemd-networkd[1927]: calibbb640cbdeb: Gained IPv6LL Jul 2 00:01:05.763233 containerd[2020]: time="2024-07-02T00:01:05.763163645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:05.768785 containerd[2020]: time="2024-07-02T00:01:05.768443321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 00:01:05.771112 containerd[2020]: time="2024-07-02T00:01:05.770901245Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:05.782013 containerd[2020]: time="2024-07-02T00:01:05.781850897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:05.786909 containerd[2020]: time="2024-07-02T00:01:05.786616661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 4.497971782s" Jul 2 00:01:05.786909 containerd[2020]: time="2024-07-02T00:01:05.786708809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 00:01:05.790710 containerd[2020]: time="2024-07-02T00:01:05.789545813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:01:05.831686 containerd[2020]: time="2024-07-02T00:01:05.828885245Z" level=info msg="CreateContainer within sandbox \"53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:01:05.877141 containerd[2020]: time="2024-07-02T00:01:05.876939833Z" level=info msg="CreateContainer within sandbox \"53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"64bff3b825fd6da28b667fa5c9e2336da4a06c370ce383c40c04f733e4dfddda\"" Jul 2 00:01:05.879363 containerd[2020]: time="2024-07-02T00:01:05.879293309Z" level=info msg="StartContainer for \"64bff3b825fd6da28b667fa5c9e2336da4a06c370ce383c40c04f733e4dfddda\"" Jul 2 00:01:05.977019 systemd[1]: Started cri-containerd-64bff3b825fd6da28b667fa5c9e2336da4a06c370ce383c40c04f733e4dfddda.scope - libcontainer container 64bff3b825fd6da28b667fa5c9e2336da4a06c370ce383c40c04f733e4dfddda. 
Jul 2 00:01:06.071496 containerd[2020]: time="2024-07-02T00:01:06.071426330Z" level=info msg="StartContainer for \"64bff3b825fd6da28b667fa5c9e2336da4a06c370ce383c40c04f733e4dfddda\" returns successfully" Jul 2 00:01:06.166827 kubelet[3434]: I0702 00:01:06.166107 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b48tn" podStartSLOduration=47.166084023 podStartE2EDuration="47.166084023s" podCreationTimestamp="2024-07-02 00:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:01:05.15541835 +0000 UTC m=+58.937292030" watchObservedRunningTime="2024-07-02 00:01:06.166084023 +0000 UTC m=+59.947957715" Jul 2 00:01:06.211495 kubelet[3434]: I0702 00:01:06.211387 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b5597fb45-9mjn4" podStartSLOduration=30.710366641 podStartE2EDuration="35.211365159s" podCreationTimestamp="2024-07-02 00:00:31 +0000 UTC" firstStartedPulling="2024-07-02 00:01:01.287838515 +0000 UTC m=+55.069712207" lastFinishedPulling="2024-07-02 00:01:05.788837033 +0000 UTC m=+59.570710725" observedRunningTime="2024-07-02 00:01:06.166542855 +0000 UTC m=+59.948416547" watchObservedRunningTime="2024-07-02 00:01:06.211365159 +0000 UTC m=+59.993238851" Jul 2 00:01:06.498178 containerd[2020]: time="2024-07-02T00:01:06.498009388Z" level=info msg="StopPodSandbox for \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\"" Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.604 [WARNING][5603] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0", GenerateName:"calico-kube-controllers-6b5597fb45-", Namespace:"calico-system", SelfLink:"", UID:"d4985aed-3f25-4daa-b2aa-005e064a14f0", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5597fb45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c", Pod:"calico-kube-controllers-6b5597fb45-9mjn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali963985f347a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.604 [INFO][5603] k8s.go 608: Cleaning up netns ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.604 [INFO][5603] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" iface="eth0" netns="" Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.605 [INFO][5603] k8s.go 615: Releasing IP address(es) ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.605 [INFO][5603] utils.go 188: Calico CNI releasing IP address ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.653 [INFO][5612] ipam_plugin.go 411: Releasing address using handleID ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" HandleID="k8s-pod-network.6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.654 [INFO][5612] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.654 [INFO][5612] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.674 [WARNING][5612] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" HandleID="k8s-pod-network.6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.674 [INFO][5612] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" HandleID="k8s-pod-network.6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.677 [INFO][5612] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:06.683462 containerd[2020]: 2024-07-02 00:01:06.679 [INFO][5603] k8s.go 621: Teardown processing complete. ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:06.684461 containerd[2020]: time="2024-07-02T00:01:06.683508281Z" level=info msg="TearDown network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\" successfully" Jul 2 00:01:06.684461 containerd[2020]: time="2024-07-02T00:01:06.683545877Z" level=info msg="StopPodSandbox for \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\" returns successfully" Jul 2 00:01:06.685330 containerd[2020]: time="2024-07-02T00:01:06.685253141Z" level=info msg="RemovePodSandbox for \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\"" Jul 2 00:01:06.685482 containerd[2020]: time="2024-07-02T00:01:06.685324733Z" level=info msg="Forcibly stopping sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\"" Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.809 [WARNING][5632] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0", GenerateName:"calico-kube-controllers-6b5597fb45-", Namespace:"calico-system", SelfLink:"", UID:"d4985aed-3f25-4daa-b2aa-005e064a14f0", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b5597fb45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"53536469c67336c883b6d633b0f08f2a0b891e001c67acf53ccd00e6da29aa0c", Pod:"calico-kube-controllers-6b5597fb45-9mjn4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali963985f347a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.809 [INFO][5632] k8s.go 608: Cleaning up netns ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.809 [INFO][5632] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" iface="eth0" netns="" Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.809 [INFO][5632] k8s.go 615: Releasing IP address(es) ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.809 [INFO][5632] utils.go 188: Calico CNI releasing IP address ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.864 [INFO][5638] ipam_plugin.go 411: Releasing address using handleID ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" HandleID="k8s-pod-network.6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.866 [INFO][5638] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.866 [INFO][5638] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.881 [WARNING][5638] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" HandleID="k8s-pod-network.6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.881 [INFO][5638] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" HandleID="k8s-pod-network.6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Workload="ip--172--31--26--136-k8s-calico--kube--controllers--6b5597fb45--9mjn4-eth0" Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.885 [INFO][5638] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:06.891511 containerd[2020]: 2024-07-02 00:01:06.888 [INFO][5632] k8s.go 621: Teardown processing complete. ContainerID="6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848" Jul 2 00:01:06.894577 containerd[2020]: time="2024-07-02T00:01:06.892678770Z" level=info msg="TearDown network for sandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\" successfully" Jul 2 00:01:06.921330 containerd[2020]: time="2024-07-02T00:01:06.920059483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:01:06.921330 containerd[2020]: time="2024-07-02T00:01:06.920188783Z" level=info msg="RemovePodSandbox \"6d49e40129b1f25ea774e6c12585577cfc875226658be0ccd7cb694571ad1848\" returns successfully" Jul 2 00:01:06.921760 containerd[2020]: time="2024-07-02T00:01:06.921587839Z" level=info msg="StopPodSandbox for \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\"" Jul 2 00:01:06.921904 containerd[2020]: time="2024-07-02T00:01:06.921814519Z" level=info msg="TearDown network for sandbox \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\" successfully" Jul 2 00:01:06.921987 containerd[2020]: time="2024-07-02T00:01:06.921895171Z" level=info msg="StopPodSandbox for \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\" returns successfully" Jul 2 00:01:06.923851 containerd[2020]: time="2024-07-02T00:01:06.923768443Z" level=info msg="RemovePodSandbox for \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\"" Jul 2 00:01:06.924007 containerd[2020]: time="2024-07-02T00:01:06.923877631Z" level=info msg="Forcibly stopping sandbox \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\"" Jul 2 00:01:06.925228 containerd[2020]: time="2024-07-02T00:01:06.924080611Z" level=info msg="TearDown network for sandbox \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\" successfully" Jul 2 00:01:06.946536 containerd[2020]: time="2024-07-02T00:01:06.946450063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:01:06.946755 containerd[2020]: time="2024-07-02T00:01:06.946560667Z" level=info msg="RemovePodSandbox \"ead7c72787539dcfc1f15aae6666b1b1eb99a9b8ac2cf1cd238dd4b4f479a4ea\" returns successfully" Jul 2 00:01:06.948734 containerd[2020]: time="2024-07-02T00:01:06.948346675Z" level=info msg="StopPodSandbox for \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\"" Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.089 [WARNING][5656] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b82ac8fb-8024-40b1-9a10-88793e57ca39", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a", Pod:"csi-node-driver-vdgpm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali83a3e0081eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.090 [INFO][5656] k8s.go 608: Cleaning up netns ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.090 [INFO][5656] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" iface="eth0" netns="" Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.090 [INFO][5656] k8s.go 615: Releasing IP address(es) ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.090 [INFO][5656] utils.go 188: Calico CNI releasing IP address ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.222 [INFO][5662] ipam_plugin.go 411: Releasing address using handleID ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" HandleID="k8s-pod-network.c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.223 [INFO][5662] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.223 [INFO][5662] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.246 [WARNING][5662] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" HandleID="k8s-pod-network.c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.247 [INFO][5662] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" HandleID="k8s-pod-network.c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.253 [INFO][5662] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:07.263574 containerd[2020]: 2024-07-02 00:01:07.259 [INFO][5656] k8s.go 621: Teardown processing complete. ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:07.263574 containerd[2020]: time="2024-07-02T00:01:07.263072812Z" level=info msg="TearDown network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\" successfully" Jul 2 00:01:07.263574 containerd[2020]: time="2024-07-02T00:01:07.263112544Z" level=info msg="StopPodSandbox for \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\" returns successfully" Jul 2 00:01:07.269574 containerd[2020]: time="2024-07-02T00:01:07.269038228Z" level=info msg="RemovePodSandbox for \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\"" Jul 2 00:01:07.269574 containerd[2020]: time="2024-07-02T00:01:07.269122288Z" level=info msg="Forcibly stopping sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\"" Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.432 [WARNING][5687] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b82ac8fb-8024-40b1-9a10-88793e57ca39", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a", Pod:"csi-node-driver-vdgpm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83a3e0081eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.432 [INFO][5687] k8s.go 608: Cleaning up netns ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.433 [INFO][5687] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" iface="eth0" netns="" Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.433 [INFO][5687] k8s.go 615: Releasing IP address(es) ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.433 [INFO][5687] utils.go 188: Calico CNI releasing IP address ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.544 [INFO][5693] ipam_plugin.go 411: Releasing address using handleID ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" HandleID="k8s-pod-network.c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.544 [INFO][5693] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.545 [INFO][5693] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.565 [WARNING][5693] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" HandleID="k8s-pod-network.c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.565 [INFO][5693] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" HandleID="k8s-pod-network.c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Workload="ip--172--31--26--136-k8s-csi--node--driver--vdgpm-eth0" Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.569 [INFO][5693] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:07.575723 containerd[2020]: 2024-07-02 00:01:07.573 [INFO][5687] k8s.go 621: Teardown processing complete. ContainerID="c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316" Jul 2 00:01:07.579174 containerd[2020]: time="2024-07-02T00:01:07.577832190Z" level=info msg="TearDown network for sandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\" successfully" Jul 2 00:01:07.588237 containerd[2020]: time="2024-07-02T00:01:07.587680170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:01:07.588237 containerd[2020]: time="2024-07-02T00:01:07.587787762Z" level=info msg="RemovePodSandbox \"c0d9e9e9754c4d9edb62d6e3dab1ccbe63d66ca046547e1bf2a743a383283316\" returns successfully" Jul 2 00:01:07.589474 containerd[2020]: time="2024-07-02T00:01:07.588951318Z" level=info msg="StopPodSandbox for \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\"" Jul 2 00:01:07.616706 containerd[2020]: time="2024-07-02T00:01:07.615886446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:07.618976 containerd[2020]: time="2024-07-02T00:01:07.618894510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 00:01:07.623987 containerd[2020]: time="2024-07-02T00:01:07.623578494Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:07.635760 containerd[2020]: time="2024-07-02T00:01:07.635158998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:07.640617 containerd[2020]: time="2024-07-02T00:01:07.639603846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.849860021s" Jul 2 00:01:07.640617 containerd[2020]: time="2024-07-02T00:01:07.639705918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference 
\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 00:01:07.644343 ntpd[1986]: Listen normally on 7 vxlan.calico 192.168.122.192:123 Jul 2 00:01:07.644507 ntpd[1986]: Listen normally on 8 vxlan.calico [fe80::641f:1eff:fe06:6063%4]:123 Jul 2 00:01:07.645341 ntpd[1986]: 2 Jul 00:01:07 ntpd[1986]: Listen normally on 7 vxlan.calico 192.168.122.192:123 Jul 2 00:01:07.645341 ntpd[1986]: 2 Jul 00:01:07 ntpd[1986]: Listen normally on 8 vxlan.calico [fe80::641f:1eff:fe06:6063%4]:123 Jul 2 00:01:07.645341 ntpd[1986]: 2 Jul 00:01:07 ntpd[1986]: Listen normally on 9 cali963985f347a [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:01:07.645341 ntpd[1986]: 2 Jul 00:01:07 ntpd[1986]: Listen normally on 10 cali2252ded041a [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:01:07.645341 ntpd[1986]: 2 Jul 00:01:07 ntpd[1986]: Listen normally on 11 calibbb640cbdeb [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:01:07.645341 ntpd[1986]: 2 Jul 00:01:07 ntpd[1986]: Listen normally on 12 cali83a3e0081eb [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:01:07.644593 ntpd[1986]: Listen normally on 9 cali963985f347a [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:01:07.644691 ntpd[1986]: Listen normally on 10 cali2252ded041a [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:01:07.644764 ntpd[1986]: Listen normally on 11 calibbb640cbdeb [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:01:07.644830 ntpd[1986]: Listen normally on 12 cali83a3e0081eb [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:01:07.650159 containerd[2020]: time="2024-07-02T00:01:07.650089206Z" level=info msg="CreateContainer within sandbox \"1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:01:07.708176 containerd[2020]: time="2024-07-02T00:01:07.707935734Z" level=info msg="CreateContainer within sandbox \"1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"d7abad137f67c45cf902499ae51932febbb443781943f361c3def43cd0eca62f\"" Jul 2 00:01:07.711776 containerd[2020]: time="2024-07-02T00:01:07.711725298Z" level=info msg="StartContainer for \"d7abad137f67c45cf902499ae51932febbb443781943f361c3def43cd0eca62f\"" Jul 2 00:01:07.848008 systemd[1]: Started cri-containerd-d7abad137f67c45cf902499ae51932febbb443781943f361c3def43cd0eca62f.scope - libcontainer container d7abad137f67c45cf902499ae51932febbb443781943f361c3def43cd0eca62f. Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.724 [WARNING][5711] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2359edd6-6282-47da-88b1-1e71aa5d1c63", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f", Pod:"coredns-7db6d8ff4d-b48tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calibbb640cbdeb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.724 [INFO][5711] k8s.go 608: Cleaning up netns ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.724 [INFO][5711] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" iface="eth0" netns="" Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.724 [INFO][5711] k8s.go 615: Releasing IP address(es) ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.724 [INFO][5711] utils.go 188: Calico CNI releasing IP address ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.801 [INFO][5719] ipam_plugin.go 411: Releasing address using handleID ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" HandleID="k8s-pod-network.a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.802 [INFO][5719] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.802 [INFO][5719] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.831 [WARNING][5719] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" HandleID="k8s-pod-network.a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.832 [INFO][5719] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" HandleID="k8s-pod-network.a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.839 [INFO][5719] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:07.864217 containerd[2020]: 2024-07-02 00:01:07.858 [INFO][5711] k8s.go 621: Teardown processing complete. 
ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:07.867070 containerd[2020]: time="2024-07-02T00:01:07.864267607Z" level=info msg="TearDown network for sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\" successfully" Jul 2 00:01:07.867070 containerd[2020]: time="2024-07-02T00:01:07.864304903Z" level=info msg="StopPodSandbox for \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\" returns successfully" Jul 2 00:01:07.867070 containerd[2020]: time="2024-07-02T00:01:07.865880035Z" level=info msg="RemovePodSandbox for \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\"" Jul 2 00:01:07.867070 containerd[2020]: time="2024-07-02T00:01:07.865933927Z" level=info msg="Forcibly stopping sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\"" Jul 2 00:01:08.027896 containerd[2020]: time="2024-07-02T00:01:08.025286920Z" level=info msg="StartContainer for \"d7abad137f67c45cf902499ae51932febbb443781943f361c3def43cd0eca62f\" returns successfully" Jul 2 00:01:08.031474 containerd[2020]: time="2024-07-02T00:01:08.030301132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:07.995 [WARNING][5760] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2359edd6-6282-47da-88b1-1e71aa5d1c63", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"fd3d123bbd44c987ce8a8502ec83147e8797ae8efa1739c74b89cb1a17bf315f", Pod:"coredns-7db6d8ff4d-b48tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbb640cbdeb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:07.996 [INFO][5760] k8s.go 608: Cleaning up netns 
ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:07.996 [INFO][5760] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" iface="eth0" netns="" Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:07.996 [INFO][5760] k8s.go 615: Releasing IP address(es) ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:07.996 [INFO][5760] utils.go 188: Calico CNI releasing IP address ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:08.081 [INFO][5767] ipam_plugin.go 411: Releasing address using handleID ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" HandleID="k8s-pod-network.a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:08.081 [INFO][5767] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:08.081 [INFO][5767] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:08.101 [WARNING][5767] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" HandleID="k8s-pod-network.a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:08.101 [INFO][5767] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" HandleID="k8s-pod-network.a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--b48tn-eth0" Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:08.104 [INFO][5767] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:08.111473 containerd[2020]: 2024-07-02 00:01:08.107 [INFO][5760] k8s.go 621: Teardown processing complete. ContainerID="a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700" Jul 2 00:01:08.112367 containerd[2020]: time="2024-07-02T00:01:08.111483100Z" level=info msg="TearDown network for sandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\" successfully" Jul 2 00:01:08.118170 containerd[2020]: time="2024-07-02T00:01:08.118093072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:01:08.118310 containerd[2020]: time="2024-07-02T00:01:08.118215916Z" level=info msg="RemovePodSandbox \"a0d7db0ee2059f4f2af192e81775b9e8b14b4d2b02f6a693b37ba4dae92d3700\" returns successfully" Jul 2 00:01:08.119247 containerd[2020]: time="2024-07-02T00:01:08.119188696Z" level=info msg="StopPodSandbox for \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\"" Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.232 [WARNING][5794] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"db41f039-97b5-4424-9900-efa54584157e", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a", Pod:"coredns-7db6d8ff4d-2lfdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2252ded041a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.232 [INFO][5794] k8s.go 608: Cleaning up netns ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.232 [INFO][5794] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" iface="eth0" netns="" Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.232 [INFO][5794] k8s.go 615: Releasing IP address(es) ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.232 [INFO][5794] utils.go 188: Calico CNI releasing IP address ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.282 [INFO][5800] ipam_plugin.go 411: Releasing address using handleID ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" HandleID="k8s-pod-network.203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.283 [INFO][5800] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.283 [INFO][5800] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.304 [WARNING][5800] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" HandleID="k8s-pod-network.203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.304 [INFO][5800] ipam_plugin.go 439: Releasing address using workloadID ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" HandleID="k8s-pod-network.203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.308 [INFO][5800] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:08.312768 containerd[2020]: 2024-07-02 00:01:08.310 [INFO][5794] k8s.go 621: Teardown processing complete. 
ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:08.315000 containerd[2020]: time="2024-07-02T00:01:08.312878081Z" level=info msg="TearDown network for sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\" successfully" Jul 2 00:01:08.315000 containerd[2020]: time="2024-07-02T00:01:08.312917669Z" level=info msg="StopPodSandbox for \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\" returns successfully" Jul 2 00:01:08.315000 containerd[2020]: time="2024-07-02T00:01:08.313782353Z" level=info msg="RemovePodSandbox for \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\"" Jul 2 00:01:08.315000 containerd[2020]: time="2024-07-02T00:01:08.313902821Z" level=info msg="Forcibly stopping sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\"" Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.411 [WARNING][5820] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"db41f039-97b5-4424-9900-efa54584157e", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"257fbaf3cc5997c03b17a532c19e425846a9975d60d1011a745ca0d2f0340b4a", Pod:"coredns-7db6d8ff4d-2lfdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2252ded041a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.412 [INFO][5820] k8s.go 608: Cleaning up netns 
ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.412 [INFO][5820] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" iface="eth0" netns="" Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.412 [INFO][5820] k8s.go 615: Releasing IP address(es) ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.412 [INFO][5820] utils.go 188: Calico CNI releasing IP address ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.468 [INFO][5829] ipam_plugin.go 411: Releasing address using handleID ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" HandleID="k8s-pod-network.203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.469 [INFO][5829] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.469 [INFO][5829] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.482 [WARNING][5829] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" HandleID="k8s-pod-network.203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.483 [INFO][5829] ipam_plugin.go 439: Releasing address using workloadID ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" HandleID="k8s-pod-network.203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Workload="ip--172--31--26--136-k8s-coredns--7db6d8ff4d--2lfdt-eth0" Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.487 [INFO][5829] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:01:08.500703 containerd[2020]: 2024-07-02 00:01:08.494 [INFO][5820] k8s.go 621: Teardown processing complete. ContainerID="203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab" Jul 2 00:01:08.500703 containerd[2020]: time="2024-07-02T00:01:08.500197962Z" level=info msg="TearDown network for sandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\" successfully" Jul 2 00:01:08.517378 containerd[2020]: time="2024-07-02T00:01:08.517120830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:01:08.517378 containerd[2020]: time="2024-07-02T00:01:08.517220862Z" level=info msg="RemovePodSandbox \"203ee82eb33121553ea2cf1e4ae792d13222302bbdf7951b202128cb080547ab\" returns successfully" Jul 2 00:01:08.518137 containerd[2020]: time="2024-07-02T00:01:08.517917846Z" level=info msg="StopPodSandbox for \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\"" Jul 2 00:01:08.518137 containerd[2020]: time="2024-07-02T00:01:08.518049810Z" level=info msg="TearDown network for sandbox \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\" successfully" Jul 2 00:01:08.519631 containerd[2020]: time="2024-07-02T00:01:08.518135046Z" level=info msg="StopPodSandbox for \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\" returns successfully" Jul 2 00:01:08.519631 containerd[2020]: time="2024-07-02T00:01:08.518852550Z" level=info msg="RemovePodSandbox for \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\"" Jul 2 00:01:08.519631 containerd[2020]: time="2024-07-02T00:01:08.518904762Z" level=info msg="Forcibly stopping sandbox \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\"" Jul 2 00:01:08.519631 containerd[2020]: time="2024-07-02T00:01:08.519048042Z" level=info msg="TearDown network for sandbox \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\" successfully" Jul 2 00:01:08.526932 containerd[2020]: time="2024-07-02T00:01:08.526817551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:01:08.526932 containerd[2020]: time="2024-07-02T00:01:08.526922095Z" level=info msg="RemovePodSandbox \"c5aa31cfed07aab534f0d44f58244b6801feb4d8d3eda084ddbfaf6eef2ac850\" returns successfully" Jul 2 00:01:08.892456 systemd[1]: run-containerd-runc-k8s.io-ec118ac6ff38770c520801f52fa15583e3b9f0092575974d2c43ae2a52b43900-runc.PX7iAo.mount: Deactivated successfully. Jul 2 00:01:09.469448 systemd[1]: Started sshd@13-172.31.26.136:22-147.75.109.163:51360.service - OpenSSH per-connection server daemon (147.75.109.163:51360). Jul 2 00:01:09.691768 sshd[5870]: Accepted publickey for core from 147.75.109.163 port 51360 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:09.698892 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:09.720076 systemd-logind[1992]: New session 14 of user core. Jul 2 00:01:09.726318 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:01:09.771190 containerd[2020]: time="2024-07-02T00:01:09.771099969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:09.774217 containerd[2020]: time="2024-07-02T00:01:09.774010593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 00:01:09.776713 containerd[2020]: time="2024-07-02T00:01:09.776472369Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:09.788541 containerd[2020]: time="2024-07-02T00:01:09.788116713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:09.794208 containerd[2020]: 
time="2024-07-02T00:01:09.793199805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.761607965s" Jul 2 00:01:09.794208 containerd[2020]: time="2024-07-02T00:01:09.793275405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 00:01:09.802079 containerd[2020]: time="2024-07-02T00:01:09.802016157Z" level=info msg="CreateContainer within sandbox \"1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:01:09.861637 containerd[2020]: time="2024-07-02T00:01:09.861082821Z" level=info msg="CreateContainer within sandbox \"1c4b853c04fd3e25ebcdb219ae766d076d9085d3132f9a407207b27572ca715a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a7736e59e86ab4be5e02099d2a73a8729e2c570925cdaaa1d033b61eecb04527\"" Jul 2 00:01:09.862025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215449532.mount: Deactivated successfully. Jul 2 00:01:09.867927 containerd[2020]: time="2024-07-02T00:01:09.866036001Z" level=info msg="StartContainer for \"a7736e59e86ab4be5e02099d2a73a8729e2c570925cdaaa1d033b61eecb04527\"" Jul 2 00:01:09.971043 systemd[1]: Started cri-containerd-a7736e59e86ab4be5e02099d2a73a8729e2c570925cdaaa1d033b61eecb04527.scope - libcontainer container a7736e59e86ab4be5e02099d2a73a8729e2c570925cdaaa1d033b61eecb04527. 
Jul 2 00:01:10.140759 containerd[2020]: time="2024-07-02T00:01:10.140642683Z" level=info msg="StartContainer for \"a7736e59e86ab4be5e02099d2a73a8729e2c570925cdaaa1d033b61eecb04527\" returns successfully" Jul 2 00:01:10.197289 sshd[5870]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:10.214398 systemd[1]: sshd@13-172.31.26.136:22-147.75.109.163:51360.service: Deactivated successfully. Jul 2 00:01:10.222772 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:01:10.225313 systemd-logind[1992]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:01:10.230308 systemd-logind[1992]: Removed session 14. Jul 2 00:01:10.773037 kubelet[3434]: I0702 00:01:10.772911 3434 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:01:10.773037 kubelet[3434]: I0702 00:01:10.772959 3434 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:01:15.239193 systemd[1]: Started sshd@14-172.31.26.136:22-147.75.109.163:36916.service - OpenSSH per-connection server daemon (147.75.109.163:36916). Jul 2 00:01:15.414857 sshd[5923]: Accepted publickey for core from 147.75.109.163 port 36916 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:15.417597 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:15.425190 systemd-logind[1992]: New session 15 of user core. Jul 2 00:01:15.434930 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:01:15.683838 sshd[5923]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:15.690532 systemd[1]: sshd@14-172.31.26.136:22-147.75.109.163:36916.service: Deactivated successfully. Jul 2 00:01:15.694567 systemd[1]: session-15.scope: Deactivated successfully. 
Jul 2 00:01:15.699045 systemd-logind[1992]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:01:15.702070 systemd-logind[1992]: Removed session 15. Jul 2 00:01:20.727227 systemd[1]: Started sshd@15-172.31.26.136:22-147.75.109.163:36922.service - OpenSSH per-connection server daemon (147.75.109.163:36922). Jul 2 00:01:20.916092 sshd[5967]: Accepted publickey for core from 147.75.109.163 port 36922 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:20.918830 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:20.927809 systemd-logind[1992]: New session 16 of user core. Jul 2 00:01:20.938046 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:01:21.187814 sshd[5967]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:21.194617 systemd[1]: sshd@15-172.31.26.136:22-147.75.109.163:36922.service: Deactivated successfully. Jul 2 00:01:21.200332 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:01:21.201848 systemd-logind[1992]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:01:21.203872 systemd-logind[1992]: Removed session 16. Jul 2 00:01:21.228297 systemd[1]: Started sshd@16-172.31.26.136:22-147.75.109.163:36924.service - OpenSSH per-connection server daemon (147.75.109.163:36924). Jul 2 00:01:21.414149 sshd[5979]: Accepted publickey for core from 147.75.109.163 port 36924 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:21.416851 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:21.426825 systemd-logind[1992]: New session 17 of user core. Jul 2 00:01:21.430958 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:01:21.900837 sshd[5979]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:21.906775 systemd[1]: sshd@16-172.31.26.136:22-147.75.109.163:36924.service: Deactivated successfully. 
Jul 2 00:01:21.911011 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:01:21.915271 systemd-logind[1992]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:01:21.917260 systemd-logind[1992]: Removed session 17. Jul 2 00:01:21.938200 systemd[1]: Started sshd@17-172.31.26.136:22-147.75.109.163:36928.service - OpenSSH per-connection server daemon (147.75.109.163:36928). Jul 2 00:01:22.124532 sshd[5991]: Accepted publickey for core from 147.75.109.163 port 36928 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:22.127836 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:22.137101 systemd-logind[1992]: New session 18 of user core. Jul 2 00:01:22.144930 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:01:25.254075 sshd[5991]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:25.266201 systemd[1]: sshd@17-172.31.26.136:22-147.75.109.163:36928.service: Deactivated successfully. Jul 2 00:01:25.278981 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:01:25.281829 systemd[1]: session-18.scope: Consumed 1.044s CPU time. Jul 2 00:01:25.285144 systemd-logind[1992]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:01:25.312134 systemd[1]: Started sshd@18-172.31.26.136:22-147.75.109.163:37086.service - OpenSSH per-connection server daemon (147.75.109.163:37086). Jul 2 00:01:25.314382 systemd-logind[1992]: Removed session 18. Jul 2 00:01:25.487912 sshd[6014]: Accepted publickey for core from 147.75.109.163 port 37086 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:25.490749 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:25.498883 systemd-logind[1992]: New session 19 of user core. Jul 2 00:01:25.509986 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 2 00:01:26.035163 sshd[6014]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:26.042167 systemd[1]: sshd@18-172.31.26.136:22-147.75.109.163:37086.service: Deactivated successfully. Jul 2 00:01:26.046181 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:01:26.048640 systemd-logind[1992]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:01:26.051229 systemd-logind[1992]: Removed session 19. Jul 2 00:01:26.072224 systemd[1]: Started sshd@19-172.31.26.136:22-147.75.109.163:37096.service - OpenSSH per-connection server daemon (147.75.109.163:37096). Jul 2 00:01:26.257910 sshd[6025]: Accepted publickey for core from 147.75.109.163 port 37096 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:26.260813 sshd[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:26.269794 systemd-logind[1992]: New session 20 of user core. Jul 2 00:01:26.275953 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:01:26.522592 sshd[6025]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:26.528260 systemd[1]: sshd@19-172.31.26.136:22-147.75.109.163:37096.service: Deactivated successfully. Jul 2 00:01:26.534047 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:01:26.538418 systemd-logind[1992]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:01:26.541469 systemd-logind[1992]: Removed session 20. Jul 2 00:01:31.569843 systemd[1]: Started sshd@20-172.31.26.136:22-147.75.109.163:37104.service - OpenSSH per-connection server daemon (147.75.109.163:37104). Jul 2 00:01:31.780339 sshd[6044]: Accepted publickey for core from 147.75.109.163 port 37104 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:31.783144 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:31.792808 systemd-logind[1992]: New session 21 of user core. 
Jul 2 00:01:31.797974 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:01:32.080867 sshd[6044]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:32.087605 systemd[1]: sshd@20-172.31.26.136:22-147.75.109.163:37104.service: Deactivated successfully. Jul 2 00:01:32.087844 systemd-logind[1992]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:01:32.097551 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:01:32.105709 systemd-logind[1992]: Removed session 21. Jul 2 00:01:37.122866 systemd[1]: Started sshd@21-172.31.26.136:22-147.75.109.163:35306.service - OpenSSH per-connection server daemon (147.75.109.163:35306). Jul 2 00:01:37.305082 sshd[6060]: Accepted publickey for core from 147.75.109.163 port 35306 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:37.307923 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:37.317862 systemd-logind[1992]: New session 22 of user core. Jul 2 00:01:37.325037 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:01:37.602811 sshd[6060]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:37.614237 systemd[1]: sshd@21-172.31.26.136:22-147.75.109.163:35306.service: Deactivated successfully. Jul 2 00:01:37.619639 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:01:37.625315 systemd-logind[1992]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:01:37.627978 systemd-logind[1992]: Removed session 22. 
Jul 2 00:01:42.145425 kubelet[3434]: I0702 00:01:42.145272 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vdgpm" podStartSLOduration=67.650344547 podStartE2EDuration="1m13.145246646s" podCreationTimestamp="2024-07-02 00:00:29 +0000 UTC" firstStartedPulling="2024-07-02 00:01:04.302240198 +0000 UTC m=+58.084113902" lastFinishedPulling="2024-07-02 00:01:09.797142309 +0000 UTC m=+63.579016001" observedRunningTime="2024-07-02 00:01:10.244394167 +0000 UTC m=+64.026267871" watchObservedRunningTime="2024-07-02 00:01:42.145246646 +0000 UTC m=+95.927120362" Jul 2 00:01:42.148776 kubelet[3434]: I0702 00:01:42.147584 3434 topology_manager.go:215] "Topology Admit Handler" podUID="e742115c-8034-4ea0-83d4-0b6d4f00a174" podNamespace="calico-apiserver" podName="calico-apiserver-58977cdb57-s5bx4" Jul 2 00:01:42.168866 systemd[1]: Created slice kubepods-besteffort-pode742115c_8034_4ea0_83d4_0b6d4f00a174.slice - libcontainer container kubepods-besteffort-pode742115c_8034_4ea0_83d4_0b6d4f00a174.slice. 
Jul 2 00:01:42.287416 kubelet[3434]: I0702 00:01:42.287289 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w96x6\" (UniqueName: \"kubernetes.io/projected/e742115c-8034-4ea0-83d4-0b6d4f00a174-kube-api-access-w96x6\") pod \"calico-apiserver-58977cdb57-s5bx4\" (UID: \"e742115c-8034-4ea0-83d4-0b6d4f00a174\") " pod="calico-apiserver/calico-apiserver-58977cdb57-s5bx4" Jul 2 00:01:42.287416 kubelet[3434]: I0702 00:01:42.287397 3434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e742115c-8034-4ea0-83d4-0b6d4f00a174-calico-apiserver-certs\") pod \"calico-apiserver-58977cdb57-s5bx4\" (UID: \"e742115c-8034-4ea0-83d4-0b6d4f00a174\") " pod="calico-apiserver/calico-apiserver-58977cdb57-s5bx4" Jul 2 00:01:42.391759 kubelet[3434]: E0702 00:01:42.391128 3434 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:01:42.391759 kubelet[3434]: E0702 00:01:42.391228 3434 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e742115c-8034-4ea0-83d4-0b6d4f00a174-calico-apiserver-certs podName:e742115c-8034-4ea0-83d4-0b6d4f00a174 nodeName:}" failed. No retries permitted until 2024-07-02 00:01:42.891203903 +0000 UTC m=+96.673077595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e742115c-8034-4ea0-83d4-0b6d4f00a174-calico-apiserver-certs") pod "calico-apiserver-58977cdb57-s5bx4" (UID: "e742115c-8034-4ea0-83d4-0b6d4f00a174") : secret "calico-apiserver-certs" not found Jul 2 00:01:42.640161 systemd[1]: Started sshd@22-172.31.26.136:22-147.75.109.163:49088.service - OpenSSH per-connection server daemon (147.75.109.163:49088). 
Jul 2 00:01:42.832633 sshd[6111]: Accepted publickey for core from 147.75.109.163 port 49088 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:42.835800 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:42.847484 systemd-logind[1992]: New session 23 of user core. Jul 2 00:01:42.857039 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:01:43.078579 containerd[2020]: time="2024-07-02T00:01:43.078499766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58977cdb57-s5bx4,Uid:e742115c-8034-4ea0-83d4-0b6d4f00a174,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:01:43.147880 sshd[6111]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:43.160453 systemd[1]: sshd@22-172.31.26.136:22-147.75.109.163:49088.service: Deactivated successfully. Jul 2 00:01:43.171729 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:01:43.178023 systemd-logind[1992]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:01:43.184076 systemd-logind[1992]: Removed session 23. Jul 2 00:01:43.449531 systemd-networkd[1927]: calibcb729505c5: Link UP Jul 2 00:01:43.453456 systemd-networkd[1927]: calibcb729505c5: Gained carrier Jul 2 00:01:43.460396 (udev-worker)[6142]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.217 [INFO][6123] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0 calico-apiserver-58977cdb57- calico-apiserver e742115c-8034-4ea0-83d4-0b6d4f00a174 1195 0 2024-07-02 00:01:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58977cdb57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-136 calico-apiserver-58977cdb57-s5bx4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibcb729505c5 [] []}} ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Namespace="calico-apiserver" Pod="calico-apiserver-58977cdb57-s5bx4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.217 [INFO][6123] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Namespace="calico-apiserver" Pod="calico-apiserver-58977cdb57-s5bx4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.295 [INFO][6136] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" HandleID="k8s-pod-network.2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Workload="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.372 [INFO][6136] ipam_plugin.go 264: Auto assigning IP ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" 
HandleID="k8s-pod-network.2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Workload="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebd70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-136", "pod":"calico-apiserver-58977cdb57-s5bx4", "timestamp":"2024-07-02 00:01:43.295257615 +0000 UTC"}, Hostname:"ip-172-31-26-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.373 [INFO][6136] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.373 [INFO][6136] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.373 [INFO][6136] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-136' Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.380 [INFO][6136] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" host="ip-172-31-26-136" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.389 [INFO][6136] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-136" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.399 [INFO][6136] ipam.go 489: Trying affinity for 192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.403 [INFO][6136] ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.410 [INFO][6136] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-26-136" Jul 2 00:01:43.477966 
containerd[2020]: 2024-07-02 00:01:43.410 [INFO][6136] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" host="ip-172-31-26-136" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.415 [INFO][6136] ipam.go 1685: Creating new handle: k8s-pod-network.2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114 Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.423 [INFO][6136] ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" host="ip-172-31-26-136" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.439 [INFO][6136] ipam.go 1216: Successfully claimed IPs: [192.168.122.197/26] block=192.168.122.192/26 handle="k8s-pod-network.2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" host="ip-172-31-26-136" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.440 [INFO][6136] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.197/26] handle="k8s-pod-network.2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" host="ip-172-31-26-136" Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.440 [INFO][6136] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:01:43.477966 containerd[2020]: 2024-07-02 00:01:43.440 [INFO][6136] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.122.197/26] IPv6=[] ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" HandleID="k8s-pod-network.2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Workload="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" Jul 2 00:01:43.482715 containerd[2020]: 2024-07-02 00:01:43.444 [INFO][6123] k8s.go 386: Populated endpoint ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Namespace="calico-apiserver" Pod="calico-apiserver-58977cdb57-s5bx4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0", GenerateName:"calico-apiserver-58977cdb57-", Namespace:"calico-apiserver", SelfLink:"", UID:"e742115c-8034-4ea0-83d4-0b6d4f00a174", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58977cdb57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"", Pod:"calico-apiserver-58977cdb57-s5bx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibcb729505c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:43.482715 containerd[2020]: 2024-07-02 00:01:43.444 [INFO][6123] k8s.go 387: Calico CNI using IPs: [192.168.122.197/32] ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Namespace="calico-apiserver" Pod="calico-apiserver-58977cdb57-s5bx4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" Jul 2 00:01:43.482715 containerd[2020]: 2024-07-02 00:01:43.444 [INFO][6123] dataplane_linux.go 68: Setting the host side veth name to calibcb729505c5 ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Namespace="calico-apiserver" Pod="calico-apiserver-58977cdb57-s5bx4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" Jul 2 00:01:43.482715 containerd[2020]: 2024-07-02 00:01:43.452 [INFO][6123] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Namespace="calico-apiserver" Pod="calico-apiserver-58977cdb57-s5bx4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" Jul 2 00:01:43.482715 containerd[2020]: 2024-07-02 00:01:43.453 [INFO][6123] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Namespace="calico-apiserver" Pod="calico-apiserver-58977cdb57-s5bx4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0", GenerateName:"calico-apiserver-58977cdb57-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"e742115c-8034-4ea0-83d4-0b6d4f00a174", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58977cdb57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-136", ContainerID:"2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114", Pod:"calico-apiserver-58977cdb57-s5bx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibcb729505c5", MAC:"da:8c:6d:04:f1:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:01:43.482715 containerd[2020]: 2024-07-02 00:01:43.472 [INFO][6123] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114" Namespace="calico-apiserver" Pod="calico-apiserver-58977cdb57-s5bx4" WorkloadEndpoint="ip--172--31--26--136-k8s-calico--apiserver--58977cdb57--s5bx4-eth0" Jul 2 00:01:43.563413 containerd[2020]: time="2024-07-02T00:01:43.562181765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:43.563413 containerd[2020]: time="2024-07-02T00:01:43.562686977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:43.563413 containerd[2020]: time="2024-07-02T00:01:43.562732769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:43.563413 containerd[2020]: time="2024-07-02T00:01:43.562818905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:43.644000 systemd[1]: Started cri-containerd-2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114.scope - libcontainer container 2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114. Jul 2 00:01:43.901309 containerd[2020]: time="2024-07-02T00:01:43.901150794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58977cdb57-s5bx4,Uid:e742115c-8034-4ea0-83d4-0b6d4f00a174,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114\"" Jul 2 00:01:43.905128 containerd[2020]: time="2024-07-02T00:01:43.904845510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:01:44.736876 systemd-networkd[1927]: calibcb729505c5: Gained IPv6LL Jul 2 00:01:46.408388 containerd[2020]: time="2024-07-02T00:01:46.408236779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:46.410963 containerd[2020]: time="2024-07-02T00:01:46.410880823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jul 2 00:01:46.415633 containerd[2020]: time="2024-07-02T00:01:46.414898699Z" level=info msg="ImageCreate event 
name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:46.422390 containerd[2020]: time="2024-07-02T00:01:46.422072047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:46.425891 containerd[2020]: time="2024-07-02T00:01:46.425826511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.520902173s" Jul 2 00:01:46.426259 containerd[2020]: time="2024-07-02T00:01:46.426079195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 00:01:46.436477 containerd[2020]: time="2024-07-02T00:01:46.436191751Z" level=info msg="CreateContainer within sandbox \"2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:01:46.496491 systemd[1]: run-containerd-runc-k8s.io-64bff3b825fd6da28b667fa5c9e2336da4a06c370ce383c40c04f733e4dfddda-runc.8M4jmE.mount: Deactivated successfully. 
Jul 2 00:01:46.503634 containerd[2020]: time="2024-07-02T00:01:46.502297663Z" level=info msg="CreateContainer within sandbox \"2d99bb4e10f32aa3c85d6b41b7b20d571e6515a32cdfbcd958f43722aa75d114\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"915730acc6c8548e743eb3261e542c92a7c5ee8442d58e7a1d99b03363331b9e\"" Jul 2 00:01:46.506727 containerd[2020]: time="2024-07-02T00:01:46.506346199Z" level=info msg="StartContainer for \"915730acc6c8548e743eb3261e542c92a7c5ee8442d58e7a1d99b03363331b9e\"" Jul 2 00:01:46.628465 systemd[1]: Started cri-containerd-915730acc6c8548e743eb3261e542c92a7c5ee8442d58e7a1d99b03363331b9e.scope - libcontainer container 915730acc6c8548e743eb3261e542c92a7c5ee8442d58e7a1d99b03363331b9e. Jul 2 00:01:46.721015 containerd[2020]: time="2024-07-02T00:01:46.719557676Z" level=info msg="StartContainer for \"915730acc6c8548e743eb3261e542c92a7c5ee8442d58e7a1d99b03363331b9e\" returns successfully" Jul 2 00:01:47.644777 ntpd[1986]: Listen normally on 13 calibcb729505c5 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 2 00:01:47.645301 ntpd[1986]: 2 Jul 00:01:47 ntpd[1986]: Listen normally on 13 calibcb729505c5 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 2 00:01:48.193923 systemd[1]: Started sshd@23-172.31.26.136:22-147.75.109.163:49090.service - OpenSSH per-connection server daemon (147.75.109.163:49090). Jul 2 00:01:48.393637 sshd[6271]: Accepted publickey for core from 147.75.109.163 port 49090 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:48.396981 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:48.407077 systemd-logind[1992]: New session 24 of user core. Jul 2 00:01:48.417281 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:01:48.716978 sshd[6271]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:48.726197 systemd[1]: sshd@23-172.31.26.136:22-147.75.109.163:49090.service: Deactivated successfully. 
Jul 2 00:01:48.732571 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:01:48.734574 systemd-logind[1992]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:01:48.737068 systemd-logind[1992]: Removed session 24. Jul 2 00:01:49.335546 kubelet[3434]: I0702 00:01:49.335482 3434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:01:49.542130 kubelet[3434]: I0702 00:01:49.541426 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58977cdb57-s5bx4" podStartSLOduration=5.015511037 podStartE2EDuration="7.54140365s" podCreationTimestamp="2024-07-02 00:01:42 +0000 UTC" firstStartedPulling="2024-07-02 00:01:43.903956406 +0000 UTC m=+97.685830086" lastFinishedPulling="2024-07-02 00:01:46.429849007 +0000 UTC m=+100.211722699" observedRunningTime="2024-07-02 00:01:47.347556631 +0000 UTC m=+101.129430347" watchObservedRunningTime="2024-07-02 00:01:49.54140365 +0000 UTC m=+103.323277354" Jul 2 00:01:53.762279 systemd[1]: Started sshd@24-172.31.26.136:22-147.75.109.163:57296.service - OpenSSH per-connection server daemon (147.75.109.163:57296). Jul 2 00:01:53.972759 sshd[6296]: Accepted publickey for core from 147.75.109.163 port 57296 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:53.978791 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:53.992455 systemd-logind[1992]: New session 25 of user core. Jul 2 00:01:54.071974 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:01:54.379162 sshd[6296]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:54.389820 systemd[1]: sshd@24-172.31.26.136:22-147.75.109.163:57296.service: Deactivated successfully. Jul 2 00:01:54.394180 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:01:54.397579 systemd-logind[1992]: Session 25 logged out. Waiting for processes to exit. 
Jul 2 00:01:54.401012 systemd-logind[1992]: Removed session 25. Jul 2 00:01:59.421201 systemd[1]: Started sshd@25-172.31.26.136:22-147.75.109.163:57306.service - OpenSSH per-connection server daemon (147.75.109.163:57306). Jul 2 00:01:59.608090 sshd[6312]: Accepted publickey for core from 147.75.109.163 port 57306 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:01:59.611724 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:01:59.623929 systemd-logind[1992]: New session 26 of user core. Jul 2 00:01:59.630989 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:01:59.890008 sshd[6312]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:59.912954 systemd-logind[1992]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:01:59.917592 systemd[1]: sshd@25-172.31.26.136:22-147.75.109.163:57306.service: Deactivated successfully. Jul 2 00:01:59.928201 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:01:59.933338 systemd-logind[1992]: Removed session 26.