Jul 2 00:00:26.208349 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 2 00:00:26.211368 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 00:00:26.211401 kernel: KASLR disabled due to lack of seed
Jul 2 00:00:26.211419 kernel: efi: EFI v2.7 by EDK II
Jul 2 00:00:26.211435 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18
Jul 2 00:00:26.211450 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:00:26.211481 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 2 00:00:26.211503 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 2 00:00:26.211520 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 00:00:26.211536 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 2 00:00:26.211560 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 00:00:26.211576 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 2 00:00:26.211592 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 2 00:00:26.211609 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 2 00:00:26.211628 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 00:00:26.211651 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 2 00:00:26.211669 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 2 00:00:26.211686 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 2 00:00:26.211705 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 2 00:00:26.211722 kernel: printk: bootconsole [uart0] enabled
Jul 2 00:00:26.211739 kernel: NUMA: Failed to initialise from firmware
Jul 2 00:00:26.211757 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:00:26.211774 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jul 2 00:00:26.211791 kernel: Zone ranges:
Jul 2 00:00:26.211807 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jul 2 00:00:26.211824 kernel:   DMA32    empty
Jul 2 00:00:26.211847 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 2 00:00:26.211864 kernel: Movable zone start for each node
Jul 2 00:00:26.211880 kernel: Early memory node ranges
Jul 2 00:00:26.211897 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 2 00:00:26.211913 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 2 00:00:26.211932 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jul 2 00:00:26.211949 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 2 00:00:26.211966 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 2 00:00:26.211982 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 2 00:00:26.211998 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 2 00:00:26.212014 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 2 00:00:26.212030 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:00:26.212052 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 2 00:00:26.212069 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:00:26.212094 kernel: psci: PSCIv1.0 detected in firmware.
Jul 2 00:00:26.212112 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:00:26.212129 kernel: psci: Trusted OS migration not required
Jul 2 00:00:26.212152 kernel: psci: SMC Calling Convention v1.1
Jul 2 00:00:26.212171 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 00:00:26.212188 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 00:00:26.212206 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 00:00:26.212224 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:00:26.212241 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:00:26.212258 kernel: CPU features: detected: Spectre-v2
Jul 2 00:00:26.212275 kernel: CPU features: detected: Spectre-v3a
Jul 2 00:00:26.212293 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:00:26.212310 kernel: CPU features: detected: ARM erratum 1742098
Jul 2 00:00:26.212327 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 2 00:00:26.212373 kernel: alternatives: applying boot alternatives
Jul 2 00:00:26.212405 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 2 00:00:26.212425 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:00:26.212444 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:00:26.212462 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:00:26.212479 kernel: Fallback order for Node 0: 0
Jul 2 00:00:26.212497 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Jul 2 00:00:26.212515 kernel: Policy zone: Normal
Jul 2 00:00:26.212533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:00:26.212550 kernel: software IO TLB: area num 2.
Jul 2 00:00:26.212568 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 2 00:00:26.212595 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Jul 2 00:00:26.212614 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:00:26.212632 kernel: trace event string verifier disabled
Jul 2 00:00:26.212650 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:00:26.212668 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:00:26.212686 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:00:26.212704 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:00:26.212722 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:00:26.212740 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:00:26.212757 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:00:26.212774 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:00:26.212797 kernel: GICv3: 96 SPIs implemented
Jul 2 00:00:26.212815 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:00:26.212832 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:00:26.212849 kernel: GICv3: GICv3 features: 16 PPIs
Jul 2 00:00:26.212867 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 2 00:00:26.212884 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 2 00:00:26.212901 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 00:00:26.212919 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 00:00:26.212937 kernel: GICv3: using LPI property table @0x00000004000e0000
Jul 2 00:00:26.212955 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 2 00:00:26.212972 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Jul 2 00:00:26.212990 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:00:26.213012 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 2 00:00:26.213030 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 2 00:00:26.213048 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 2 00:00:26.213065 kernel: Console: colour dummy device 80x25
Jul 2 00:00:26.213084 kernel: printk: console [tty1] enabled
Jul 2 00:00:26.213101 kernel: ACPI: Core revision 20230628
Jul 2 00:00:26.213119 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 2 00:00:26.213137 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:00:26.213155 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:00:26.213172 kernel: SELinux: Initializing.
Jul 2 00:00:26.213195 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:00:26.213213 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:00:26.213231 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:00:26.213248 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:00:26.213266 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:00:26.213283 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:00:26.213301 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 2 00:00:26.213318 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 2 00:00:26.213336 kernel: Remapping and enabling EFI services.
Jul 2 00:00:26.213401 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:00:26.213422 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:00:26.213440 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 2 00:00:26.213458 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Jul 2 00:00:26.213476 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 2 00:00:26.213578 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:00:26.213754 kernel: SMP: Total of 2 processors activated.
Jul 2 00:00:26.213772 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:00:26.213789 kernel: CPU features: detected: 32-bit EL1 Support
Jul 2 00:00:26.213814 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:00:26.213832 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:00:26.213862 kernel: alternatives: applying system-wide alternatives
Jul 2 00:00:26.213885 kernel: devtmpfs: initialized
Jul 2 00:00:26.213903 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:00:26.213922 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:00:26.213940 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:00:26.213959 kernel: SMBIOS 3.0.0 present.
Jul 2 00:00:26.213977 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 2 00:00:26.214001 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:00:26.214019 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:00:26.214038 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:00:26.214056 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:00:26.214075 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:00:26.214093 kernel: audit: type=2000 audit(0.300:1): state=initialized audit_enabled=0 res=1
Jul 2 00:00:26.214111 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:00:26.214134 kernel: cpuidle: using governor menu
Jul 2 00:00:26.214153 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:00:26.214171 kernel: ASID allocator initialised with 65536 entries
Jul 2 00:00:26.214189 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:00:26.214207 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:00:26.214225 kernel: Modules: 17600 pages in range for non-PLT usage
Jul 2 00:00:26.214244 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 00:00:26.214262 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:00:26.214280 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:00:26.214303 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:00:26.214322 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 00:00:26.214340 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:00:26.214409 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:00:26.214430 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:00:26.214449 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 00:00:26.214467 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:00:26.214485 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:00:26.214503 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:00:26.214528 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:00:26.214547 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:00:26.214565 kernel: ACPI: Interpreter enabled
Jul 2 00:00:26.214583 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:00:26.214601 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 00:00:26.214620 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 2 00:00:26.214973 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:00:26.215186 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 00:00:26.217586 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 00:00:26.217824 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 2 00:00:26.218028 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 2 00:00:26.218055 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 2 00:00:26.218074 kernel: acpiphp: Slot [1] registered
Jul 2 00:00:26.218093 kernel: acpiphp: Slot [2] registered
Jul 2 00:00:26.218112 kernel: acpiphp: Slot [3] registered
Jul 2 00:00:26.218131 kernel: acpiphp: Slot [4] registered
Jul 2 00:00:26.218149 kernel: acpiphp: Slot [5] registered
Jul 2 00:00:26.218177 kernel: acpiphp: Slot [6] registered
Jul 2 00:00:26.218196 kernel: acpiphp: Slot [7] registered
Jul 2 00:00:26.218214 kernel: acpiphp: Slot [8] registered
Jul 2 00:00:26.218232 kernel: acpiphp: Slot [9] registered
Jul 2 00:00:26.218250 kernel: acpiphp: Slot [10] registered
Jul 2 00:00:26.218268 kernel: acpiphp: Slot [11] registered
Jul 2 00:00:26.218287 kernel: acpiphp: Slot [12] registered
Jul 2 00:00:26.218305 kernel: acpiphp: Slot [13] registered
Jul 2 00:00:26.218323 kernel: acpiphp: Slot [14] registered
Jul 2 00:00:26.218346 kernel: acpiphp: Slot [15] registered
Jul 2 00:00:26.218394 kernel: acpiphp: Slot [16] registered
Jul 2 00:00:26.218413 kernel: acpiphp: Slot [17] registered
Jul 2 00:00:26.218431 kernel: acpiphp: Slot [18] registered
Jul 2 00:00:26.218450 kernel: acpiphp: Slot [19] registered
Jul 2 00:00:26.218468 kernel: acpiphp: Slot [20] registered
Jul 2 00:00:26.218486 kernel: acpiphp: Slot [21] registered
Jul 2 00:00:26.218505 kernel: acpiphp: Slot [22] registered
Jul 2 00:00:26.218523 kernel: acpiphp: Slot [23] registered
Jul 2 00:00:26.218542 kernel: acpiphp: Slot [24] registered
Jul 2 00:00:26.218566 kernel: acpiphp: Slot [25] registered
Jul 2 00:00:26.218585 kernel: acpiphp: Slot [26] registered
Jul 2 00:00:26.218603 kernel: acpiphp: Slot [27] registered
Jul 2 00:00:26.218621 kernel: acpiphp: Slot [28] registered
Jul 2 00:00:26.218639 kernel: acpiphp: Slot [29] registered
Jul 2 00:00:26.218657 kernel: acpiphp: Slot [30] registered
Jul 2 00:00:26.218676 kernel: acpiphp: Slot [31] registered
Jul 2 00:00:26.218694 kernel: PCI host bridge to bus 0000:00
Jul 2 00:00:26.218917 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 2 00:00:26.219115 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 00:00:26.219325 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:00:26.221228 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 2 00:00:26.221543 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 2 00:00:26.221838 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 2 00:00:26.222060 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 2 00:00:26.222296 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 00:00:26.222636 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 2 00:00:26.222849 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:00:26.223079 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 00:00:26.223284 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 2 00:00:26.223546 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 2 00:00:26.223753 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 2 00:00:26.223966 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:00:26.224172 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 2 00:00:26.224404 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 2 00:00:26.224646 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 2 00:00:26.226920 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 2 00:00:26.227182 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 2 00:00:26.227438 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 2 00:00:26.227640 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 00:00:26.227823 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:00:26.227849 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 00:00:26.227868 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 00:00:26.227887 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 00:00:26.227906 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 00:00:26.227924 kernel: iommu: Default domain type: Translated
Jul 2 00:00:26.227942 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:00:26.227967 kernel: efivars: Registered efivars operations
Jul 2 00:00:26.227985 kernel: vgaarb: loaded
Jul 2 00:00:26.228004 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:00:26.228022 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:00:26.228040 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:00:26.228058 kernel: pnp: PnP ACPI init
Jul 2 00:00:26.228262 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 2 00:00:26.228290 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 00:00:26.228315 kernel: NET: Registered PF_INET protocol family
Jul 2 00:00:26.228334 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:00:26.228372 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:00:26.228396 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:00:26.228414 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:00:26.228433 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:00:26.228451 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:00:26.228470 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:00:26.228488 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:00:26.228514 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:00:26.228532 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:00:26.228551 kernel: kvm [1]: HYP mode not available
Jul 2 00:00:26.228570 kernel: Initialise system trusted keyrings
Jul 2 00:00:26.228589 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:00:26.228608 kernel: Key type asymmetric registered
Jul 2 00:00:26.228626 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:00:26.228644 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 00:00:26.228662 kernel: io scheduler mq-deadline registered
Jul 2 00:00:26.228685 kernel: io scheduler kyber registered
Jul 2 00:00:26.228704 kernel: io scheduler bfq registered
Jul 2 00:00:26.228964 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 2 00:00:26.228994 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 00:00:26.229013 kernel: ACPI: button: Power Button [PWRB]
Jul 2 00:00:26.229032 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 2 00:00:26.229051 kernel: ACPI: button: Sleep Button [SLPB]
Jul 2 00:00:26.229070 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:00:26.229097 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 2 00:00:26.229309 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 2 00:00:26.229337 kernel: printk: console [ttyS0] disabled
Jul 2 00:00:26.229383 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 2 00:00:26.229404 kernel: printk: console [ttyS0] enabled
Jul 2 00:00:26.229423 kernel: printk: bootconsole [uart0] disabled
Jul 2 00:00:26.229442 kernel: thunder_xcv, ver 1.0
Jul 2 00:00:26.229461 kernel: thunder_bgx, ver 1.0
Jul 2 00:00:26.229479 kernel: nicpf, ver 1.0
Jul 2 00:00:26.229497 kernel: nicvf, ver 1.0
Jul 2 00:00:26.229735 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:00:26.229935 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:00:25 UTC (1719878425)
Jul 2 00:00:26.229962 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:00:26.229981 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 2 00:00:26.230000 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 00:00:26.230019 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 00:00:26.230037 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:00:26.230056 kernel: Segment Routing with IPv6
Jul 2 00:00:26.230082 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:00:26.230101 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:00:26.230119 kernel: Key type dns_resolver registered
Jul 2 00:00:26.230137 kernel: registered taskstats version 1
Jul 2 00:00:26.230156 kernel: Loading compiled-in X.509 certificates
Jul 2 00:00:26.230174 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 00:00:26.230192 kernel: Key type .fscrypt registered
Jul 2 00:00:26.230210 kernel: Key type fscrypt-provisioning registered
Jul 2 00:00:26.230229 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:00:26.230252 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:00:26.230271 kernel: ima: No architecture policies found
Jul 2 00:00:26.230289 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:00:26.230307 kernel: clk: Disabling unused clocks
Jul 2 00:00:26.230326 kernel: Freeing unused kernel memory: 39040K
Jul 2 00:00:26.230345 kernel: Run /init as init process
Jul 2 00:00:26.230414 kernel:   with arguments:
Jul 2 00:00:26.230435 kernel:     /init
Jul 2 00:00:26.230453 kernel:   with environment:
Jul 2 00:00:26.230480 kernel:     HOME=/
Jul 2 00:00:26.230499 kernel:     TERM=linux
Jul 2 00:00:26.230518 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:00:26.230541 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:00:26.230566 systemd[1]: Detected virtualization amazon.
Jul 2 00:00:26.230587 systemd[1]: Detected architecture arm64.
Jul 2 00:00:26.230606 systemd[1]: Running in initrd.
Jul 2 00:00:26.230625 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:00:26.230650 systemd[1]: Hostname set to .
Jul 2 00:00:26.230672 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:00:26.230692 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:00:26.230713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:00:26.230733 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:00:26.230755 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:00:26.230777 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:00:26.230802 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:00:26.230823 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:00:26.230846 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:00:26.230866 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:00:26.230887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:00:26.230907 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:00:26.230927 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:00:26.230953 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:00:26.230973 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:00:26.230993 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:00:26.231013 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:00:26.231033 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:00:26.231053 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:00:26.231073 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:00:26.231093 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:00:26.231113 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:00:26.231138 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:00:26.231159 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:00:26.231179 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:00:26.231199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:00:26.231219 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:00:26.231239 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:00:26.231259 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:00:26.231279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:00:26.231326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:00:26.231393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:00:26.231421 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:00:26.231441 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:00:26.231463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:00:26.231490 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:00:26.231556 systemd-journald[251]: Collecting audit messages is disabled.
Jul 2 00:00:26.231602 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:00:26.231623 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:00:26.231648 kernel: Bridge firewalling registered
Jul 2 00:00:26.231668 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:00:26.231688 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:00:26.231708 systemd-journald[251]: Journal started
Jul 2 00:00:26.231746 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2d1599b47e38abb8c071a3e83b2be8) is 8.0M, max 75.3M, 67.3M free.
Jul 2 00:00:26.171086 systemd-modules-load[252]: Inserted module 'overlay'
Jul 2 00:00:26.220771 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jul 2 00:00:26.237780 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:00:26.252723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:00:26.265652 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:00:26.271674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:00:26.284403 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:00:26.296968 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:00:26.312839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:00:26.324498 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:00:26.330653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:00:26.354783 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:00:26.370464 dracut-cmdline[279]: dracut-dracut-053
Jul 2 00:00:26.379421 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 2 00:00:26.450658 systemd-resolved[287]: Positive Trust Anchors:
Jul 2 00:00:26.450685 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:00:26.450747 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:00:26.561649 kernel: SCSI subsystem initialized
Jul 2 00:00:26.569499 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:00:26.582475 kernel: iscsi: registered transport (tcp)
Jul 2 00:00:26.604873 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:00:26.604949 kernel: QLogic iSCSI HBA Driver
Jul 2 00:00:26.692303 kernel: random: crng init done
Jul 2 00:00:26.690552 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 2 00:00:26.692703 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:00:26.698266 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:00:26.722110 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:00:26.731642 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:00:26.773624 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:00:26.773716 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:00:26.773744 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:00:26.840401 kernel: raid6: neonx8 gen() 6678 MB/s
Jul 2 00:00:26.857388 kernel: raid6: neonx4 gen() 6496 MB/s
Jul 2 00:00:26.874387 kernel: raid6: neonx2 gen() 5431 MB/s
Jul 2 00:00:26.891387 kernel: raid6: neonx1 gen() 3946 MB/s
Jul 2 00:00:26.908388 kernel: raid6: int64x8 gen() 3804 MB/s
Jul 2 00:00:26.925386 kernel: raid6: int64x4 gen() 3719 MB/s
Jul 2 00:00:26.942386 kernel: raid6: int64x2 gen() 3596 MB/s
Jul 2 00:00:26.960069 kernel: raid6: int64x1 gen() 2771 MB/s
Jul 2 00:00:26.960103 kernel: raid6: using algorithm neonx8 gen() 6678 MB/s
Jul 2 00:00:26.978049 kernel: raid6: .... xor() 4929 MB/s, rmw enabled
Jul 2 00:00:26.978086 kernel: raid6: using neon recovery algorithm
Jul 2 00:00:26.985389 kernel: xor: measuring software checksum speed
Jul 2 00:00:26.987387 kernel: 8regs : 11035 MB/sec
Jul 2 00:00:26.989384 kernel: 32regs : 11922 MB/sec
Jul 2 00:00:26.991395 kernel: arm64_neon : 9604 MB/sec
Jul 2 00:00:26.991428 kernel: xor: using function: 32regs (11922 MB/sec)
Jul 2 00:00:27.077750 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:00:27.095739 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:00:27.105682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:00:27.144652 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jul 2 00:00:27.153157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:00:27.173729 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:00:27.202742 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Jul 2 00:00:27.258291 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:00:27.267706 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:00:27.391429 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:00:27.416725 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:00:27.468479 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:00:27.488638 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:00:27.491860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:00:27.494528 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:00:27.508748 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:00:27.556875 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:00:27.575956 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 00:00:27.576029 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 2 00:00:27.606566 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 2 00:00:27.606823 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 2 00:00:27.607056 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:17:15:64:5d:53
Jul 2 00:00:27.603572 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:00:27.603790 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:00:27.608143 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:00:27.612098 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:00:27.612511 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:00:27.647553 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 2 00:00:27.647591 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 2 00:00:27.612809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:00:27.615526 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:00:27.654267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:00:27.662410 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 2 00:00:27.673146 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:00:27.673218 kernel: GPT:9289727 != 16777215
Jul 2 00:00:27.673244 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:00:27.674805 kernel: GPT:9289727 != 16777215
Jul 2 00:00:27.674839 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:00:27.675625 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:00:27.678148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:00:27.693776 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:00:27.725154 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:00:27.881956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 2 00:00:27.910452 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (526)
Jul 2 00:00:27.945406 kernel: BTRFS: device fsid 2e7aff7f-b51e-4094-8f16-54690a62fb17 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (534)
Jul 2 00:00:27.963807 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 2 00:00:28.017453 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 00:00:28.032605 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 2 00:00:28.034825 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 2 00:00:28.065639 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:00:28.079411 disk-uuid[661]: Primary Header is updated.
Jul 2 00:00:28.079411 disk-uuid[661]: Secondary Entries is updated.
Jul 2 00:00:28.079411 disk-uuid[661]: Secondary Header is updated.
Jul 2 00:00:28.088401 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:00:28.100397 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:00:28.108389 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:00:29.110330 disk-uuid[662]: The operation has completed successfully.
Jul 2 00:00:29.112900 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:00:29.287054 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:00:29.289137 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:00:29.321651 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:00:29.339222 sh[1005]: Success
Jul 2 00:00:29.366395 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 00:00:29.473236 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:00:29.486577 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:00:29.491612 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:00:29.510388 kernel: BTRFS info (device dm-0): first mount of filesystem 2e7aff7f-b51e-4094-8f16-54690a62fb17
Jul 2 00:00:29.510454 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:00:29.510481 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:00:29.512015 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:00:29.514192 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:00:29.592399 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 2 00:00:29.633544 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:00:29.637442 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:00:29.646660 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:00:29.663017 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:00:29.689609 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:29.689706 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:00:29.691063 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:00:29.698216 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:00:29.713073 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:00:29.716616 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:29.734160 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:00:29.742686 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:00:29.841422 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:00:29.859637 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:00:29.901854 systemd-networkd[1197]: lo: Link UP
Jul 2 00:00:29.903391 systemd-networkd[1197]: lo: Gained carrier
Jul 2 00:00:29.907263 systemd-networkd[1197]: Enumeration completed
Jul 2 00:00:29.908009 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:00:29.908016 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:00:29.909117 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:00:29.919418 systemd[1]: Reached target network.target - Network.
Jul 2 00:00:29.925454 systemd-networkd[1197]: eth0: Link UP
Jul 2 00:00:29.925469 systemd-networkd[1197]: eth0: Gained carrier
Jul 2 00:00:29.925487 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:00:29.938503 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.25.138/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:00:30.043127 ignition[1120]: Ignition 2.18.0
Jul 2 00:00:30.043149 ignition[1120]: Stage: fetch-offline
Jul 2 00:00:30.043713 ignition[1120]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:30.043737 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:30.044878 ignition[1120]: Ignition finished successfully
Jul 2 00:00:30.053818 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:00:30.063666 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:00:30.096711 ignition[1207]: Ignition 2.18.0
Jul 2 00:00:30.096737 ignition[1207]: Stage: fetch
Jul 2 00:00:30.097341 ignition[1207]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:30.097408 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:30.097552 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:30.106588 ignition[1207]: PUT result: OK
Jul 2 00:00:30.109464 ignition[1207]: parsed url from cmdline: ""
Jul 2 00:00:30.109485 ignition[1207]: no config URL provided
Jul 2 00:00:30.109535 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:00:30.109564 ignition[1207]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:00:30.109596 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:30.111171 ignition[1207]: PUT result: OK
Jul 2 00:00:30.111305 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 2 00:00:30.118025 ignition[1207]: GET result: OK
Jul 2 00:00:30.121121 ignition[1207]: parsing config with SHA512: e97b3ac5975bbf59445d04f4b53bcabf12c771ee38619d479b8fb6528194573305e7ffd3cfb1bd198acb464e9b0cd0fa22c4ef7bae96ea718d0e5666880d4955
Jul 2 00:00:30.128591 unknown[1207]: fetched base config from "system"
Jul 2 00:00:30.128612 unknown[1207]: fetched base config from "system"
Jul 2 00:00:30.128626 unknown[1207]: fetched user config from "aws"
Jul 2 00:00:30.135849 ignition[1207]: fetch: fetch complete
Jul 2 00:00:30.139961 ignition[1207]: fetch: fetch passed
Jul 2 00:00:30.140921 ignition[1207]: Ignition finished successfully
Jul 2 00:00:30.145856 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:00:30.153636 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:00:30.188298 ignition[1214]: Ignition 2.18.0
Jul 2 00:00:30.188834 ignition[1214]: Stage: kargs
Jul 2 00:00:30.189456 ignition[1214]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:30.189480 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:30.190385 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:30.192540 ignition[1214]: PUT result: OK
Jul 2 00:00:30.200685 ignition[1214]: kargs: kargs passed
Jul 2 00:00:30.200778 ignition[1214]: Ignition finished successfully
Jul 2 00:00:30.206725 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:00:30.212701 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:00:30.248961 ignition[1221]: Ignition 2.18.0
Jul 2 00:00:30.248990 ignition[1221]: Stage: disks
Jul 2 00:00:30.250498 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:30.250524 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:30.250663 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:30.252547 ignition[1221]: PUT result: OK
Jul 2 00:00:30.262289 ignition[1221]: disks: disks passed
Jul 2 00:00:30.262474 ignition[1221]: Ignition finished successfully
Jul 2 00:00:30.271745 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:00:30.272622 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:00:30.278329 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:00:30.282296 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:00:30.301872 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:00:30.303835 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:00:30.325734 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:00:30.372531 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:00:30.381784 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:00:30.394978 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:00:30.482380 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 95038baa-e9f1-4207-86a5-38a4ce3cff7d r/w with ordered data mode. Quota mode: none.
Jul 2 00:00:30.483502 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:00:30.487286 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:00:30.510037 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:00:30.517261 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:00:30.521340 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:00:30.522820 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:00:30.522871 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:00:30.547396 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249)
Jul 2 00:00:30.550860 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:30.550913 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:00:30.550941 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:00:30.558081 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:00:30.565071 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:00:30.570663 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:00:30.576785 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:00:30.936684 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:00:30.945565 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:00:30.954214 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:00:30.963148 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:00:31.254846 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:00:31.264573 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:00:31.277635 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:00:31.294291 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:00:31.296229 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:31.329422 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:00:31.345979 ignition[1362]: INFO : Ignition 2.18.0
Jul 2 00:00:31.345979 ignition[1362]: INFO : Stage: mount
Jul 2 00:00:31.349106 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:31.349106 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:31.353082 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:31.356151 ignition[1362]: INFO : PUT result: OK
Jul 2 00:00:31.361337 ignition[1362]: INFO : mount: mount passed
Jul 2 00:00:31.362900 ignition[1362]: INFO : Ignition finished successfully
Jul 2 00:00:31.366419 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:00:31.383220 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:00:31.404793 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:00:31.424383 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1374)
Jul 2 00:00:31.428988 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:00:31.429036 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:00:31.429063 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:00:31.433380 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:00:31.437843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:00:31.470133 ignition[1391]: INFO : Ignition 2.18.0
Jul 2 00:00:31.470133 ignition[1391]: INFO : Stage: files
Jul 2 00:00:31.474371 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:31.474371 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:31.474371 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:31.474371 ignition[1391]: INFO : PUT result: OK
Jul 2 00:00:31.484483 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:00:31.487013 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:00:31.487013 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:00:31.527740 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:00:31.530623 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:00:31.533413 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:00:31.533208 unknown[1391]: wrote ssh authorized keys file for user: core
Jul 2 00:00:31.546798 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:00:31.550408 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 00:00:31.605759 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:00:31.659572 systemd-networkd[1197]: eth0: Gained IPv6LL
Jul 2 00:00:31.693756 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:00:31.697628 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:00:31.732957 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:00:31.732957 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:00:31.732957 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jul 2 00:00:32.264342 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 00:00:33.838809 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:00:33.838809 ignition[1391]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 00:00:33.845805 ignition[1391]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:00:33.845805 ignition[1391]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:00:33.845805 ignition[1391]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 00:00:33.845805 ignition[1391]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:00:33.845805 ignition[1391]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:00:33.845805 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:00:33.845805 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:00:33.845805 ignition[1391]: INFO : files: files passed
Jul 2 00:00:33.845805 ignition[1391]: INFO : Ignition finished successfully
Jul 2 00:00:33.870049 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:00:33.884084 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:00:33.889991 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:00:33.897126 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:00:33.898934 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:00:33.924987 initrd-setup-root-after-ignition[1420]: grep:
Jul 2 00:00:33.924987 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:00:33.929694 initrd-setup-root-after-ignition[1420]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:00:33.929694 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:00:33.936992 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:00:33.943570 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:00:33.952681 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:00:34.013438 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:00:34.014465 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:00:34.017922 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:00:34.020741 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:00:34.021111 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:00:34.043761 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:00:34.070903 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:00:34.079676 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:00:34.118408 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:00:34.119280 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:00:34.124950 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:00:34.127096 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:00:34.129649 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:00:34.132915 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:00:34.133015 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:00:34.135343 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:00:34.137191 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:00:34.138807 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:00:34.140728 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:00:34.142814 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:00:34.144843 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:00:34.146748 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:00:34.151048 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:00:34.154325 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:00:34.157758 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:00:34.161163 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:00:34.161318 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:00:34.174911 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:00:34.188783 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:00:34.192901 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:00:34.196412 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:00:34.200971 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:00:34.201079 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:00:34.206759 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:00:34.206860 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:00:34.211001 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:00:34.211086 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:00:34.232646 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:00:34.236525 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:00:34.243313 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:00:34.246388 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:00:34.251326 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:00:34.251493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:00:34.282397 ignition[1445]: INFO : Ignition 2.18.0
Jul 2 00:00:34.282397 ignition[1445]: INFO : Stage: umount
Jul 2 00:00:34.282397 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:00:34.291727 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:00:34.291727 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:00:34.291727 ignition[1445]: INFO : PUT result: OK
Jul 2 00:00:34.289564 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:00:34.302805 ignition[1445]: INFO : umount: umount passed
Jul 2 00:00:34.302805 ignition[1445]: INFO : Ignition finished successfully
Jul 2 00:00:34.307706 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:00:34.310591 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:00:34.315273 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:00:34.317497 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:00:34.322269 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:00:34.322477 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:00:34.327469 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:00:34.327570 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:00:34.329440 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:00:34.329517 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:00:34.331324 systemd[1]: Stopped target network.target - Network.
Jul 2 00:00:34.340349 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:00:34.340457 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:00:34.342652 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:00:34.344697 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:00:34.348427 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:00:34.349529 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:00:34.350268 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:00:34.351015 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:00:34.351092 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:00:34.354525 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:00:34.354599 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:00:34.355147 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:00:34.355226 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:00:34.355878 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 00:00:34.355955 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 00:00:34.358641 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:00:34.358716 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:00:34.359552 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:00:34.360085 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:00:34.403474 systemd-networkd[1197]: eth0: DHCPv6 lease lost Jul 2 00:00:34.406594 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:00:34.406837 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:00:34.410646 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:00:34.410721 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:00:34.425798 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:00:34.430728 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 2 00:00:34.430842 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:00:34.435183 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:00:34.442221 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:00:34.444724 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:00:34.461297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:00:34.461553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:00:34.471617 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:00:34.471735 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:00:34.473763 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 00:00:34.473840 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:00:34.476499 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:00:34.476775 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:00:34.480119 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:00:34.480257 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:00:34.485968 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:00:34.486045 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:00:34.488003 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:00:34.488092 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:00:34.490329 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:00:34.490434 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:00:34.512432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 2 00:00:34.512536 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:00:34.528711 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:00:34.533520 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:00:34.533673 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:00:34.536165 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 00:00:34.536276 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:00:34.538624 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:00:34.538727 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:00:34.549745 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:00:34.555630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:00:34.558340 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:00:34.558957 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:00:34.580937 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:00:34.581347 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:00:34.587676 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:00:34.596740 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:00:34.626687 systemd[1]: Switching root. Jul 2 00:00:34.674638 systemd-journald[251]: Journal stopped Jul 2 00:00:37.156508 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Jul 2 00:00:37.156625 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:00:37.156669 kernel: SELinux: policy capability open_perms=1 Jul 2 00:00:37.156713 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:00:37.156746 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:00:37.156777 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:00:37.156808 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:00:37.156839 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:00:37.156868 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:00:37.156898 kernel: audit: type=1403 audit(1719878435.355:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:00:37.156937 systemd[1]: Successfully loaded SELinux policy in 67.550ms. Jul 2 00:00:37.156982 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.871ms. Jul 2 00:00:37.157018 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:00:37.157049 systemd[1]: Detected virtualization amazon. Jul 2 00:00:37.157083 systemd[1]: Detected architecture arm64. Jul 2 00:00:37.157114 systemd[1]: Detected first boot. Jul 2 00:00:37.157145 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:00:37.157176 zram_generator::config[1487]: No configuration found. Jul 2 00:00:37.157213 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:00:37.157246 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:00:37.157280 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 00:00:37.157310 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jul 2 00:00:37.157344 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 00:00:37.167041 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 00:00:37.167087 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 00:00:37.167120 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 00:00:37.167154 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 00:00:37.167188 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 00:00:37.167222 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 00:00:37.167282 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 00:00:37.167313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:00:37.167347 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:00:37.169417 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 00:00:37.169452 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 00:00:37.169494 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 00:00:37.169525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:00:37.169554 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 00:00:37.169587 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:00:37.169624 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 00:00:37.169656 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jul 2 00:00:37.169686 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 00:00:37.169719 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 00:00:37.169751 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:00:37.169780 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:00:37.169813 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:00:37.169849 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:00:37.169883 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 00:00:37.169915 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 00:00:37.169945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:00:37.169976 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:00:37.170005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:00:37.170038 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 00:00:37.170071 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 00:00:37.170100 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 00:00:37.170131 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 00:00:37.170168 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 00:00:37.170197 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 00:00:37.170227 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 00:00:37.170261 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:00:37.170291 systemd[1]: Reached target machines.target - Containers. 
Jul 2 00:00:37.170320 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 00:00:37.170524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:00:37.175075 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:00:37.175124 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 00:00:37.175158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:00:37.175190 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:00:37.175224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:00:37.175277 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 00:00:37.175311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:00:37.175341 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:00:37.175390 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:00:37.175429 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 00:00:37.175458 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:00:37.175489 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:00:37.175523 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:00:37.175552 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:00:37.175582 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 00:00:37.175612 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 00:00:37.175643 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 2 00:00:37.175677 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:00:37.175713 systemd[1]: Stopped verity-setup.service. Jul 2 00:00:37.175746 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 00:00:37.175777 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 00:00:37.175807 kernel: loop: module loaded Jul 2 00:00:37.175835 kernel: fuse: init (API version 7.39) Jul 2 00:00:37.175867 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 00:00:37.175898 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 00:00:37.175928 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 00:00:37.175957 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 00:00:37.175990 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:00:37.176020 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:00:37.176049 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 00:00:37.176077 kernel: ACPI: bus type drm_connector registered Jul 2 00:00:37.176105 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 00:00:37.176139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:00:37.176169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:00:37.176200 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:00:37.176230 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:00:37.176259 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:00:37.176292 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:00:37.181431 systemd-journald[1575]: Collecting audit messages is disabled. Jul 2 00:00:37.181523 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jul 2 00:00:37.181557 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 00:00:37.181587 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:00:37.181617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:00:37.181646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:00:37.181679 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 00:00:37.181715 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 00:00:37.181747 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 00:00:37.181775 systemd-journald[1575]: Journal started Jul 2 00:00:37.181821 systemd-journald[1575]: Runtime Journal (/run/log/journal/ec2d1599b47e38abb8c071a3e83b2be8) is 8.0M, max 75.3M, 67.3M free. Jul 2 00:00:36.565159 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:00:36.608639 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 00:00:36.609447 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:00:37.202199 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 00:00:37.217441 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 00:00:37.217510 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:00:37.221719 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:00:37.230309 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 00:00:37.244600 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 00:00:37.251453 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jul 2 00:00:37.256391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:00:37.269645 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 00:00:37.273477 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:00:37.284478 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 00:00:37.289400 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:00:37.300395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:00:37.311530 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 00:00:37.318396 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:00:37.328112 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:00:37.325970 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 00:00:37.328421 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 00:00:37.331132 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 00:00:37.376672 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 00:00:37.390072 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 00:00:37.407817 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 00:00:37.413086 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 00:00:37.415985 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 2 00:00:37.430783 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 00:00:37.438714 kernel: loop0: detected capacity change from 0 to 113672 Jul 2 00:00:37.438801 kernel: block loop0: the capability attribute has been deprecated. Jul 2 00:00:37.462926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:00:37.477509 systemd-journald[1575]: Time spent on flushing to /var/log/journal/ec2d1599b47e38abb8c071a3e83b2be8 is 43.517ms for 917 entries. Jul 2 00:00:37.477509 systemd-journald[1575]: System Journal (/var/log/journal/ec2d1599b47e38abb8c071a3e83b2be8) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:00:37.529537 systemd-journald[1575]: Received client request to flush runtime journal. Jul 2 00:00:37.529629 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:00:37.514647 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Jul 2 00:00:37.514672 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Jul 2 00:00:37.540790 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 00:00:37.555451 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:00:37.573963 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 00:00:37.582067 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:00:37.584811 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 00:00:37.591976 udevadm[1625]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 00:00:37.601525 kernel: loop1: detected capacity change from 0 to 59672 Jul 2 00:00:37.631058 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jul 2 00:00:37.643739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:00:37.681652 kernel: loop2: detected capacity change from 0 to 194512 Jul 2 00:00:37.691205 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jul 2 00:00:37.691262 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jul 2 00:00:37.702473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:00:37.736408 kernel: loop3: detected capacity change from 0 to 51896 Jul 2 00:00:37.824400 kernel: loop4: detected capacity change from 0 to 113672 Jul 2 00:00:37.837398 kernel: loop5: detected capacity change from 0 to 59672 Jul 2 00:00:37.851391 kernel: loop6: detected capacity change from 0 to 194512 Jul 2 00:00:37.899391 kernel: loop7: detected capacity change from 0 to 51896 Jul 2 00:00:37.913857 (sd-merge)[1644]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 00:00:37.914838 (sd-merge)[1644]: Merged extensions into '/usr'. Jul 2 00:00:37.926207 systemd[1]: Reloading requested from client PID 1597 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 00:00:37.926241 systemd[1]: Reloading... Jul 2 00:00:38.063400 zram_generator::config[1665]: No configuration found. Jul 2 00:00:38.480427 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:00:38.600001 systemd[1]: Reloading finished in 672 ms. Jul 2 00:00:38.639509 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 00:00:38.655682 systemd[1]: Starting ensure-sysext.service... Jul 2 00:00:38.669663 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Jul 2 00:00:38.702580 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)... Jul 2 00:00:38.702620 systemd[1]: Reloading... Jul 2 00:00:38.764992 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:00:38.765674 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:00:38.769731 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:00:38.770535 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Jul 2 00:00:38.772002 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Jul 2 00:00:38.782621 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:00:38.782821 systemd-tmpfiles[1720]: Skipping /boot Jul 2 00:00:38.810193 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:00:38.810216 systemd-tmpfiles[1720]: Skipping /boot Jul 2 00:00:38.827936 ldconfig[1593]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:00:38.874441 zram_generator::config[1746]: No configuration found. Jul 2 00:00:39.104480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:00:39.214980 systemd[1]: Reloading finished in 511 ms. Jul 2 00:00:39.240541 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 00:00:39.243184 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 00:00:39.250420 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:00:39.276767 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jul 2 00:00:39.285665 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:00:39.298679 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:00:39.314523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:00:39.323289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:00:39.332696 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 00:00:39.347334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:00:39.356874 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:00:39.367845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:00:39.385127 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:00:39.387173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:00:39.391323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:00:39.393813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:00:39.404471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:00:39.415789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:00:39.417946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:00:39.418089 systemd[1]: Reached target time-set.target - System Time Set. 
Jul 2 00:00:39.427722 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 00:00:39.430520 systemd[1]: Finished ensure-sysext.service. Jul 2 00:00:39.434496 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 00:00:39.457295 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 00:00:39.468511 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 00:00:39.474525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:00:39.475677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:00:39.479165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:00:39.479946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:00:39.487866 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:00:39.488828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:00:39.500255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:00:39.501330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:00:39.517833 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:00:39.519188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:00:39.524456 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:00:39.538490 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:00:39.541389 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 2 00:00:39.573244 systemd-udevd[1812]: Using default interface naming scheme 'v255'. Jul 2 00:00:39.582989 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:00:39.596068 augenrules[1837]: No rules Jul 2 00:00:39.593451 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:00:39.632640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:00:39.645663 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:00:39.794803 systemd-resolved[1808]: Positive Trust Anchors: Jul 2 00:00:39.794840 systemd-resolved[1808]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:00:39.794903 systemd-resolved[1808]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:00:39.808519 systemd-resolved[1808]: Defaulting to hostname 'linux'. Jul 2 00:00:39.813499 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:00:39.816481 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:00:39.841553 systemd-networkd[1849]: lo: Link UP Jul 2 00:00:39.841568 systemd-networkd[1849]: lo: Gained carrier Jul 2 00:00:39.843513 systemd-networkd[1849]: Enumeration completed Jul 2 00:00:39.843833 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:00:39.846909 systemd[1]: Reached target network.target - Network. 
Jul 2 00:00:39.876096 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:00:39.886643 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 00:00:39.890175 (udev-worker)[1853]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:00:39.894506 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1863) Jul 2 00:00:40.006638 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:00:40.006948 systemd-networkd[1849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:00:40.009678 systemd-networkd[1849]: eth0: Link UP Jul 2 00:00:40.011101 systemd-networkd[1849]: eth0: Gained carrier Jul 2 00:00:40.011140 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:00:40.020608 systemd-networkd[1849]: eth0: DHCPv4 address 172.31.25.138/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 00:00:40.052441 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1862) Jul 2 00:00:40.134062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:00:40.276820 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 00:00:40.280074 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:00:40.288680 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:00:40.301616 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:00:40.323123 lvm[1968]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Jul 2 00:00:40.339458 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 00:00:40.365999 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:00:40.369499 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:00:40.378758 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:00:40.404395 lvm[1974]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:00:40.426266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:00:40.429844 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:00:40.432220 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:00:40.434724 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:00:40.437640 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:00:40.439918 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:00:40.443345 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:00:40.445644 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:00:40.445813 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:00:40.447539 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:00:40.450822 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:00:40.456872 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:00:40.464915 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jul 2 00:00:40.469450 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:00:40.472026 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:00:40.475522 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:00:40.477437 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:00:40.479261 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:00:40.479319 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:00:40.484590 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:00:40.495699 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 00:00:40.502092 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:00:40.509617 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:00:40.521853 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:00:40.525548 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:00:40.531897 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:00:40.540687 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 00:00:40.546808 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:00:40.552734 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 00:00:40.562622 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:00:40.568708 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:00:40.581784 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 2 00:00:40.588470 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:00:40.589322 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:00:40.593708 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:00:40.599636 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:00:40.609465 jq[1983]: false Jul 2 00:00:40.616785 dbus-daemon[1982]: [system] SELinux support is enabled Jul 2 00:00:40.619426 dbus-daemon[1982]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1849 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 00:00:40.619860 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:00:40.631316 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:00:40.637755 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:00:40.638627 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:00:40.638694 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:00:40.641643 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jul 2 00:00:40.645124 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 00:00:40.641694 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:00:40.685845 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 2 00:00:40.747126 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:00:40.748512 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 00:00:40.769383 jq[1995]: true Jul 2 00:00:40.775663 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 00:00:40.775663 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 00:00:40.775663 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: ---------------------------------------------------- Jul 2 00:00:40.775663 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, Jul 2 00:00:40.775663 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 00:00:40.775663 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: corporation. Support and training for ntp-4 are Jul 2 00:00:40.775663 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: available at https://www.nwtime.org/support Jul 2 00:00:40.775663 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: ---------------------------------------------------- Jul 2 00:00:40.771315 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:11:12 UTC 2024 (1): Starting Jul 2 00:00:40.771378 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 00:00:40.771401 ntpd[1986]: ---------------------------------------------------- Jul 2 00:00:40.771420 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, Jul 2 00:00:40.771438 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 00:00:40.771457 ntpd[1986]: corporation. 
Support and training for ntp-4 are Jul 2 00:00:40.771475 ntpd[1986]: available at https://www.nwtime.org/support Jul 2 00:00:40.771499 ntpd[1986]: ---------------------------------------------------- Jul 2 00:00:40.779846 ntpd[1986]: proto: precision = 0.108 usec (-23) Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: proto: precision = 0.108 usec (-23) Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: basedate set to 2024-06-19 Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: gps base set to 2024-06-23 (week 2320) Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: Listen normally on 3 eth0 172.31.25.138:123 Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: Listen normally on 4 lo [::1]:123 Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: bind(21) AF_INET6 fe80::417:15ff:fe64:5d53%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: unable to create socket on eth0 (5) for fe80::417:15ff:fe64:5d53%2#123 Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: failed to init interface for address fe80::417:15ff:fe64:5d53%2 Jul 2 00:00:40.793644 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: Listening on routing socket on fd #21 for interface updates Jul 2 00:00:40.780295 ntpd[1986]: basedate set to 2024-06-19 Jul 2 00:00:40.794215 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:00:40.794215 ntpd[1986]: 2 Jul 00:00:40 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:00:40.780319 ntpd[1986]: gps base set to 2024-06-23 (week 2320) Jul 2 00:00:40.786984 ntpd[1986]: 
Listen and drop on 0 v6wildcard [::]:123 Jul 2 00:00:40.787067 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 00:00:40.789387 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 00:00:40.789519 ntpd[1986]: Listen normally on 3 eth0 172.31.25.138:123 Jul 2 00:00:40.789588 ntpd[1986]: Listen normally on 4 lo [::1]:123 Jul 2 00:00:40.789679 ntpd[1986]: bind(21) AF_INET6 fe80::417:15ff:fe64:5d53%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 00:00:40.789717 ntpd[1986]: unable to create socket on eth0 (5) for fe80::417:15ff:fe64:5d53%2#123 Jul 2 00:00:40.789744 ntpd[1986]: failed to init interface for address fe80::417:15ff:fe64:5d53%2 Jul 2 00:00:40.789800 ntpd[1986]: Listening on routing socket on fd #21 for interface updates Jul 2 00:00:40.793991 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:00:40.794039 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:00:40.816779 extend-filesystems[1984]: Found loop4 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found loop5 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found loop6 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found loop7 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found nvme0n1 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found nvme0n1p1 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found nvme0n1p2 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found nvme0n1p3 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found usr Jul 2 00:00:40.816779 extend-filesystems[1984]: Found nvme0n1p4 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found nvme0n1p6 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found nvme0n1p7 Jul 2 00:00:40.816779 extend-filesystems[1984]: Found nvme0n1p9 Jul 2 00:00:40.816779 extend-filesystems[1984]: Checking size of /dev/nvme0n1p9 Jul 2 00:00:40.871147 update_engine[1994]: I0702 00:00:40.812333 1994 main.cc:92] Flatcar Update Engine starting Jul 2 00:00:40.871147 update_engine[1994]: I0702 
00:00:40.864084 1994 update_check_scheduler.cc:74] Next update check in 8m3s Jul 2 00:00:40.826900 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:00:40.863796 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:00:40.878760 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:00:40.896000 tar[2010]: linux-arm64/helm Jul 2 00:00:40.913379 coreos-metadata[1981]: Jul 02 00:00:40.909 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:00:40.916475 coreos-metadata[1981]: Jul 02 00:00:40.915 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 00:00:40.918602 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:00:40.921080 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:00:40.923572 coreos-metadata[1981]: Jul 02 00:00:40.923 INFO Fetch successful Jul 2 00:00:40.923572 coreos-metadata[1981]: Jul 02 00:00:40.923 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 00:00:40.929713 extend-filesystems[1984]: Resized partition /dev/nvme0n1p9 Jul 2 00:00:40.927740 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jul 2 00:00:40.932427 coreos-metadata[1981]: Jul 02 00:00:40.931 INFO Fetch successful Jul 2 00:00:40.932427 coreos-metadata[1981]: Jul 02 00:00:40.931 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 00:00:40.933819 coreos-metadata[1981]: Jul 02 00:00:40.933 INFO Fetch successful Jul 2 00:00:40.933819 coreos-metadata[1981]: Jul 02 00:00:40.933 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 00:00:40.934802 coreos-metadata[1981]: Jul 02 00:00:40.934 INFO Fetch successful Jul 2 00:00:40.934802 coreos-metadata[1981]: Jul 02 00:00:40.934 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 00:00:40.935595 coreos-metadata[1981]: Jul 02 00:00:40.935 INFO Fetch failed with 404: resource not found Jul 2 00:00:40.935595 coreos-metadata[1981]: Jul 02 00:00:40.935 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 00:00:40.936615 coreos-metadata[1981]: Jul 02 00:00:40.936 INFO Fetch successful Jul 2 00:00:40.937863 coreos-metadata[1981]: Jul 02 00:00:40.937 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 00:00:40.938408 coreos-metadata[1981]: Jul 02 00:00:40.938 INFO Fetch successful Jul 2 00:00:40.938408 coreos-metadata[1981]: Jul 02 00:00:40.938 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 00:00:40.940094 coreos-metadata[1981]: Jul 02 00:00:40.939 INFO Fetch successful Jul 2 00:00:40.940435 coreos-metadata[1981]: Jul 02 00:00:40.940 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 00:00:40.942989 coreos-metadata[1981]: Jul 02 00:00:40.941 INFO Fetch successful Jul 2 00:00:40.943513 coreos-metadata[1981]: Jul 02 00:00:40.943 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 00:00:40.947378 coreos-metadata[1981]: Jul 02 
00:00:40.944 INFO Fetch successful Jul 2 00:00:40.950349 jq[2018]: true Jul 2 00:00:40.956625 extend-filesystems[2034]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:00:40.978400 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 00:00:41.014054 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 00:00:41.034792 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:00:41.036925 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:00:41.080417 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 00:00:41.139470 extend-filesystems[2034]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 00:00:41.139470 extend-filesystems[2034]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:00:41.139470 extend-filesystems[2034]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 00:00:41.150402 extend-filesystems[1984]: Resized filesystem in /dev/nvme0n1p9 Jul 2 00:00:41.156746 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:00:41.157125 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:00:41.193384 bash[2063]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:00:41.200348 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:00:41.246652 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1871) Jul 2 00:00:41.248069 systemd[1]: Starting sshkeys.service... Jul 2 00:00:41.261470 systemd-networkd[1849]: eth0: Gained IPv6LL Jul 2 00:00:41.269867 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:00:41.272501 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 2 00:00:41.275581 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:00:41.275615 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 00:00:41.278819 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 00:00:41.279509 systemd-logind[1993]: New seat seat0. Jul 2 00:00:41.288611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:00:41.301819 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:00:41.304766 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:00:41.334236 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 00:00:41.343893 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 00:00:41.345627 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 00:00:41.351469 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 00:00:41.367602 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2001 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 00:00:41.380028 systemd[1]: Starting polkit.service - Authorization Manager... Jul 2 00:00:41.454386 amazon-ssm-agent[2074]: Initializing new seelog logger Jul 2 00:00:41.454386 amazon-ssm-agent[2074]: New Seelog Logger Creation Complete Jul 2 00:00:41.454386 amazon-ssm-agent[2074]: 2024/07/02 00:00:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.454386 amazon-ssm-agent[2074]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 2 00:00:41.459403 amazon-ssm-agent[2074]: 2024/07/02 00:00:41 processing appconfig overrides Jul 2 00:00:41.459403 amazon-ssm-agent[2074]: 2024/07/02 00:00:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.459403 amazon-ssm-agent[2074]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.459403 amazon-ssm-agent[2074]: 2024/07/02 00:00:41 processing appconfig overrides Jul 2 00:00:41.459403 amazon-ssm-agent[2074]: 2024/07/02 00:00:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.459403 amazon-ssm-agent[2074]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.459403 amazon-ssm-agent[2074]: 2024/07/02 00:00:41 processing appconfig overrides Jul 2 00:00:41.462888 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO Proxy environment variables: Jul 2 00:00:41.481735 amazon-ssm-agent[2074]: 2024/07/02 00:00:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.483333 amazon-ssm-agent[2074]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:00:41.483333 amazon-ssm-agent[2074]: 2024/07/02 00:00:41 processing appconfig overrides Jul 2 00:00:41.497197 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:00:41.553084 polkitd[2081]: Started polkitd version 121 Jul 2 00:00:41.572953 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO https_proxy: Jul 2 00:00:41.616285 polkitd[2081]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 00:00:41.616438 polkitd[2081]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 00:00:41.627264 polkitd[2081]: Finished loading, compiling and executing 2 rules Jul 2 00:00:41.632618 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 00:00:41.632917 systemd[1]: Started polkit.service - Authorization Manager. 
Jul 2 00:00:41.639189 polkitd[2081]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 00:00:41.673384 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO http_proxy: Jul 2 00:00:41.708912 systemd-hostnamed[2001]: Hostname set to (transient) Jul 2 00:00:41.709089 systemd-resolved[1808]: System hostname changed to 'ip-172-31-25-138'. Jul 2 00:00:41.717500 locksmithd[2028]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:00:41.771730 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO no_proxy: Jul 2 00:00:41.841471 sshd_keygen[2006]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:00:41.871577 coreos-metadata[2079]: Jul 02 00:00:41.870 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:00:41.875623 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO Checking if agent identity type OnPrem can be assumed Jul 2 00:00:41.881923 coreos-metadata[2079]: Jul 02 00:00:41.881 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 00:00:41.888454 coreos-metadata[2079]: Jul 02 00:00:41.883 INFO Fetch successful Jul 2 00:00:41.888454 coreos-metadata[2079]: Jul 02 00:00:41.883 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 00:00:41.895378 coreos-metadata[2079]: Jul 02 00:00:41.894 INFO Fetch successful Jul 2 00:00:41.908889 unknown[2079]: wrote ssh authorized keys file for user: core Jul 2 00:00:41.976674 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO Checking if agent identity type EC2 can be assumed Jul 2 00:00:41.993667 containerd[2015]: time="2024-07-02T00:00:41.992729125Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:00:42.008767 update-ssh-keys[2190]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:00:42.013430 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Jul 2 00:00:42.023434 systemd[1]: Finished sshkeys.service. Jul 2 00:00:42.062929 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:00:42.073535 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:00:42.086937 systemd[1]: Started sshd@0-172.31.25.138:22-147.75.109.163:37502.service - OpenSSH per-connection server daemon (147.75.109.163:37502). Jul 2 00:00:42.101448 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO Agent will take identity from EC2 Jul 2 00:00:42.156006 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:00:42.156389 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:00:42.176109 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:00:42.196710 containerd[2015]: time="2024-07-02T00:00:42.196648930Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:00:42.198662 containerd[2015]: time="2024-07-02T00:00:42.198342574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.204400 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:00:42.212210 containerd[2015]: time="2024-07-02T00:00:42.210079666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:00:42.212210 containerd[2015]: time="2024-07-02T00:00:42.210157307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.215830 containerd[2015]: time="2024-07-02T00:00:42.215775455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:00:42.220408 containerd[2015]: time="2024-07-02T00:00:42.219212243Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:00:42.220408 containerd[2015]: time="2024-07-02T00:00:42.219476279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.220408 containerd[2015]: time="2024-07-02T00:00:42.219593615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:00:42.220408 containerd[2015]: time="2024-07-02T00:00:42.219653723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.220408 containerd[2015]: time="2024-07-02T00:00:42.219807335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.220408 containerd[2015]: time="2024-07-02T00:00:42.220246919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.220408 containerd[2015]: time="2024-07-02T00:00:42.220284167Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:00:42.220408 containerd[2015]: time="2024-07-02T00:00:42.220308959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:00:42.225053 containerd[2015]: time="2024-07-02T00:00:42.224644739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:00:42.225053 containerd[2015]: time="2024-07-02T00:00:42.224731403Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:00:42.225053 containerd[2015]: time="2024-07-02T00:00:42.224883875Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:00:42.225053 containerd[2015]: time="2024-07-02T00:00:42.224909531Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.238380227Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.238455011Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.238485983Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.238565183Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.238680443Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.238711427Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.238741511Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.238994591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.239027891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.239057147Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.239088131Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.239119559Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.239156495Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.239389 containerd[2015]: time="2024-07-02T00:00:42.239227283Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.240085 containerd[2015]: time="2024-07-02T00:00:42.239261135Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.240085 containerd[2015]: time="2024-07-02T00:00:42.239291183Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.240085 containerd[2015]: time="2024-07-02T00:00:42.239320607Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 2 00:00:42.244133 containerd[2015]: time="2024-07-02T00:00:42.241410167Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.244133 containerd[2015]: time="2024-07-02T00:00:42.241490891Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:00:42.244133 containerd[2015]: time="2024-07-02T00:00:42.241728863Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.247903715Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.247984607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.248017415Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.248066375Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.250539239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.250645979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.250694999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.250726727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.250758167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.250787519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.250818479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.250847279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.250878935Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:00:42.251390 containerd[2015]: time="2024-07-02T00:00:42.251188139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.252058 containerd[2015]: time="2024-07-02T00:00:42.251245775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.252058 containerd[2015]: time="2024-07-02T00:00:42.251276207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.252058 containerd[2015]: time="2024-07-02T00:00:42.251308427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.256933 containerd[2015]: time="2024-07-02T00:00:42.251338667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.256933 containerd[2015]: time="2024-07-02T00:00:42.254486987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jul 2 00:00:42.256933 containerd[2015]: time="2024-07-02T00:00:42.254535455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.256933 containerd[2015]: time="2024-07-02T00:00:42.254564363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 00:00:42.257222 containerd[2015]: time="2024-07-02T00:00:42.255128291Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:00:42.257222 containerd[2015]: time="2024-07-02T00:00:42.255252599Z" level=info msg="Connect containerd service" Jul 2 00:00:42.257222 containerd[2015]: time="2024-07-02T00:00:42.255314075Z" level=info msg="using legacy CRI server" Jul 2 00:00:42.257222 containerd[2015]: time="2024-07-02T00:00:42.255331931Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:00:42.257222 containerd[2015]: time="2024-07-02T00:00:42.255506927Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:00:42.261930 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260416079Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260515847Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260560499Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260587967Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260620007Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260637887Z" level=info msg="Start subscribing containerd event" Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260719787Z" level=info msg="Start recovering state" Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260843531Z" level=info msg="Start event monitor" Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260867171Z" level=info msg="Start snapshots syncer" Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260889251Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.260907287Z" level=info msg="Start streaming server" Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.261432479Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.264440399Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:00:42.270243 containerd[2015]: time="2024-07-02T00:00:42.267481691Z" level=info msg="containerd successfully booted in 0.283338s" Jul 2 00:00:42.274002 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:00:42.285944 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:00:42.288538 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:00:42.292137 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 2 00:00:42.307700 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:00:42.377870 sshd[2204]: Accepted publickey for core from 147.75.109.163 port 37502 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:42.382405 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:42.401463 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:00:42.407226 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:00:42.420723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:00:42.434485 systemd-logind[1993]: New session 1 of user core. Jul 2 00:00:42.468408 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:00:42.495902 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:00:42.502447 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 00:00:42.507609 (systemd)[2225]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:42.605893 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 2 00:00:42.709543 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 00:00:42.793073 systemd[2225]: Queued start job for default target default.target. Jul 2 00:00:42.801417 systemd[2225]: Created slice app.slice - User Application Slice. Jul 2 00:00:42.801484 systemd[2225]: Reached target paths.target - Paths. Jul 2 00:00:42.801517 systemd[2225]: Reached target timers.target - Timers. Jul 2 00:00:42.808208 systemd[2225]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:00:42.810759 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jul 2 00:00:42.849476 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO [Registrar] Starting registrar module Jul 2 00:00:42.849476 amazon-ssm-agent[2074]: 2024-07-02 00:00:41 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 00:00:42.849657 amazon-ssm-agent[2074]: 2024-07-02 00:00:42 INFO [EC2Identity] EC2 registration was successful. Jul 2 00:00:42.849657 amazon-ssm-agent[2074]: 2024-07-02 00:00:42 INFO [CredentialRefresher] credentialRefresher has started Jul 2 00:00:42.849657 amazon-ssm-agent[2074]: 2024-07-02 00:00:42 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 00:00:42.849657 amazon-ssm-agent[2074]: 2024-07-02 00:00:42 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 00:00:42.854808 systemd[2225]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:00:42.855078 systemd[2225]: Reached target sockets.target - Sockets. Jul 2 00:00:42.855129 systemd[2225]: Reached target basic.target - Basic System. Jul 2 00:00:42.855243 systemd[2225]: Reached target default.target - Main User Target. Jul 2 00:00:42.855311 systemd[2225]: Startup finished in 329ms. Jul 2 00:00:42.855343 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:00:42.866690 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:00:42.910907 amazon-ssm-agent[2074]: 2024-07-02 00:00:42 INFO [CredentialRefresher] Next credential rotation will be in 32.474991055 minutes Jul 2 00:00:43.016949 tar[2010]: linux-arm64/LICENSE Jul 2 00:00:43.016949 tar[2010]: linux-arm64/README.md Jul 2 00:00:43.057636 systemd[1]: Started sshd@1-172.31.25.138:22-147.75.109.163:58052.service - OpenSSH per-connection server daemon (147.75.109.163:58052). Jul 2 00:00:43.080217 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 2 00:00:43.249978 sshd[2239]: Accepted publickey for core from 147.75.109.163 port 58052 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:43.252676 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:43.263970 systemd-logind[1993]: New session 2 of user core. Jul 2 00:00:43.267977 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:00:43.370852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:00:43.378226 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:00:43.384209 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:00:43.390561 systemd[1]: Startup finished in 1.160s (kernel) + 9.540s (initrd) + 8.100s (userspace) = 18.802s. Jul 2 00:00:43.404675 sshd[2239]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:43.416196 systemd[1]: sshd@1-172.31.25.138:22-147.75.109.163:58052.service: Deactivated successfully. Jul 2 00:00:43.421212 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:00:43.425252 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:00:43.442034 systemd[1]: Started sshd@2-172.31.25.138:22-147.75.109.163:58056.service - OpenSSH per-connection server daemon (147.75.109.163:58056). Jul 2 00:00:43.444083 systemd-logind[1993]: Removed session 2. Jul 2 00:00:43.635121 sshd[2257]: Accepted publickey for core from 147.75.109.163 port 58056 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:43.637386 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:43.646291 systemd-logind[1993]: New session 3 of user core. Jul 2 00:00:43.652626 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 2 00:00:43.772065 ntpd[1986]: Listen normally on 6 eth0 [fe80::417:15ff:fe64:5d53%2]:123 Jul 2 00:00:43.772544 ntpd[1986]: 2 Jul 00:00:43 ntpd[1986]: Listen normally on 6 eth0 [fe80::417:15ff:fe64:5d53%2]:123 Jul 2 00:00:43.785154 sshd[2257]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:43.793204 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:00:43.794110 systemd[1]: sshd@2-172.31.25.138:22-147.75.109.163:58056.service: Deactivated successfully. Jul 2 00:00:43.800560 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:00:43.803044 systemd-logind[1993]: Removed session 3. Jul 2 00:00:43.889704 amazon-ssm-agent[2074]: 2024-07-02 00:00:43 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 00:00:43.990665 amazon-ssm-agent[2074]: 2024-07-02 00:00:43 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2268) started Jul 2 00:00:44.091637 amazon-ssm-agent[2074]: 2024-07-02 00:00:43 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 00:00:44.215318 kubelet[2249]: E0702 00:00:44.215079 2249 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:00:44.220938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:00:44.221298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:00:44.222062 systemd[1]: kubelet.service: Consumed 1.352s CPU time. 
Jul 2 00:00:53.826874 systemd[1]: Started sshd@3-172.31.25.138:22-147.75.109.163:37392.service - OpenSSH per-connection server daemon (147.75.109.163:37392). Jul 2 00:00:54.005829 sshd[2282]: Accepted publickey for core from 147.75.109.163 port 37392 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:54.008403 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:54.015780 systemd-logind[1993]: New session 4 of user core. Jul 2 00:00:54.027613 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:00:54.159834 sshd[2282]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:54.166237 systemd[1]: sshd@3-172.31.25.138:22-147.75.109.163:37392.service: Deactivated successfully. Jul 2 00:00:54.170438 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:00:54.171917 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:00:54.173879 systemd-logind[1993]: Removed session 4. Jul 2 00:00:54.206897 systemd[1]: Started sshd@4-172.31.25.138:22-147.75.109.163:37404.service - OpenSSH per-connection server daemon (147.75.109.163:37404). Jul 2 00:00:54.326495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:00:54.342708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:00:54.383337 sshd[2289]: Accepted publickey for core from 147.75.109.163 port 37404 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:54.385849 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:54.396478 systemd-logind[1993]: New session 5 of user core. Jul 2 00:00:54.402677 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:00:54.527018 sshd[2289]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:54.533287 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. 
Jul 2 00:00:54.534263 systemd[1]: sshd@4-172.31.25.138:22-147.75.109.163:37404.service: Deactivated successfully. Jul 2 00:00:54.537203 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:00:54.540610 systemd-logind[1993]: Removed session 5. Jul 2 00:00:54.567128 systemd[1]: Started sshd@5-172.31.25.138:22-147.75.109.163:37412.service - OpenSSH per-connection server daemon (147.75.109.163:37412). Jul 2 00:00:54.734325 sshd[2299]: Accepted publickey for core from 147.75.109.163 port 37412 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:54.737076 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:54.744636 systemd-logind[1993]: New session 6 of user core. Jul 2 00:00:54.756136 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:00:54.833693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:00:54.835225 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:00:54.887736 sshd[2299]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:54.895239 systemd[1]: sshd@5-172.31.25.138:22-147.75.109.163:37412.service: Deactivated successfully. Jul 2 00:00:54.905596 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:00:54.907563 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:00:54.928901 systemd[1]: Started sshd@6-172.31.25.138:22-147.75.109.163:37416.service - OpenSSH per-connection server daemon (147.75.109.163:37416). Jul 2 00:00:54.931078 systemd-logind[1993]: Removed session 6. 
Jul 2 00:00:54.948890 kubelet[2307]: E0702 00:00:54.948804 2307 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:00:54.958175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:00:54.958550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:00:55.115732 sshd[2318]: Accepted publickey for core from 147.75.109.163 port 37416 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:55.118639 sshd[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:55.126132 systemd-logind[1993]: New session 7 of user core. Jul 2 00:00:55.136599 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:00:55.253107 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:00:55.253759 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:00:55.268800 sudo[2322]: pam_unix(sudo:session): session closed for user root Jul 2 00:00:55.292760 sshd[2318]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:55.298104 systemd[1]: sshd@6-172.31.25.138:22-147.75.109.163:37416.service: Deactivated successfully. Jul 2 00:00:55.301629 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:00:55.305928 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:00:55.308181 systemd-logind[1993]: Removed session 7. Jul 2 00:00:55.339878 systemd[1]: Started sshd@7-172.31.25.138:22-147.75.109.163:37424.service - OpenSSH per-connection server daemon (147.75.109.163:37424). 
Jul 2 00:00:55.508827 sshd[2327]: Accepted publickey for core from 147.75.109.163 port 37424 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:55.510800 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:55.517921 systemd-logind[1993]: New session 8 of user core. Jul 2 00:00:55.527636 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:00:55.632627 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:00:55.633140 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:00:55.640093 sudo[2331]: pam_unix(sudo:session): session closed for user root Jul 2 00:00:55.649843 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:00:55.650440 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:00:55.678564 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:00:55.680653 auditctl[2334]: No rules Jul 2 00:00:55.682538 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:00:55.683523 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:00:55.692003 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:00:55.739288 augenrules[2352]: No rules Jul 2 00:00:55.741893 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:00:55.745988 sudo[2330]: pam_unix(sudo:session): session closed for user root Jul 2 00:00:55.769058 sshd[2327]: pam_unix(sshd:session): session closed for user core Jul 2 00:00:55.775460 systemd[1]: sshd@7-172.31.25.138:22-147.75.109.163:37424.service: Deactivated successfully. Jul 2 00:00:55.779044 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:00:55.782909 systemd-logind[1993]: Session 8 logged out. 
Waiting for processes to exit. Jul 2 00:00:55.784839 systemd-logind[1993]: Removed session 8. Jul 2 00:00:55.808882 systemd[1]: Started sshd@8-172.31.25.138:22-147.75.109.163:37440.service - OpenSSH per-connection server daemon (147.75.109.163:37440). Jul 2 00:00:55.991160 sshd[2360]: Accepted publickey for core from 147.75.109.163 port 37440 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:00:55.993618 sshd[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:00:56.002676 systemd-logind[1993]: New session 9 of user core. Jul 2 00:00:56.010615 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:00:56.114176 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:00:56.114936 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:00:56.321854 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:00:56.326306 (dockerd)[2372]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:00:56.842939 dockerd[2372]: time="2024-07-02T00:00:56.842564562Z" level=info msg="Starting up" Jul 2 00:00:56.922910 systemd[1]: var-lib-docker-metacopy\x2dcheck3264967310-merged.mount: Deactivated successfully. Jul 2 00:00:56.946069 dockerd[2372]: time="2024-07-02T00:00:56.945986365Z" level=info msg="Loading containers: start." Jul 2 00:00:57.218421 kernel: Initializing XFRM netlink socket Jul 2 00:00:57.350102 (udev-worker)[2384]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:00:57.429645 systemd-networkd[1849]: docker0: Link UP Jul 2 00:00:57.448687 dockerd[2372]: time="2024-07-02T00:00:57.448621552Z" level=info msg="Loading containers: done." 
Jul 2 00:00:57.725578 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3574503634-merged.mount: Deactivated successfully. Jul 2 00:00:57.735870 dockerd[2372]: time="2024-07-02T00:00:57.735795789Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:00:57.736169 dockerd[2372]: time="2024-07-02T00:00:57.736116413Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:00:57.736377 dockerd[2372]: time="2024-07-02T00:00:57.736329570Z" level=info msg="Daemon has completed initialization" Jul 2 00:00:57.793742 dockerd[2372]: time="2024-07-02T00:00:57.793658348Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:00:57.794157 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:00:59.038114 containerd[2015]: time="2024-07-02T00:00:59.038002616Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 00:00:59.680062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579070719.mount: Deactivated successfully. 
Jul 2 00:01:01.663984 containerd[2015]: time="2024-07-02T00:01:01.663924294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:01.665689 containerd[2015]: time="2024-07-02T00:01:01.665627598Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=32256347" Jul 2 00:01:01.668113 containerd[2015]: time="2024-07-02T00:01:01.668049187Z" level=info msg="ImageCreate event name:\"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:01.673760 containerd[2015]: time="2024-07-02T00:01:01.673678379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:01.676262 containerd[2015]: time="2024-07-02T00:01:01.676029863Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"32253147\" in 2.637946119s" Jul 2 00:01:01.676262 containerd[2015]: time="2024-07-02T00:01:01.676087590Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jul 2 00:01:01.718002 containerd[2015]: time="2024-07-02T00:01:01.717942332Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 00:01:04.042434 containerd[2015]: time="2024-07-02T00:01:04.041699611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:04.043972 containerd[2015]: time="2024-07-02T00:01:04.043900090Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=29228084" Jul 2 00:01:04.045717 containerd[2015]: time="2024-07-02T00:01:04.045627646Z" level=info msg="ImageCreate event name:\"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:04.051369 containerd[2015]: time="2024-07-02T00:01:04.051266877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:04.054285 containerd[2015]: time="2024-07-02T00:01:04.053630499Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"30685210\" in 2.33558142s" Jul 2 00:01:04.054285 containerd[2015]: time="2024-07-02T00:01:04.053740172Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jul 2 00:01:04.096680 containerd[2015]: time="2024-07-02T00:01:04.096618902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 00:01:05.076538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:01:05.090375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:01:05.618675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:01:05.630059 (kubelet)[2583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:01:05.745283 kubelet[2583]: E0702 00:01:05.745007 2583 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:01:05.750836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:01:05.751155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:01:05.816532 containerd[2015]: time="2024-07-02T00:01:05.816464462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:05.819129 containerd[2015]: time="2024-07-02T00:01:05.818500444Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=15578348" Jul 2 00:01:05.819997 containerd[2015]: time="2024-07-02T00:01:05.819924875Z" level=info msg="ImageCreate event name:\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:05.826558 containerd[2015]: time="2024-07-02T00:01:05.826474499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:05.830421 containerd[2015]: time="2024-07-02T00:01:05.830226917Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"17035492\" in 1.733541941s" Jul 2 00:01:05.830421 containerd[2015]: time="2024-07-02T00:01:05.830282390Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jul 2 00:01:05.869893 containerd[2015]: time="2024-07-02T00:01:05.868897876Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 00:01:07.201661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873923541.mount: Deactivated successfully. Jul 2 00:01:07.774600 containerd[2015]: time="2024-07-02T00:01:07.774535475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:07.779974 containerd[2015]: time="2024-07-02T00:01:07.778812979Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=25052710" Jul 2 00:01:07.781889 containerd[2015]: time="2024-07-02T00:01:07.781756342Z" level=info msg="ImageCreate event name:\"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:07.787582 containerd[2015]: time="2024-07-02T00:01:07.787470296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:07.789603 containerd[2015]: time="2024-07-02T00:01:07.789413324Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"25051729\" in 1.920455202s" Jul 2 00:01:07.789603 containerd[2015]: time="2024-07-02T00:01:07.789474481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jul 2 00:01:07.836135 containerd[2015]: time="2024-07-02T00:01:07.836056803Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:01:08.439956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064447305.mount: Deactivated successfully. Jul 2 00:01:09.664188 containerd[2015]: time="2024-07-02T00:01:09.664105778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:09.666343 containerd[2015]: time="2024-07-02T00:01:09.666266965Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jul 2 00:01:09.667894 containerd[2015]: time="2024-07-02T00:01:09.667800794Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:09.674079 containerd[2015]: time="2024-07-02T00:01:09.673970640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:09.676675 containerd[2015]: time="2024-07-02T00:01:09.676610135Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.840488097s" Jul 2 00:01:09.676795 containerd[2015]: time="2024-07-02T00:01:09.676670501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 00:01:09.717080 containerd[2015]: time="2024-07-02T00:01:09.717015463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:01:10.238670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144629989.mount: Deactivated successfully. Jul 2 00:01:10.250161 containerd[2015]: time="2024-07-02T00:01:10.250081610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:10.252407 containerd[2015]: time="2024-07-02T00:01:10.252316740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jul 2 00:01:10.254145 containerd[2015]: time="2024-07-02T00:01:10.254068943Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:10.259010 containerd[2015]: time="2024-07-02T00:01:10.258878897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:10.260735 containerd[2015]: time="2024-07-02T00:01:10.260512875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 543.415026ms" Jul 2 
00:01:10.260735 containerd[2015]: time="2024-07-02T00:01:10.260573325Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 00:01:10.302778 containerd[2015]: time="2024-07-02T00:01:10.302720085Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 00:01:10.902409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480891391.mount: Deactivated successfully. Jul 2 00:01:11.743741 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 00:01:14.154415 containerd[2015]: time="2024-07-02T00:01:14.153626337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:14.156295 containerd[2015]: time="2024-07-02T00:01:14.156216717Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jul 2 00:01:14.160047 containerd[2015]: time="2024-07-02T00:01:14.159973921Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:14.169850 containerd[2015]: time="2024-07-02T00:01:14.169772156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:14.174912 containerd[2015]: time="2024-07-02T00:01:14.174676910Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.871894012s" Jul 2 00:01:14.174912 containerd[2015]: 
time="2024-07-02T00:01:14.174744928Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 00:01:15.826534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 00:01:15.835947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:01:16.439859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:01:16.450075 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:01:16.548397 kubelet[2780]: E0702 00:01:16.546027 2780 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:01:16.551871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:01:16.552483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:01:21.050826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:01:21.058916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:01:21.108583 systemd[1]: Reloading requested from client PID 2794 ('systemctl') (unit session-9.scope)... Jul 2 00:01:21.108819 systemd[1]: Reloading... Jul 2 00:01:21.303428 zram_generator::config[2832]: No configuration found. Jul 2 00:01:21.550946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:01:21.722442 systemd[1]: Reloading finished in 612 ms. 
Jul 2 00:01:21.813225 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:01:21.813462 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:01:21.814086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:01:21.820938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:01:22.536252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:01:22.556904 (kubelet)[2895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:01:22.639137 kubelet[2895]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:01:22.639137 kubelet[2895]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:01:22.639661 kubelet[2895]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:01:22.639661 kubelet[2895]: I0702 00:01:22.639257 2895 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:01:23.635436 kubelet[2895]: I0702 00:01:23.635228 2895 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:01:23.635436 kubelet[2895]: I0702 00:01:23.635273 2895 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:01:23.635740 kubelet[2895]: I0702 00:01:23.635685 2895 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:01:23.671074 kubelet[2895]: E0702 00:01:23.670498 2895 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.25.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:23.671074 kubelet[2895]: I0702 00:01:23.670576 2895 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:01:23.690258 kubelet[2895]: I0702 00:01:23.690167 2895 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:01:23.692697 kubelet[2895]: I0702 00:01:23.692647 2895 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:01:23.693025 kubelet[2895]: I0702 00:01:23.692978 2895 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:01:23.693025 kubelet[2895]: I0702 00:01:23.693021 2895 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:01:23.693252 kubelet[2895]: I0702 00:01:23.693043 2895 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:01:23.695935 kubelet[2895]: I0702 
00:01:23.695887 2895 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:01:23.700595 kubelet[2895]: I0702 00:01:23.700438 2895 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:01:23.700595 kubelet[2895]: I0702 00:01:23.700484 2895 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:01:23.700595 kubelet[2895]: I0702 00:01:23.700527 2895 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:01:23.700595 kubelet[2895]: I0702 00:01:23.700560 2895 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:01:23.702392 kubelet[2895]: W0702 00:01:23.701418 2895 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.25.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-138&limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:23.702392 kubelet[2895]: E0702 00:01:23.701495 2895 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.25.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-138&limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:23.703669 kubelet[2895]: W0702 00:01:23.703595 2895 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.25.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:23.703669 kubelet[2895]: E0702 00:01:23.703673 2895 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.25.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:23.705113 kubelet[2895]: I0702 00:01:23.704169 2895 kuberuntime_manager.go:258] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:01:23.705113 kubelet[2895]: I0702 00:01:23.704677 2895 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:01:23.707414 kubelet[2895]: W0702 00:01:23.705918 2895 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:01:23.707414 kubelet[2895]: I0702 00:01:23.707017 2895 server.go:1256] "Started kubelet" Jul 2 00:01:23.712834 kubelet[2895]: I0702 00:01:23.712789 2895 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:01:23.713885 kubelet[2895]: I0702 00:01:23.713848 2895 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:01:23.714251 kubelet[2895]: I0702 00:01:23.714205 2895 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:01:23.714655 kubelet[2895]: I0702 00:01:23.714629 2895 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:01:23.716959 kubelet[2895]: E0702 00:01:23.716918 2895 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.138:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.138:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-138.17de3c5e5e312fa5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-138,UID:ip-172-31-25-138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-138,},FirstTimestamp:2024-07-02 00:01:23.706982309 +0000 UTC m=+1.143213187,LastTimestamp:2024-07-02 00:01:23.706982309 +0000 UTC m=+1.143213187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-138,}" Jul 2 00:01:23.720487 kubelet[2895]: I0702 00:01:23.720438 2895 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:01:23.723930 kubelet[2895]: I0702 00:01:23.723888 2895 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:01:23.726840 kubelet[2895]: I0702 00:01:23.726800 2895 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:01:23.729349 kubelet[2895]: I0702 00:01:23.729297 2895 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:01:23.730054 kubelet[2895]: W0702 00:01:23.729971 2895 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.25.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:23.730054 kubelet[2895]: E0702 00:01:23.730058 2895 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.25.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:23.730393 kubelet[2895]: E0702 00:01:23.730337 2895 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:01:23.731027 kubelet[2895]: E0702 00:01:23.730934 2895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-138?timeout=10s\": dial tcp 172.31.25.138:6443: connect: connection refused" interval="200ms" Jul 2 00:01:23.732892 kubelet[2895]: I0702 00:01:23.732860 2895 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:01:23.734238 kubelet[2895]: I0702 00:01:23.733202 2895 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:01:23.735338 kubelet[2895]: I0702 00:01:23.735304 2895 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:01:23.775829 kubelet[2895]: I0702 00:01:23.775717 2895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:01:23.777888 kubelet[2895]: I0702 00:01:23.777831 2895 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:01:23.778022 kubelet[2895]: I0702 00:01:23.777879 2895 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:01:23.778022 kubelet[2895]: I0702 00:01:23.777936 2895 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:01:23.783169 kubelet[2895]: I0702 00:01:23.783119 2895 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:01:23.795595 kubelet[2895]: I0702 00:01:23.783190 2895 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:01:23.795595 kubelet[2895]: I0702 00:01:23.783223 2895 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:01:23.795595 kubelet[2895]: E0702 00:01:23.783381 2895 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:01:23.795595 kubelet[2895]: W0702 00:01:23.785127 2895 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.25.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:23.795595 kubelet[2895]: E0702 00:01:23.785274 2895 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.25.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:23.797385 kubelet[2895]: I0702 00:01:23.796575 2895 policy_none.go:49] "None policy: Start" Jul 2 00:01:23.798099 kubelet[2895]: I0702 00:01:23.798056 2895 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:01:23.798184 kubelet[2895]: I0702 00:01:23.798149 2895 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:01:23.817969 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 2 00:01:23.834556 kubelet[2895]: I0702 00:01:23.834496 2895 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-138" Jul 2 00:01:23.835077 kubelet[2895]: E0702 00:01:23.835036 2895 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.138:6443/api/v1/nodes\": dial tcp 172.31.25.138:6443: connect: connection refused" node="ip-172-31-25-138" Jul 2 00:01:23.842174 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 00:01:23.849240 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 00:01:23.861619 kubelet[2895]: I0702 00:01:23.860923 2895 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:01:23.861755 kubelet[2895]: I0702 00:01:23.861736 2895 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:01:23.865941 kubelet[2895]: E0702 00:01:23.865659 2895 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-138\" not found" Jul 2 00:01:23.883723 kubelet[2895]: I0702 00:01:23.883665 2895 topology_manager.go:215] "Topology Admit Handler" podUID="1cd3b218a4d2458ae029402fc5ae4b01" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:23.887651 kubelet[2895]: I0702 00:01:23.886241 2895 topology_manager.go:215] "Topology Admit Handler" podUID="14243f51ad17b4ba12f363db0d865bb5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-25-138" Jul 2 00:01:23.890798 kubelet[2895]: I0702 00:01:23.890740 2895 topology_manager.go:215] "Topology Admit Handler" podUID="c6e2d131d26d078e014ee8e2bea4a97f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-25-138" Jul 2 00:01:23.904655 systemd[1]: Created slice kubepods-burstable-pod1cd3b218a4d2458ae029402fc5ae4b01.slice - libcontainer container 
kubepods-burstable-pod1cd3b218a4d2458ae029402fc5ae4b01.slice. Jul 2 00:01:23.925898 systemd[1]: Created slice kubepods-burstable-pod14243f51ad17b4ba12f363db0d865bb5.slice - libcontainer container kubepods-burstable-pod14243f51ad17b4ba12f363db0d865bb5.slice. Jul 2 00:01:23.932100 kubelet[2895]: E0702 00:01:23.932046 2895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-138?timeout=10s\": dial tcp 172.31.25.138:6443: connect: connection refused" interval="400ms" Jul 2 00:01:23.937758 systemd[1]: Created slice kubepods-burstable-podc6e2d131d26d078e014ee8e2bea4a97f.slice - libcontainer container kubepods-burstable-podc6e2d131d26d078e014ee8e2bea4a97f.slice. Jul 2 00:01:24.030894 kubelet[2895]: I0702 00:01:24.030832 2895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:24.031044 kubelet[2895]: I0702 00:01:24.030917 2895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:24.031044 kubelet[2895]: I0702 00:01:24.030965 2895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14243f51ad17b4ba12f363db0d865bb5-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-138\" (UID: \"14243f51ad17b4ba12f363db0d865bb5\") " 
pod="kube-system/kube-scheduler-ip-172-31-25-138" Jul 2 00:01:24.031044 kubelet[2895]: I0702 00:01:24.031010 2895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6e2d131d26d078e014ee8e2bea4a97f-ca-certs\") pod \"kube-apiserver-ip-172-31-25-138\" (UID: \"c6e2d131d26d078e014ee8e2bea4a97f\") " pod="kube-system/kube-apiserver-ip-172-31-25-138" Jul 2 00:01:24.031204 kubelet[2895]: I0702 00:01:24.031057 2895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6e2d131d26d078e014ee8e2bea4a97f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-138\" (UID: \"c6e2d131d26d078e014ee8e2bea4a97f\") " pod="kube-system/kube-apiserver-ip-172-31-25-138" Jul 2 00:01:24.031204 kubelet[2895]: I0702 00:01:24.031104 2895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:24.031204 kubelet[2895]: I0702 00:01:24.031146 2895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:24.031204 kubelet[2895]: I0702 00:01:24.031203 2895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:24.031430 kubelet[2895]: I0702 00:01:24.031250 2895 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6e2d131d26d078e014ee8e2bea4a97f-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-138\" (UID: \"c6e2d131d26d078e014ee8e2bea4a97f\") " pod="kube-system/kube-apiserver-ip-172-31-25-138" Jul 2 00:01:24.037730 kubelet[2895]: I0702 00:01:24.037647 2895 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-138" Jul 2 00:01:24.038235 kubelet[2895]: E0702 00:01:24.038181 2895 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.138:6443/api/v1/nodes\": dial tcp 172.31.25.138:6443: connect: connection refused" node="ip-172-31-25-138" Jul 2 00:01:24.221028 containerd[2015]: time="2024-07-02T00:01:24.220837667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-138,Uid:1cd3b218a4d2458ae029402fc5ae4b01,Namespace:kube-system,Attempt:0,}" Jul 2 00:01:24.232283 containerd[2015]: time="2024-07-02T00:01:24.232185730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-138,Uid:14243f51ad17b4ba12f363db0d865bb5,Namespace:kube-system,Attempt:0,}" Jul 2 00:01:24.243773 containerd[2015]: time="2024-07-02T00:01:24.243255904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-138,Uid:c6e2d131d26d078e014ee8e2bea4a97f,Namespace:kube-system,Attempt:0,}" Jul 2 00:01:24.333185 kubelet[2895]: E0702 00:01:24.333126 2895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-138?timeout=10s\": dial tcp 172.31.25.138:6443: connect: connection 
refused" interval="800ms" Jul 2 00:01:24.440549 kubelet[2895]: I0702 00:01:24.440494 2895 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-138" Jul 2 00:01:24.441105 kubelet[2895]: E0702 00:01:24.441016 2895 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.138:6443/api/v1/nodes\": dial tcp 172.31.25.138:6443: connect: connection refused" node="ip-172-31-25-138" Jul 2 00:01:24.509282 kubelet[2895]: W0702 00:01:24.509073 2895 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.25.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:24.509282 kubelet[2895]: E0702 00:01:24.509163 2895 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.25.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:24.765210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376519638.mount: Deactivated successfully. 
Jul 2 00:01:24.778389 containerd[2015]: time="2024-07-02T00:01:24.778295052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:01:24.780255 containerd[2015]: time="2024-07-02T00:01:24.780206512Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:01:24.782274 containerd[2015]: time="2024-07-02T00:01:24.782231591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:01:24.783848 containerd[2015]: time="2024-07-02T00:01:24.783779189Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 00:01:24.786447 containerd[2015]: time="2024-07-02T00:01:24.785926607Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:01:24.789201 containerd[2015]: time="2024-07-02T00:01:24.788571440Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:01:24.789201 containerd[2015]: time="2024-07-02T00:01:24.788770936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:01:24.795581 containerd[2015]: time="2024-07-02T00:01:24.795502352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:01:24.799214 
containerd[2015]: time="2024-07-02T00:01:24.798902436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.488123ms" Jul 2 00:01:24.805187 containerd[2015]: time="2024-07-02T00:01:24.805107495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.115562ms" Jul 2 00:01:24.805921 containerd[2015]: time="2024-07-02T00:01:24.805701102Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.280448ms" Jul 2 00:01:25.078711 containerd[2015]: time="2024-07-02T00:01:25.078540411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:25.078711 containerd[2015]: time="2024-07-02T00:01:25.078647794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:25.079723 containerd[2015]: time="2024-07-02T00:01:25.079464825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:25.079957 containerd[2015]: time="2024-07-02T00:01:25.079870030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:25.080587 containerd[2015]: time="2024-07-02T00:01:25.080198786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:25.080587 containerd[2015]: time="2024-07-02T00:01:25.080306276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:25.081140 containerd[2015]: time="2024-07-02T00:01:25.080964915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:25.082208 containerd[2015]: time="2024-07-02T00:01:25.082020746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:25.088265 containerd[2015]: time="2024-07-02T00:01:25.088073231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:25.088733 containerd[2015]: time="2024-07-02T00:01:25.088207276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:25.089427 containerd[2015]: time="2024-07-02T00:01:25.088561962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:25.089427 containerd[2015]: time="2024-07-02T00:01:25.088604565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:25.129577 kubelet[2895]: W0702 00:01:25.129446 2895 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.25.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:25.129577 kubelet[2895]: E0702 00:01:25.129541 2895 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.25.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:25.133983 kubelet[2895]: E0702 00:01:25.133685 2895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-138?timeout=10s\": dial tcp 172.31.25.138:6443: connect: connection refused" interval="1.6s" Jul 2 00:01:25.132713 systemd[1]: Started cri-containerd-3fbafc60d2835a5bfafcb06e794640394640067c866e54d4b9b6687744067ad5.scope - libcontainer container 3fbafc60d2835a5bfafcb06e794640394640067c866e54d4b9b6687744067ad5. Jul 2 00:01:25.148656 systemd[1]: Started cri-containerd-d8afed146e38be201def94a1a6fc570b58a101d4309e7cd197987377a8191665.scope - libcontainer container d8afed146e38be201def94a1a6fc570b58a101d4309e7cd197987377a8191665. 
Jul 2 00:01:25.153998 kubelet[2895]: W0702 00:01:25.153879 2895 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.25.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-138&limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:25.154620 kubelet[2895]: E0702 00:01:25.154188 2895 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.25.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-138&limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:25.162750 systemd[1]: Started cri-containerd-14462829601f05f4c12432a84ff6917655010e46bced95f93af8aa0399ab99d5.scope - libcontainer container 14462829601f05f4c12432a84ff6917655010e46bced95f93af8aa0399ab99d5. Jul 2 00:01:25.244163 kubelet[2895]: W0702 00:01:25.243830 2895 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.25.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:25.244163 kubelet[2895]: E0702 00:01:25.243897 2895 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.25.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:25.245845 kubelet[2895]: I0702 00:01:25.245436 2895 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-138" Jul 2 00:01:25.247437 kubelet[2895]: E0702 00:01:25.247076 2895 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.138:6443/api/v1/nodes\": dial tcp 172.31.25.138:6443: connect: connection refused" 
node="ip-172-31-25-138" Jul 2 00:01:25.255832 containerd[2015]: time="2024-07-02T00:01:25.254811180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-138,Uid:c6e2d131d26d078e014ee8e2bea4a97f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fbafc60d2835a5bfafcb06e794640394640067c866e54d4b9b6687744067ad5\"" Jul 2 00:01:25.266759 containerd[2015]: time="2024-07-02T00:01:25.266309073Z" level=info msg="CreateContainer within sandbox \"3fbafc60d2835a5bfafcb06e794640394640067c866e54d4b9b6687744067ad5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:01:25.283300 containerd[2015]: time="2024-07-02T00:01:25.282967239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-138,Uid:1cd3b218a4d2458ae029402fc5ae4b01,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8afed146e38be201def94a1a6fc570b58a101d4309e7cd197987377a8191665\"" Jul 2 00:01:25.299007 containerd[2015]: time="2024-07-02T00:01:25.298829101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-138,Uid:14243f51ad17b4ba12f363db0d865bb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"14462829601f05f4c12432a84ff6917655010e46bced95f93af8aa0399ab99d5\"" Jul 2 00:01:25.301697 containerd[2015]: time="2024-07-02T00:01:25.301645892Z" level=info msg="CreateContainer within sandbox \"d8afed146e38be201def94a1a6fc570b58a101d4309e7cd197987377a8191665\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:01:25.307848 containerd[2015]: time="2024-07-02T00:01:25.307541074Z" level=info msg="CreateContainer within sandbox \"14462829601f05f4c12432a84ff6917655010e46bced95f93af8aa0399ab99d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:01:25.335630 containerd[2015]: time="2024-07-02T00:01:25.334631826Z" level=info msg="CreateContainer within sandbox 
\"3fbafc60d2835a5bfafcb06e794640394640067c866e54d4b9b6687744067ad5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b641494727322eb98d6daad0c6a5fff799c11d739ddfe539c7021d7e0e81b26d\"" Jul 2 00:01:25.337862 containerd[2015]: time="2024-07-02T00:01:25.337708995Z" level=info msg="StartContainer for \"b641494727322eb98d6daad0c6a5fff799c11d739ddfe539c7021d7e0e81b26d\"" Jul 2 00:01:25.359079 containerd[2015]: time="2024-07-02T00:01:25.358938267Z" level=info msg="CreateContainer within sandbox \"14462829601f05f4c12432a84ff6917655010e46bced95f93af8aa0399ab99d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90a76020e17c053f43a743ace10687079cd1a205cf9575fb3e984350574af413\"" Jul 2 00:01:25.360304 containerd[2015]: time="2024-07-02T00:01:25.359896288Z" level=info msg="StartContainer for \"90a76020e17c053f43a743ace10687079cd1a205cf9575fb3e984350574af413\"" Jul 2 00:01:25.368385 containerd[2015]: time="2024-07-02T00:01:25.368283920Z" level=info msg="CreateContainer within sandbox \"d8afed146e38be201def94a1a6fc570b58a101d4309e7cd197987377a8191665\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1fe5342473f208a54286f21576f119168cb86f707c56a53071fedc9efacfbb18\"" Jul 2 00:01:25.369263 containerd[2015]: time="2024-07-02T00:01:25.369181923Z" level=info msg="StartContainer for \"1fe5342473f208a54286f21576f119168cb86f707c56a53071fedc9efacfbb18\"" Jul 2 00:01:25.415777 systemd[1]: Started cri-containerd-b641494727322eb98d6daad0c6a5fff799c11d739ddfe539c7021d7e0e81b26d.scope - libcontainer container b641494727322eb98d6daad0c6a5fff799c11d739ddfe539c7021d7e0e81b26d. Jul 2 00:01:25.461664 systemd[1]: Started cri-containerd-90a76020e17c053f43a743ace10687079cd1a205cf9575fb3e984350574af413.scope - libcontainer container 90a76020e17c053f43a743ace10687079cd1a205cf9575fb3e984350574af413. 
Jul 2 00:01:25.474689 systemd[1]: Started cri-containerd-1fe5342473f208a54286f21576f119168cb86f707c56a53071fedc9efacfbb18.scope - libcontainer container 1fe5342473f208a54286f21576f119168cb86f707c56a53071fedc9efacfbb18. Jul 2 00:01:25.534122 containerd[2015]: time="2024-07-02T00:01:25.534046890Z" level=info msg="StartContainer for \"b641494727322eb98d6daad0c6a5fff799c11d739ddfe539c7021d7e0e81b26d\" returns successfully" Jul 2 00:01:25.590152 containerd[2015]: time="2024-07-02T00:01:25.589946870Z" level=info msg="StartContainer for \"90a76020e17c053f43a743ace10687079cd1a205cf9575fb3e984350574af413\" returns successfully" Jul 2 00:01:25.625159 containerd[2015]: time="2024-07-02T00:01:25.624839251Z" level=info msg="StartContainer for \"1fe5342473f208a54286f21576f119168cb86f707c56a53071fedc9efacfbb18\" returns successfully" Jul 2 00:01:25.697908 kubelet[2895]: E0702 00:01:25.697864 2895 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.25.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.25.138:6443: connect: connection refused Jul 2 00:01:26.592373 update_engine[1994]: I0702 00:01:26.589395 1994 update_attempter.cc:509] Updating boot flags... 
Jul 2 00:01:26.707424 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3183) Jul 2 00:01:26.852421 kubelet[2895]: I0702 00:01:26.852125 2895 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-138" Jul 2 00:01:29.425234 kubelet[2895]: E0702 00:01:29.425164 2895 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-138\" not found" node="ip-172-31-25-138" Jul 2 00:01:29.454439 kubelet[2895]: I0702 00:01:29.452306 2895 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-25-138" Jul 2 00:01:29.706907 kubelet[2895]: I0702 00:01:29.706626 2895 apiserver.go:52] "Watching apiserver" Jul 2 00:01:29.730384 kubelet[2895]: I0702 00:01:29.730316 2895 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:01:32.224992 systemd[1]: Reloading requested from client PID 3269 ('systemctl') (unit session-9.scope)... Jul 2 00:01:32.225494 systemd[1]: Reloading... Jul 2 00:01:32.416430 zram_generator::config[3310]: No configuration found. Jul 2 00:01:32.648323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:01:32.847261 systemd[1]: Reloading finished in 621 ms. Jul 2 00:01:32.915526 kubelet[2895]: I0702 00:01:32.915079 2895 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:01:32.916203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:01:32.930894 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:01:32.931310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:01:32.931418 systemd[1]: kubelet.service: Consumed 1.850s CPU time, 116.4M memory peak, 0B memory swap peak. 
Jul 2 00:01:32.945895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:01:33.656471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:01:33.676585 (kubelet)[3367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:01:33.805156 kubelet[3367]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:01:33.805156 kubelet[3367]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:01:33.805156 kubelet[3367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:01:33.805719 kubelet[3367]: I0702 00:01:33.805256 3367 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:01:33.814919 kubelet[3367]: I0702 00:01:33.814655 3367 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:01:33.814919 kubelet[3367]: I0702 00:01:33.814709 3367 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:01:33.815333 kubelet[3367]: I0702 00:01:33.815299 3367 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:01:33.819073 kubelet[3367]: I0702 00:01:33.819030 3367 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 00:01:33.831478 kubelet[3367]: I0702 00:01:33.831166 3367 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:01:33.848388 kubelet[3367]: I0702 00:01:33.847653 3367 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:01:33.848388 kubelet[3367]: I0702 00:01:33.848062 3367 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:01:33.848388 kubelet[3367]: I0702 00:01:33.848328 3367 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null
} Jul 2 00:01:33.848783 kubelet[3367]: I0702 00:01:33.848756 3367 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:01:33.848906 kubelet[3367]: I0702 00:01:33.848888 3367 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:01:33.849034 kubelet[3367]: I0702 00:01:33.849016 3367 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:01:33.849321 kubelet[3367]: I0702 00:01:33.849300 3367 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:01:33.849505 kubelet[3367]: I0702 00:01:33.849485 3367 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:01:33.849713 kubelet[3367]: I0702 00:01:33.849690 3367 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:01:33.850095 kubelet[3367]: I0702 00:01:33.849847 3367 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:01:33.860391 kubelet[3367]: I0702 00:01:33.855849 3367 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:01:33.860391 kubelet[3367]: I0702 00:01:33.856435 3367 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:01:33.860391 kubelet[3367]: I0702 00:01:33.859469 3367 server.go:1256] "Started kubelet" Jul 2 00:01:33.864888 kubelet[3367]: I0702 00:01:33.864833 3367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:01:33.886538 kubelet[3367]: I0702 00:01:33.886487 3367 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:01:33.890515 kubelet[3367]: I0702 00:01:33.890472 3367 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:01:33.896257 kubelet[3367]: I0702 00:01:33.896193 3367 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:01:33.896602 kubelet[3367]: I0702 00:01:33.896540 3367 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:01:33.907318 kubelet[3367]: I0702 00:01:33.907190 3367 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:01:33.915093 kubelet[3367]: I0702 00:01:33.915049 3367 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:01:33.915876 kubelet[3367]: I0702 00:01:33.915843 3367 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:01:33.920621 kubelet[3367]: I0702 00:01:33.917800 3367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:01:33.926345 kubelet[3367]: I0702 00:01:33.926293 3367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:01:33.926345 kubelet[3367]: I0702 00:01:33.926377 3367 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:01:33.926620 kubelet[3367]: I0702 00:01:33.926444 3367 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:01:33.926620 kubelet[3367]: E0702 00:01:33.926529 3367 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:01:33.949788 kubelet[3367]: I0702 00:01:33.949753 3367 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:01:33.950549 kubelet[3367]: I0702 00:01:33.950504 3367 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:01:33.960030 kubelet[3367]: E0702 00:01:33.959990 3367 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:01:33.961449 kubelet[3367]: I0702 00:01:33.961335 3367 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:01:34.021438 kubelet[3367]: I0702 00:01:34.021323 3367 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-138" Jul 2 00:01:34.026977 kubelet[3367]: E0702 00:01:34.026882 3367 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:01:34.075165 kubelet[3367]: I0702 00:01:34.074513 3367 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-25-138" Jul 2 00:01:34.077371 kubelet[3367]: I0702 00:01:34.075481 3367 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-25-138" Jul 2 00:01:34.096557 kubelet[3367]: I0702 00:01:34.095254 3367 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:01:34.096557 kubelet[3367]: I0702 00:01:34.095335 3367 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:01:34.096557 kubelet[3367]: I0702 00:01:34.095417 3367 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:01:34.096557 kubelet[3367]: I0702 00:01:34.095955 3367 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:01:34.096557 kubelet[3367]: I0702 00:01:34.096048 3367 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:01:34.096557 kubelet[3367]: I0702 00:01:34.096070 3367 policy_none.go:49] "None policy: Start" Jul 2 00:01:34.099068 kubelet[3367]: I0702 00:01:34.099031 3367 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:01:34.101490 kubelet[3367]: I0702 00:01:34.100461 3367 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:01:34.101490 kubelet[3367]: I0702 00:01:34.100748 3367 state_mem.go:75] "Updated machine memory state" Jul 2 00:01:34.114555 kubelet[3367]: I0702 00:01:34.114009 3367 manager.go:479] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:01:34.116127 kubelet[3367]: I0702 00:01:34.115856 3367 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:01:34.228102 kubelet[3367]: I0702 00:01:34.227956 3367 topology_manager.go:215] "Topology Admit Handler" podUID="c6e2d131d26d078e014ee8e2bea4a97f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-25-138" Jul 2 00:01:34.228220 kubelet[3367]: I0702 00:01:34.228110 3367 topology_manager.go:215] "Topology Admit Handler" podUID="1cd3b218a4d2458ae029402fc5ae4b01" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:34.228220 kubelet[3367]: I0702 00:01:34.228209 3367 topology_manager.go:215] "Topology Admit Handler" podUID="14243f51ad17b4ba12f363db0d865bb5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-25-138" Jul 2 00:01:34.242898 kubelet[3367]: E0702 00:01:34.242826 3367 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-25-138\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:34.326027 kubelet[3367]: I0702 00:01:34.325966 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14243f51ad17b4ba12f363db0d865bb5-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-138\" (UID: \"14243f51ad17b4ba12f363db0d865bb5\") " pod="kube-system/kube-scheduler-ip-172-31-25-138" Jul 2 00:01:34.326182 kubelet[3367]: I0702 00:01:34.326045 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6e2d131d26d078e014ee8e2bea4a97f-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-138\" (UID: \"c6e2d131d26d078e014ee8e2bea4a97f\") " pod="kube-system/kube-apiserver-ip-172-31-25-138" Jul 2 00:01:34.326182 kubelet[3367]: I0702 
00:01:34.326095 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6e2d131d26d078e014ee8e2bea4a97f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-138\" (UID: \"c6e2d131d26d078e014ee8e2bea4a97f\") " pod="kube-system/kube-apiserver-ip-172-31-25-138" Jul 2 00:01:34.326182 kubelet[3367]: I0702 00:01:34.326139 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:34.326409 kubelet[3367]: I0702 00:01:34.326187 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:34.326409 kubelet[3367]: I0702 00:01:34.326229 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6e2d131d26d078e014ee8e2bea4a97f-ca-certs\") pod \"kube-apiserver-ip-172-31-25-138\" (UID: \"c6e2d131d26d078e014ee8e2bea4a97f\") " pod="kube-system/kube-apiserver-ip-172-31-25-138" Jul 2 00:01:34.326409 kubelet[3367]: I0702 00:01:34.326273 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " 
pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:34.326409 kubelet[3367]: I0702 00:01:34.326315 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:34.326409 kubelet[3367]: I0702 00:01:34.326399 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cd3b218a4d2458ae029402fc5ae4b01-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-138\" (UID: \"1cd3b218a4d2458ae029402fc5ae4b01\") " pod="kube-system/kube-controller-manager-ip-172-31-25-138" Jul 2 00:01:34.854390 kubelet[3367]: I0702 00:01:34.852454 3367 apiserver.go:52] "Watching apiserver" Jul 2 00:01:34.916849 kubelet[3367]: I0702 00:01:34.915749 3367 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:01:35.065400 kubelet[3367]: E0702 00:01:35.063837 3367 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-25-138\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-138" Jul 2 00:01:35.216824 kubelet[3367]: I0702 00:01:35.216397 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-138" podStartSLOduration=1.216308162 podStartE2EDuration="1.216308162s" podCreationTimestamp="2024-07-02 00:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:01:35.161677891 +0000 UTC m=+1.474351757" watchObservedRunningTime="2024-07-02 00:01:35.216308162 +0000 UTC m=+1.528982004" Jul 2 00:01:35.255511 
kubelet[3367]: I0702 00:01:35.255203 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-138" podStartSLOduration=1.2551469530000001 podStartE2EDuration="1.255146953s" podCreationTimestamp="2024-07-02 00:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:01:35.219181577 +0000 UTC m=+1.531855443" watchObservedRunningTime="2024-07-02 00:01:35.255146953 +0000 UTC m=+1.567820807" Jul 2 00:01:37.725059 kubelet[3367]: I0702 00:01:37.724997 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-138" podStartSLOduration=6.724940358 podStartE2EDuration="6.724940358s" podCreationTimestamp="2024-07-02 00:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:01:35.256086191 +0000 UTC m=+1.568760069" watchObservedRunningTime="2024-07-02 00:01:37.724940358 +0000 UTC m=+4.037614212" Jul 2 00:01:38.639759 sudo[2363]: pam_unix(sudo:session): session closed for user root Jul 2 00:01:38.663721 sshd[2360]: pam_unix(sshd:session): session closed for user core Jul 2 00:01:38.670970 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:01:38.672289 systemd[1]: sshd@8-172.31.25.138:22-147.75.109.163:37440.service: Deactivated successfully. Jul 2 00:01:38.677345 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:01:38.677886 systemd[1]: session-9.scope: Consumed 10.176s CPU time, 134.4M memory peak, 0B memory swap peak. Jul 2 00:01:38.679738 systemd-logind[1993]: Removed session 9. 
Jul 2 00:01:46.329541 kubelet[3367]: I0702 00:01:46.329043 3367 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:01:46.331556 kubelet[3367]: I0702 00:01:46.330422 3367 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:01:46.331632 containerd[2015]: time="2024-07-02T00:01:46.329656877Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:01:46.927786 kubelet[3367]: I0702 00:01:46.926664 3367 topology_manager.go:215] "Topology Admit Handler" podUID="4eab0c03-a693-4bf2-83b3-5c0a1a745f7f" podNamespace="kube-system" podName="kube-proxy-6n29n" Jul 2 00:01:46.947451 systemd[1]: Created slice kubepods-besteffort-pod4eab0c03_a693_4bf2_83b3_5c0a1a745f7f.slice - libcontainer container kubepods-besteffort-pod4eab0c03_a693_4bf2_83b3_5c0a1a745f7f.slice. Jul 2 00:01:47.003746 kubelet[3367]: I0702 00:01:47.003635 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eab0c03-a693-4bf2-83b3-5c0a1a745f7f-xtables-lock\") pod \"kube-proxy-6n29n\" (UID: \"4eab0c03-a693-4bf2-83b3-5c0a1a745f7f\") " pod="kube-system/kube-proxy-6n29n" Jul 2 00:01:47.003746 kubelet[3367]: I0702 00:01:47.003706 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtr48\" (UniqueName: \"kubernetes.io/projected/4eab0c03-a693-4bf2-83b3-5c0a1a745f7f-kube-api-access-qtr48\") pod \"kube-proxy-6n29n\" (UID: \"4eab0c03-a693-4bf2-83b3-5c0a1a745f7f\") " pod="kube-system/kube-proxy-6n29n" Jul 2 00:01:47.004235 kubelet[3367]: I0702 00:01:47.003983 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4eab0c03-a693-4bf2-83b3-5c0a1a745f7f-kube-proxy\") pod \"kube-proxy-6n29n\" (UID: 
\"4eab0c03-a693-4bf2-83b3-5c0a1a745f7f\") " pod="kube-system/kube-proxy-6n29n" Jul 2 00:01:47.004235 kubelet[3367]: I0702 00:01:47.004043 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eab0c03-a693-4bf2-83b3-5c0a1a745f7f-lib-modules\") pod \"kube-proxy-6n29n\" (UID: \"4eab0c03-a693-4bf2-83b3-5c0a1a745f7f\") " pod="kube-system/kube-proxy-6n29n" Jul 2 00:01:47.117575 kubelet[3367]: E0702 00:01:47.117529 3367 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 00:01:47.117575 kubelet[3367]: E0702 00:01:47.117578 3367 projected.go:200] Error preparing data for projected volume kube-api-access-qtr48 for pod kube-system/kube-proxy-6n29n: configmap "kube-root-ca.crt" not found Jul 2 00:01:47.117793 kubelet[3367]: E0702 00:01:47.117690 3367 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4eab0c03-a693-4bf2-83b3-5c0a1a745f7f-kube-api-access-qtr48 podName:4eab0c03-a693-4bf2-83b3-5c0a1a745f7f nodeName:}" failed. No retries permitted until 2024-07-02 00:01:47.617650541 +0000 UTC m=+13.930324395 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qtr48" (UniqueName: "kubernetes.io/projected/4eab0c03-a693-4bf2-83b3-5c0a1a745f7f-kube-api-access-qtr48") pod "kube-proxy-6n29n" (UID: "4eab0c03-a693-4bf2-83b3-5c0a1a745f7f") : configmap "kube-root-ca.crt" not found Jul 2 00:01:47.467440 kubelet[3367]: I0702 00:01:47.465304 3367 topology_manager.go:215] "Topology Admit Handler" podUID="c3183364-2970-4e05-9e99-ddaf0ebc8c64" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-ltqcv" Jul 2 00:01:47.484004 systemd[1]: Created slice kubepods-besteffort-podc3183364_2970_4e05_9e99_ddaf0ebc8c64.slice - libcontainer container kubepods-besteffort-podc3183364_2970_4e05_9e99_ddaf0ebc8c64.slice. 
Jul 2 00:01:47.508820 kubelet[3367]: I0702 00:01:47.508769 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3183364-2970-4e05-9e99-ddaf0ebc8c64-var-lib-calico\") pod \"tigera-operator-76c4974c85-ltqcv\" (UID: \"c3183364-2970-4e05-9e99-ddaf0ebc8c64\") " pod="tigera-operator/tigera-operator-76c4974c85-ltqcv" Jul 2 00:01:47.511389 kubelet[3367]: I0702 00:01:47.509687 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpll\" (UniqueName: \"kubernetes.io/projected/c3183364-2970-4e05-9e99-ddaf0ebc8c64-kube-api-access-6xpll\") pod \"tigera-operator-76c4974c85-ltqcv\" (UID: \"c3183364-2970-4e05-9e99-ddaf0ebc8c64\") " pod="tigera-operator/tigera-operator-76c4974c85-ltqcv" Jul 2 00:01:47.794620 containerd[2015]: time="2024-07-02T00:01:47.794469853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-ltqcv,Uid:c3183364-2970-4e05-9e99-ddaf0ebc8c64,Namespace:tigera-operator,Attempt:0,}" Jul 2 00:01:47.850291 containerd[2015]: time="2024-07-02T00:01:47.848927818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:47.850291 containerd[2015]: time="2024-07-02T00:01:47.849052352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:47.850291 containerd[2015]: time="2024-07-02T00:01:47.849103050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:47.850291 containerd[2015]: time="2024-07-02T00:01:47.849137077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:47.860689 containerd[2015]: time="2024-07-02T00:01:47.860635258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6n29n,Uid:4eab0c03-a693-4bf2-83b3-5c0a1a745f7f,Namespace:kube-system,Attempt:0,}" Jul 2 00:01:47.896000 systemd[1]: Started cri-containerd-bb43b870e1d5974a08f18acc14f71c5946194d742fcaef65bcefcb5846bde2bd.scope - libcontainer container bb43b870e1d5974a08f18acc14f71c5946194d742fcaef65bcefcb5846bde2bd. Jul 2 00:01:47.941552 containerd[2015]: time="2024-07-02T00:01:47.941309229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:47.941552 containerd[2015]: time="2024-07-02T00:01:47.941476593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:47.941552 containerd[2015]: time="2024-07-02T00:01:47.941521342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:47.941552 containerd[2015]: time="2024-07-02T00:01:47.941555429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:47.981897 systemd[1]: Started cri-containerd-1a49e0e4b011b0ecb0769a6af0dc744ee51d8dd2400256b7a79bff665776277e.scope - libcontainer container 1a49e0e4b011b0ecb0769a6af0dc744ee51d8dd2400256b7a79bff665776277e. 
Jul 2 00:01:47.991925 containerd[2015]: time="2024-07-02T00:01:47.991584251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-ltqcv,Uid:c3183364-2970-4e05-9e99-ddaf0ebc8c64,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bb43b870e1d5974a08f18acc14f71c5946194d742fcaef65bcefcb5846bde2bd\"" Jul 2 00:01:47.999029 containerd[2015]: time="2024-07-02T00:01:47.998954840Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 00:01:48.033890 containerd[2015]: time="2024-07-02T00:01:48.033738315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6n29n,Uid:4eab0c03-a693-4bf2-83b3-5c0a1a745f7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a49e0e4b011b0ecb0769a6af0dc744ee51d8dd2400256b7a79bff665776277e\"" Jul 2 00:01:48.040903 containerd[2015]: time="2024-07-02T00:01:48.040830331Z" level=info msg="CreateContainer within sandbox \"1a49e0e4b011b0ecb0769a6af0dc744ee51d8dd2400256b7a79bff665776277e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:01:48.069671 containerd[2015]: time="2024-07-02T00:01:48.069527464Z" level=info msg="CreateContainer within sandbox \"1a49e0e4b011b0ecb0769a6af0dc744ee51d8dd2400256b7a79bff665776277e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"164f4735c5725868783e4e2202c8a466d303102f70f2b77875e9fbce738bc046\"" Jul 2 00:01:48.071485 containerd[2015]: time="2024-07-02T00:01:48.071432855Z" level=info msg="StartContainer for \"164f4735c5725868783e4e2202c8a466d303102f70f2b77875e9fbce738bc046\"" Jul 2 00:01:48.119669 systemd[1]: Started cri-containerd-164f4735c5725868783e4e2202c8a466d303102f70f2b77875e9fbce738bc046.scope - libcontainer container 164f4735c5725868783e4e2202c8a466d303102f70f2b77875e9fbce738bc046. 
Jul 2 00:01:48.180982 containerd[2015]: time="2024-07-02T00:01:48.180909433Z" level=info msg="StartContainer for \"164f4735c5725868783e4e2202c8a466d303102f70f2b77875e9fbce738bc046\" returns successfully" Jul 2 00:01:49.559115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717135280.mount: Deactivated successfully. Jul 2 00:01:50.364571 containerd[2015]: time="2024-07-02T00:01:50.364506195Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:50.366310 containerd[2015]: time="2024-07-02T00:01:50.366243634Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473590" Jul 2 00:01:50.368098 containerd[2015]: time="2024-07-02T00:01:50.368006224Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:50.372665 containerd[2015]: time="2024-07-02T00:01:50.372598235Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:50.375342 containerd[2015]: time="2024-07-02T00:01:50.375143242Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.376110202s" Jul 2 00:01:50.375342 containerd[2015]: time="2024-07-02T00:01:50.375221107Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 00:01:50.379038 containerd[2015]: 
time="2024-07-02T00:01:50.378980243Z" level=info msg="CreateContainer within sandbox \"bb43b870e1d5974a08f18acc14f71c5946194d742fcaef65bcefcb5846bde2bd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 00:01:50.405443 containerd[2015]: time="2024-07-02T00:01:50.405343738Z" level=info msg="CreateContainer within sandbox \"bb43b870e1d5974a08f18acc14f71c5946194d742fcaef65bcefcb5846bde2bd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cad08ec43bb43c44e2579c1874e49bc09540a5fd990a978a41a640c34eeaf87d\"" Jul 2 00:01:50.405765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484953238.mount: Deactivated successfully. Jul 2 00:01:50.410602 containerd[2015]: time="2024-07-02T00:01:50.406317063Z" level=info msg="StartContainer for \"cad08ec43bb43c44e2579c1874e49bc09540a5fd990a978a41a640c34eeaf87d\"" Jul 2 00:01:50.466671 systemd[1]: Started cri-containerd-cad08ec43bb43c44e2579c1874e49bc09540a5fd990a978a41a640c34eeaf87d.scope - libcontainer container cad08ec43bb43c44e2579c1874e49bc09540a5fd990a978a41a640c34eeaf87d. 
Jul 2 00:01:50.561993 containerd[2015]: time="2024-07-02T00:01:50.561774069Z" level=info msg="StartContainer for \"cad08ec43bb43c44e2579c1874e49bc09540a5fd990a978a41a640c34eeaf87d\" returns successfully" Jul 2 00:01:51.074286 kubelet[3367]: I0702 00:01:51.073833 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6n29n" podStartSLOduration=5.07377321 podStartE2EDuration="5.07377321s" podCreationTimestamp="2024-07-02 00:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:01:49.0726867 +0000 UTC m=+15.385360578" watchObservedRunningTime="2024-07-02 00:01:51.07377321 +0000 UTC m=+17.386447064" Jul 2 00:01:56.642762 kubelet[3367]: I0702 00:01:56.642691 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-ltqcv" podStartSLOduration=7.2624285539999995 podStartE2EDuration="9.642631157s" podCreationTimestamp="2024-07-02 00:01:47 +0000 UTC" firstStartedPulling="2024-07-02 00:01:47.995704813 +0000 UTC m=+14.308378667" lastFinishedPulling="2024-07-02 00:01:50.375907404 +0000 UTC m=+16.688581270" observedRunningTime="2024-07-02 00:01:51.075483375 +0000 UTC m=+17.388157241" watchObservedRunningTime="2024-07-02 00:01:56.642631157 +0000 UTC m=+22.955305047" Jul 2 00:01:56.643746 kubelet[3367]: I0702 00:01:56.642901 3367 topology_manager.go:215] "Topology Admit Handler" podUID="c38759c1-1624-433d-9079-76ada9d9c67b" podNamespace="calico-system" podName="calico-typha-75559bcfb9-k649w" Jul 2 00:01:56.661942 systemd[1]: Created slice kubepods-besteffort-podc38759c1_1624_433d_9079_76ada9d9c67b.slice - libcontainer container kubepods-besteffort-podc38759c1_1624_433d_9079_76ada9d9c67b.slice. 
Jul 2 00:01:56.673752 kubelet[3367]: I0702 00:01:56.673679 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c38759c1-1624-433d-9079-76ada9d9c67b-tigera-ca-bundle\") pod \"calico-typha-75559bcfb9-k649w\" (UID: \"c38759c1-1624-433d-9079-76ada9d9c67b\") " pod="calico-system/calico-typha-75559bcfb9-k649w" Jul 2 00:01:56.674513 kubelet[3367]: I0702 00:01:56.673763 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c38759c1-1624-433d-9079-76ada9d9c67b-typha-certs\") pod \"calico-typha-75559bcfb9-k649w\" (UID: \"c38759c1-1624-433d-9079-76ada9d9c67b\") " pod="calico-system/calico-typha-75559bcfb9-k649w" Jul 2 00:01:56.674513 kubelet[3367]: I0702 00:01:56.673814 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jlwv\" (UniqueName: \"kubernetes.io/projected/c38759c1-1624-433d-9079-76ada9d9c67b-kube-api-access-5jlwv\") pod \"calico-typha-75559bcfb9-k649w\" (UID: \"c38759c1-1624-433d-9079-76ada9d9c67b\") " pod="calico-system/calico-typha-75559bcfb9-k649w" Jul 2 00:01:56.863658 kubelet[3367]: I0702 00:01:56.863581 3367 topology_manager.go:215] "Topology Admit Handler" podUID="1f341b25-0d0c-422c-88b9-4228206b542a" podNamespace="calico-system" podName="calico-node-v4mbz" Jul 2 00:01:56.882492 systemd[1]: Created slice kubepods-besteffort-pod1f341b25_0d0c_422c_88b9_4228206b542a.slice - libcontainer container kubepods-besteffort-pod1f341b25_0d0c_422c_88b9_4228206b542a.slice. 
Jul 2 00:01:56.972636 containerd[2015]: time="2024-07-02T00:01:56.972401642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75559bcfb9-k649w,Uid:c38759c1-1624-433d-9079-76ada9d9c67b,Namespace:calico-system,Attempt:0,}" Jul 2 00:01:56.976622 kubelet[3367]: I0702 00:01:56.975680 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1f341b25-0d0c-422c-88b9-4228206b542a-cni-bin-dir\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.976622 kubelet[3367]: I0702 00:01:56.975784 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1f341b25-0d0c-422c-88b9-4228206b542a-node-certs\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.976622 kubelet[3367]: I0702 00:01:56.975861 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1f341b25-0d0c-422c-88b9-4228206b542a-var-run-calico\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.976622 kubelet[3367]: I0702 00:01:56.975933 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f341b25-0d0c-422c-88b9-4228206b542a-var-lib-calico\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.976622 kubelet[3367]: I0702 00:01:56.976015 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/1f341b25-0d0c-422c-88b9-4228206b542a-cni-log-dir\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.978452 kubelet[3367]: I0702 00:01:56.976109 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1f341b25-0d0c-422c-88b9-4228206b542a-flexvol-driver-host\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.978452 kubelet[3367]: I0702 00:01:56.976185 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f341b25-0d0c-422c-88b9-4228206b542a-xtables-lock\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.978452 kubelet[3367]: I0702 00:01:56.976305 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f341b25-0d0c-422c-88b9-4228206b542a-lib-modules\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.978452 kubelet[3367]: I0702 00:01:56.976464 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1f341b25-0d0c-422c-88b9-4228206b542a-policysync\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.978452 kubelet[3367]: I0702 00:01:56.976552 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxj44\" (UniqueName: 
\"kubernetes.io/projected/1f341b25-0d0c-422c-88b9-4228206b542a-kube-api-access-gxj44\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.979295 kubelet[3367]: I0702 00:01:56.976624 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f341b25-0d0c-422c-88b9-4228206b542a-tigera-ca-bundle\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:56.979295 kubelet[3367]: I0702 00:01:56.976711 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1f341b25-0d0c-422c-88b9-4228206b542a-cni-net-dir\") pod \"calico-node-v4mbz\" (UID: \"1f341b25-0d0c-422c-88b9-4228206b542a\") " pod="calico-system/calico-node-v4mbz" Jul 2 00:01:57.004446 kubelet[3367]: I0702 00:01:57.004217 3367 topology_manager.go:215] "Topology Admit Handler" podUID="8454d6f1-3a3a-451c-993f-c65deaee9160" podNamespace="calico-system" podName="csi-node-driver-fcq78" Jul 2 00:01:57.005134 kubelet[3367]: E0702 00:01:57.004837 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcq78" podUID="8454d6f1-3a3a-451c-993f-c65deaee9160" Jul 2 00:01:57.052646 containerd[2015]: time="2024-07-02T00:01:57.049692130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:57.052646 containerd[2015]: time="2024-07-02T00:01:57.052559140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:57.053603 containerd[2015]: time="2024-07-02T00:01:57.052605677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:57.053603 containerd[2015]: time="2024-07-02T00:01:57.052641599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:57.077131 kubelet[3367]: I0702 00:01:57.076979 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8454d6f1-3a3a-451c-993f-c65deaee9160-registration-dir\") pod \"csi-node-driver-fcq78\" (UID: \"8454d6f1-3a3a-451c-993f-c65deaee9160\") " pod="calico-system/csi-node-driver-fcq78" Jul 2 00:01:57.077131 kubelet[3367]: I0702 00:01:57.077064 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xczw\" (UniqueName: \"kubernetes.io/projected/8454d6f1-3a3a-451c-993f-c65deaee9160-kube-api-access-2xczw\") pod \"csi-node-driver-fcq78\" (UID: \"8454d6f1-3a3a-451c-993f-c65deaee9160\") " pod="calico-system/csi-node-driver-fcq78" Jul 2 00:01:57.077413 kubelet[3367]: I0702 00:01:57.077283 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8454d6f1-3a3a-451c-993f-c65deaee9160-varrun\") pod \"csi-node-driver-fcq78\" (UID: \"8454d6f1-3a3a-451c-993f-c65deaee9160\") " pod="calico-system/csi-node-driver-fcq78" Jul 2 00:01:57.077413 kubelet[3367]: I0702 00:01:57.077330 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8454d6f1-3a3a-451c-993f-c65deaee9160-kubelet-dir\") pod \"csi-node-driver-fcq78\" (UID: \"8454d6f1-3a3a-451c-993f-c65deaee9160\") " 
pod="calico-system/csi-node-driver-fcq78" Jul 2 00:01:57.077528 kubelet[3367]: I0702 00:01:57.077411 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8454d6f1-3a3a-451c-993f-c65deaee9160-socket-dir\") pod \"csi-node-driver-fcq78\" (UID: \"8454d6f1-3a3a-451c-993f-c65deaee9160\") " pod="calico-system/csi-node-driver-fcq78" Jul 2 00:01:57.081045 kubelet[3367]: E0702 00:01:57.080493 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.081045 kubelet[3367]: W0702 00:01:57.080532 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.081045 kubelet[3367]: E0702 00:01:57.080569 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.082234 kubelet[3367]: E0702 00:01:57.081792 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.082234 kubelet[3367]: W0702 00:01:57.081827 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.082234 kubelet[3367]: E0702 00:01:57.082075 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.083594 kubelet[3367]: E0702 00:01:57.083457 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.083594 kubelet[3367]: W0702 00:01:57.083494 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.084974 kubelet[3367]: E0702 00:01:57.084191 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.086649 kubelet[3367]: E0702 00:01:57.085807 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.086649 kubelet[3367]: W0702 00:01:57.085844 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.086649 kubelet[3367]: E0702 00:01:57.085967 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.087142 kubelet[3367]: E0702 00:01:57.086831 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.087142 kubelet[3367]: W0702 00:01:57.086857 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.087142 kubelet[3367]: E0702 00:01:57.086954 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.089294 kubelet[3367]: E0702 00:01:57.088242 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.089294 kubelet[3367]: W0702 00:01:57.088526 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.089294 kubelet[3367]: E0702 00:01:57.088714 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.090048 kubelet[3367]: E0702 00:01:57.089979 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.090048 kubelet[3367]: W0702 00:01:57.090014 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.090670 kubelet[3367]: E0702 00:01:57.090544 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.092070 kubelet[3367]: E0702 00:01:57.091995 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.092070 kubelet[3367]: W0702 00:01:57.092033 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.093636 kubelet[3367]: E0702 00:01:57.093571 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.094364 kubelet[3367]: E0702 00:01:57.094311 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.094555 kubelet[3367]: W0702 00:01:57.094343 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.095001 kubelet[3367]: E0702 00:01:57.094961 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.095117 kubelet[3367]: E0702 00:01:57.095089 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.095117 kubelet[3367]: W0702 00:01:57.095107 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.095457 kubelet[3367]: E0702 00:01:57.095172 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.097142 kubelet[3367]: E0702 00:01:57.097096 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.097142 kubelet[3367]: W0702 00:01:57.097133 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.097496 kubelet[3367]: E0702 00:01:57.097211 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.101188 kubelet[3367]: E0702 00:01:57.101135 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.101188 kubelet[3367]: W0702 00:01:57.101175 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.101188 kubelet[3367]: E0702 00:01:57.101373 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.102278 kubelet[3367]: E0702 00:01:57.101660 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.102278 kubelet[3367]: W0702 00:01:57.101679 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.102278 kubelet[3367]: E0702 00:01:57.101752 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.102278 kubelet[3367]: E0702 00:01:57.102150 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.102278 kubelet[3367]: W0702 00:01:57.102170 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.102578 kubelet[3367]: E0702 00:01:57.102341 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.102912 kubelet[3367]: E0702 00:01:57.102674 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.102912 kubelet[3367]: W0702 00:01:57.102702 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.103720 kubelet[3367]: E0702 00:01:57.103601 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.104381 kubelet[3367]: E0702 00:01:57.104206 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.104381 kubelet[3367]: W0702 00:01:57.104239 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.104848 kubelet[3367]: E0702 00:01:57.104681 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.105813 kubelet[3367]: E0702 00:01:57.105728 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.105813 kubelet[3367]: W0702 00:01:57.105764 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.107854 kubelet[3367]: E0702 00:01:57.106654 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.107854 kubelet[3367]: E0702 00:01:57.107303 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.107854 kubelet[3367]: W0702 00:01:57.107333 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.108204 kubelet[3367]: E0702 00:01:57.108117 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.108681 kubelet[3367]: E0702 00:01:57.108642 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.108681 kubelet[3367]: W0702 00:01:57.108673 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.110048 kubelet[3367]: E0702 00:01:57.110001 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.111534 kubelet[3367]: E0702 00:01:57.111482 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.111534 kubelet[3367]: W0702 00:01:57.111522 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.111874 kubelet[3367]: E0702 00:01:57.111791 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.113384 kubelet[3367]: E0702 00:01:57.112902 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.113384 kubelet[3367]: W0702 00:01:57.112939 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.113551 kubelet[3367]: E0702 00:01:57.113419 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.116404 kubelet[3367]: E0702 00:01:57.114114 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.116404 kubelet[3367]: W0702 00:01:57.114148 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.116404 kubelet[3367]: E0702 00:01:57.114493 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.117023 kubelet[3367]: E0702 00:01:57.116665 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.117023 kubelet[3367]: W0702 00:01:57.116705 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.117023 kubelet[3367]: E0702 00:01:57.116832 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.119608 kubelet[3367]: E0702 00:01:57.118729 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.119608 kubelet[3367]: W0702 00:01:57.118767 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.124015 kubelet[3367]: E0702 00:01:57.123968 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.129063 kubelet[3367]: E0702 00:01:57.126468 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.129063 kubelet[3367]: W0702 00:01:57.126507 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.129063 kubelet[3367]: E0702 00:01:57.126573 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.129063 kubelet[3367]: E0702 00:01:57.129064 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.131980 kubelet[3367]: W0702 00:01:57.129092 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.131980 kubelet[3367]: E0702 00:01:57.131264 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.131980 kubelet[3367]: E0702 00:01:57.131669 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.131980 kubelet[3367]: W0702 00:01:57.131690 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.135814 kubelet[3367]: E0702 00:01:57.132966 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.136699 kubelet[3367]: E0702 00:01:57.136212 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.136699 kubelet[3367]: W0702 00:01:57.136284 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.137544 kubelet[3367]: E0702 00:01:57.137331 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.137678 kubelet[3367]: W0702 00:01:57.137639 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.141951 kubelet[3367]: E0702 00:01:57.141899 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.142095 kubelet[3367]: E0702 00:01:57.141973 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.142095 kubelet[3367]: E0702 00:01:57.142078 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.142227 kubelet[3367]: W0702 00:01:57.142093 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.144938 kubelet[3367]: E0702 00:01:57.144894 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.144938 kubelet[3367]: W0702 00:01:57.144928 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.146748 kubelet[3367]: E0702 00:01:57.146696 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.146748 kubelet[3367]: W0702 00:01:57.146734 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.146945 kubelet[3367]: E0702 00:01:57.146772 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.146945 kubelet[3367]: E0702 00:01:57.146824 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.149540 kubelet[3367]: E0702 00:01:57.148686 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.149540 kubelet[3367]: W0702 00:01:57.148729 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.149540 kubelet[3367]: E0702 00:01:57.148767 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.149540 kubelet[3367]: E0702 00:01:57.149096 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.153840 kubelet[3367]: E0702 00:01:57.153788 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.153840 kubelet[3367]: W0702 00:01:57.153826 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.154074 kubelet[3367]: E0702 00:01:57.153864 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.159189 systemd[1]: Started cri-containerd-dfab556f6ef9d05f0fe5bf88d08ca89f38be64232b386961d0578e795e00b1a9.scope - libcontainer container dfab556f6ef9d05f0fe5bf88d08ca89f38be64232b386961d0578e795e00b1a9. Jul 2 00:01:57.163084 kubelet[3367]: E0702 00:01:57.160492 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.163084 kubelet[3367]: W0702 00:01:57.160528 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.163084 kubelet[3367]: E0702 00:01:57.160566 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.164304 kubelet[3367]: E0702 00:01:57.164236 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.164304 kubelet[3367]: W0702 00:01:57.164296 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.164670 kubelet[3367]: E0702 00:01:57.164332 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.179486 kubelet[3367]: E0702 00:01:57.179438 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.179486 kubelet[3367]: W0702 00:01:57.179474 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.180397 kubelet[3367]: E0702 00:01:57.179509 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.182562 kubelet[3367]: E0702 00:01:57.182499 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.182562 kubelet[3367]: W0702 00:01:57.182537 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.182562 kubelet[3367]: E0702 00:01:57.182573 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.183191 kubelet[3367]: E0702 00:01:57.183142 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.183191 kubelet[3367]: W0702 00:01:57.183172 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.183371 kubelet[3367]: E0702 00:01:57.183328 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.184315 kubelet[3367]: E0702 00:01:57.184261 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.184315 kubelet[3367]: W0702 00:01:57.184295 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.184315 kubelet[3367]: E0702 00:01:57.184340 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.186186 kubelet[3367]: E0702 00:01:57.186141 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.186186 kubelet[3367]: W0702 00:01:57.186177 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.186547 kubelet[3367]: E0702 00:01:57.186478 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.187567 kubelet[3367]: E0702 00:01:57.187504 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.188258 kubelet[3367]: W0702 00:01:57.187656 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.188518 kubelet[3367]: E0702 00:01:57.188440 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.188842 kubelet[3367]: E0702 00:01:57.188727 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.189610 kubelet[3367]: W0702 00:01:57.188757 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.189777 kubelet[3367]: E0702 00:01:57.189635 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.192413 kubelet[3367]: E0702 00:01:57.192264 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.193669 kubelet[3367]: W0702 00:01:57.193445 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.195674 kubelet[3367]: E0702 00:01:57.193987 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.195674 kubelet[3367]: W0702 00:01:57.194021 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.195674 kubelet[3367]: E0702 00:01:57.194637 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.195674 kubelet[3367]: E0702 00:01:57.195474 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.196129 kubelet[3367]: E0702 00:01:57.196087 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.196129 kubelet[3367]: W0702 00:01:57.196122 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.196317 kubelet[3367]: E0702 00:01:57.196269 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.197418 kubelet[3367]: E0702 00:01:57.196780 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.197418 kubelet[3367]: W0702 00:01:57.196835 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.197418 kubelet[3367]: E0702 00:01:57.196929 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.197769 kubelet[3367]: E0702 00:01:57.197520 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.197769 kubelet[3367]: W0702 00:01:57.197543 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.197769 kubelet[3367]: E0702 00:01:57.197676 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.198441 kubelet[3367]: E0702 00:01:57.198399 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.198441 kubelet[3367]: W0702 00:01:57.198432 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.198657 kubelet[3367]: E0702 00:01:57.198622 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.199734 kubelet[3367]: E0702 00:01:57.199684 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.199734 kubelet[3367]: W0702 00:01:57.199721 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.200391 kubelet[3367]: E0702 00:01:57.199982 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.200391 kubelet[3367]: E0702 00:01:57.200287 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.200391 kubelet[3367]: W0702 00:01:57.200306 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.200647 kubelet[3367]: E0702 00:01:57.200559 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.201808 kubelet[3367]: E0702 00:01:57.201760 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.201808 kubelet[3367]: W0702 00:01:57.201796 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.202006 kubelet[3367]: E0702 00:01:57.201982 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.202770 kubelet[3367]: E0702 00:01:57.202238 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.202770 kubelet[3367]: W0702 00:01:57.202255 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.202770 kubelet[3367]: E0702 00:01:57.202469 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.204133 kubelet[3367]: E0702 00:01:57.203521 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.204133 kubelet[3367]: W0702 00:01:57.203579 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.204133 kubelet[3367]: E0702 00:01:57.203683 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.204938 kubelet[3367]: E0702 00:01:57.204814 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.204938 kubelet[3367]: W0702 00:01:57.204876 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.205056 kubelet[3367]: E0702 00:01:57.205034 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.206778 kubelet[3367]: E0702 00:01:57.206034 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.206778 kubelet[3367]: W0702 00:01:57.206573 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.206987 kubelet[3367]: E0702 00:01:57.206853 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.209434 kubelet[3367]: E0702 00:01:57.208448 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.209434 kubelet[3367]: W0702 00:01:57.208484 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.209434 kubelet[3367]: E0702 00:01:57.209308 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.209434 kubelet[3367]: W0702 00:01:57.209332 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.210240 kubelet[3367]: E0702 00:01:57.210120 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.210240 kubelet[3367]: E0702 00:01:57.210190 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.210612 kubelet[3367]: E0702 00:01:57.210425 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.210612 kubelet[3367]: W0702 00:01:57.210446 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.210969 kubelet[3367]: E0702 00:01:57.210707 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.211782 kubelet[3367]: E0702 00:01:57.211435 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.211782 kubelet[3367]: W0702 00:01:57.211480 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.211782 kubelet[3367]: E0702 00:01:57.211548 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.213229 kubelet[3367]: E0702 00:01:57.212375 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.213229 kubelet[3367]: W0702 00:01:57.212409 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.213229 kubelet[3367]: E0702 00:01:57.213155 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.213584 kubelet[3367]: E0702 00:01:57.213510 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.213584 kubelet[3367]: W0702 00:01:57.213530 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.213716 kubelet[3367]: E0702 00:01:57.213701 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.214005 kubelet[3367]: E0702 00:01:57.213968 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.214005 kubelet[3367]: W0702 00:01:57.213997 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.214233 kubelet[3367]: E0702 00:01:57.214055 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.215343 kubelet[3367]: E0702 00:01:57.215287 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.215546 kubelet[3367]: W0702 00:01:57.215338 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.215830 kubelet[3367]: E0702 00:01:57.215716 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.217141 kubelet[3367]: E0702 00:01:57.217042 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.217141 kubelet[3367]: W0702 00:01:57.217086 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.217901 kubelet[3367]: E0702 00:01:57.217481 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.217901 kubelet[3367]: W0702 00:01:57.217510 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.217901 kubelet[3367]: E0702 00:01:57.217541 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.218471 kubelet[3367]: E0702 00:01:57.217541 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.218648 kubelet[3367]: E0702 00:01:57.218625 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.220135 kubelet[3367]: W0702 00:01:57.219498 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.220135 kubelet[3367]: E0702 00:01:57.219546 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.222106 kubelet[3367]: E0702 00:01:57.222065 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.222282 kubelet[3367]: W0702 00:01:57.222254 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.222450 kubelet[3367]: E0702 00:01:57.222427 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:01:57.243966 kubelet[3367]: E0702 00:01:57.243850 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:01:57.244404 kubelet[3367]: W0702 00:01:57.244130 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:01:57.244545 kubelet[3367]: E0702 00:01:57.244522 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:01:57.334716 containerd[2015]: time="2024-07-02T00:01:57.334536737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75559bcfb9-k649w,Uid:c38759c1-1624-433d-9079-76ada9d9c67b,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfab556f6ef9d05f0fe5bf88d08ca89f38be64232b386961d0578e795e00b1a9\"" Jul 2 00:01:57.342672 containerd[2015]: time="2024-07-02T00:01:57.342607776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:01:57.495285 containerd[2015]: time="2024-07-02T00:01:57.495096158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v4mbz,Uid:1f341b25-0d0c-422c-88b9-4228206b542a,Namespace:calico-system,Attempt:0,}" Jul 2 00:01:57.553333 containerd[2015]: time="2024-07-02T00:01:57.552155706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:01:57.553333 containerd[2015]: time="2024-07-02T00:01:57.552258854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:57.553333 containerd[2015]: time="2024-07-02T00:01:57.552468114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:01:57.553333 containerd[2015]: time="2024-07-02T00:01:57.552545727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:01:57.589762 systemd[1]: Started cri-containerd-cd6372204a25a5827ac6dce88b16430533f24a400f8cc0b3060b56b12ea1752f.scope - libcontainer container cd6372204a25a5827ac6dce88b16430533f24a400f8cc0b3060b56b12ea1752f. Jul 2 00:01:57.687586 containerd[2015]: time="2024-07-02T00:01:57.687517292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v4mbz,Uid:1f341b25-0d0c-422c-88b9-4228206b542a,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd6372204a25a5827ac6dce88b16430533f24a400f8cc0b3060b56b12ea1752f\"" Jul 2 00:01:58.928427 kubelet[3367]: E0702 00:01:58.927568 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcq78" podUID="8454d6f1-3a3a-451c-993f-c65deaee9160" Jul 2 00:01:59.813321 containerd[2015]: time="2024-07-02T00:01:59.811798074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:59.814451 containerd[2015]: time="2024-07-02T00:01:59.814065552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 00:01:59.816266 containerd[2015]: time="2024-07-02T00:01:59.816172250Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:59.825627 containerd[2015]: time="2024-07-02T00:01:59.825537081Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:01:59.828297 containerd[2015]: time="2024-07-02T00:01:59.828214286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.485281605s" Jul 2 00:01:59.828297 containerd[2015]: time="2024-07-02T00:01:59.828290100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 00:01:59.829867 containerd[2015]: time="2024-07-02T00:01:59.828988343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:01:59.876154 containerd[2015]: time="2024-07-02T00:01:59.876070778Z" level=info msg="CreateContainer within sandbox \"dfab556f6ef9d05f0fe5bf88d08ca89f38be64232b386961d0578e795e00b1a9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:01:59.910544 containerd[2015]: time="2024-07-02T00:01:59.910469534Z" level=info msg="CreateContainer within sandbox \"dfab556f6ef9d05f0fe5bf88d08ca89f38be64232b386961d0578e795e00b1a9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1562c64b47ea09f4d96354b4a775d08a4cdbf049f1a0fbe56556efc771e77749\"" Jul 2 00:01:59.912016 containerd[2015]: time="2024-07-02T00:01:59.911952233Z" level=info msg="StartContainer for \"1562c64b47ea09f4d96354b4a775d08a4cdbf049f1a0fbe56556efc771e77749\"" Jul 2 00:01:59.988753 systemd[1]: Started cri-containerd-1562c64b47ea09f4d96354b4a775d08a4cdbf049f1a0fbe56556efc771e77749.scope - libcontainer container 
1562c64b47ea09f4d96354b4a775d08a4cdbf049f1a0fbe56556efc771e77749. Jul 2 00:02:00.121077 containerd[2015]: time="2024-07-02T00:02:00.120988379Z" level=info msg="StartContainer for \"1562c64b47ea09f4d96354b4a775d08a4cdbf049f1a0fbe56556efc771e77749\" returns successfully" Jul 2 00:02:00.927607 kubelet[3367]: E0702 00:02:00.927448 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcq78" podUID="8454d6f1-3a3a-451c-993f-c65deaee9160" Jul 2 00:02:01.143314 containerd[2015]: time="2024-07-02T00:02:01.143033620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:01.149267 containerd[2015]: time="2024-07-02T00:02:01.146540030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 00:02:01.150060 containerd[2015]: time="2024-07-02T00:02:01.149795537Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:01.158980 containerd[2015]: time="2024-07-02T00:02:01.158916914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:01.161274 containerd[2015]: time="2024-07-02T00:02:01.161189057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.332132996s" Jul 2 00:02:01.161567 containerd[2015]: time="2024-07-02T00:02:01.161261009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 00:02:01.169224 containerd[2015]: time="2024-07-02T00:02:01.168686255Z" level=info msg="CreateContainer within sandbox \"cd6372204a25a5827ac6dce88b16430533f24a400f8cc0b3060b56b12ea1752f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:02:01.197732 kubelet[3367]: E0702 00:02:01.197579 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.198012 kubelet[3367]: W0702 00:02:01.197979 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.198162 kubelet[3367]: E0702 00:02:01.198140 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.198932 kubelet[3367]: E0702 00:02:01.198886 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.199101 kubelet[3367]: W0702 00:02:01.199072 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.199594 kubelet[3367]: E0702 00:02:01.199234 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.200234 kubelet[3367]: E0702 00:02:01.200147 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.200234 kubelet[3367]: W0702 00:02:01.200179 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.200861 kubelet[3367]: E0702 00:02:01.200586 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.202126 kubelet[3367]: E0702 00:02:01.201721 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.202126 kubelet[3367]: W0702 00:02:01.201773 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.202126 kubelet[3367]: E0702 00:02:01.201809 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.202403 containerd[2015]: time="2024-07-02T00:02:01.202334019Z" level=info msg="CreateContainer within sandbox \"cd6372204a25a5827ac6dce88b16430533f24a400f8cc0b3060b56b12ea1752f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5a34471965bca69fb4f1a1e490dad09738a320f901ae57c1d22a9f5b4ab7f5a6\"" Jul 2 00:02:01.203522 kubelet[3367]: E0702 00:02:01.203106 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.203522 kubelet[3367]: W0702 00:02:01.203135 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.203522 kubelet[3367]: E0702 00:02:01.203193 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.204210 kubelet[3367]: E0702 00:02:01.203934 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.204210 kubelet[3367]: W0702 00:02:01.203959 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.204210 kubelet[3367]: E0702 00:02:01.203989 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.207448 kubelet[3367]: E0702 00:02:01.206483 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.207448 kubelet[3367]: W0702 00:02:01.206539 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.207448 kubelet[3367]: E0702 00:02:01.206575 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.207448 kubelet[3367]: E0702 00:02:01.207138 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.207448 kubelet[3367]: W0702 00:02:01.207161 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.207448 kubelet[3367]: E0702 00:02:01.207194 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.207848 containerd[2015]: time="2024-07-02T00:02:01.204866624Z" level=info msg="StartContainer for \"5a34471965bca69fb4f1a1e490dad09738a320f901ae57c1d22a9f5b4ab7f5a6\"" Jul 2 00:02:01.208471 kubelet[3367]: E0702 00:02:01.208406 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.208673 kubelet[3367]: W0702 00:02:01.208647 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.208845 kubelet[3367]: E0702 00:02:01.208824 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.209723 kubelet[3367]: E0702 00:02:01.209670 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.209907 kubelet[3367]: W0702 00:02:01.209881 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.210097 kubelet[3367]: E0702 00:02:01.210072 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.210873 kubelet[3367]: E0702 00:02:01.210828 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.211207 kubelet[3367]: W0702 00:02:01.211175 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.211499 kubelet[3367]: E0702 00:02:01.211474 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.212186 kubelet[3367]: E0702 00:02:01.212159 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.212368 kubelet[3367]: W0702 00:02:01.212329 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.212535 kubelet[3367]: E0702 00:02:01.212516 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.213052 kubelet[3367]: E0702 00:02:01.213024 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.213444 kubelet[3367]: W0702 00:02:01.213130 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.213444 kubelet[3367]: E0702 00:02:01.213166 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.214029 kubelet[3367]: E0702 00:02:01.213827 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.214029 kubelet[3367]: W0702 00:02:01.213857 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.214029 kubelet[3367]: E0702 00:02:01.213890 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.216524 kubelet[3367]: E0702 00:02:01.216316 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.216524 kubelet[3367]: W0702 00:02:01.216379 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.216524 kubelet[3367]: E0702 00:02:01.216420 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.242725 kubelet[3367]: E0702 00:02:01.242477 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.242725 kubelet[3367]: W0702 00:02:01.242515 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.242725 kubelet[3367]: E0702 00:02:01.242554 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.245158 kubelet[3367]: E0702 00:02:01.243778 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.245158 kubelet[3367]: W0702 00:02:01.243815 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.245934 kubelet[3367]: E0702 00:02:01.245848 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.246517 kubelet[3367]: E0702 00:02:01.246050 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.246517 kubelet[3367]: W0702 00:02:01.246068 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.246517 kubelet[3367]: E0702 00:02:01.246109 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.248312 kubelet[3367]: E0702 00:02:01.247935 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.248312 kubelet[3367]: W0702 00:02:01.247990 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.248312 kubelet[3367]: E0702 00:02:01.248042 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.250381 kubelet[3367]: E0702 00:02:01.250302 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.250381 kubelet[3367]: W0702 00:02:01.250341 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.250381 kubelet[3367]: E0702 00:02:01.250411 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.252708 kubelet[3367]: E0702 00:02:01.252397 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.252708 kubelet[3367]: W0702 00:02:01.252437 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.252708 kubelet[3367]: E0702 00:02:01.252486 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.253445 kubelet[3367]: E0702 00:02:01.253228 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.253445 kubelet[3367]: W0702 00:02:01.253256 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.253445 kubelet[3367]: E0702 00:02:01.253395 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.254335 kubelet[3367]: E0702 00:02:01.254093 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.254335 kubelet[3367]: W0702 00:02:01.254129 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.254335 kubelet[3367]: E0702 00:02:01.254281 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.255980 kubelet[3367]: E0702 00:02:01.255042 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.255980 kubelet[3367]: W0702 00:02:01.255071 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.255980 kubelet[3367]: E0702 00:02:01.255924 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.257107 kubelet[3367]: E0702 00:02:01.256796 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.257107 kubelet[3367]: W0702 00:02:01.256827 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.258181 kubelet[3367]: E0702 00:02:01.257605 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.261345 kubelet[3367]: E0702 00:02:01.261308 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.261743 kubelet[3367]: W0702 00:02:01.261571 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.261916 kubelet[3367]: E0702 00:02:01.261844 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.262590 kubelet[3367]: E0702 00:02:01.262335 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.262590 kubelet[3367]: W0702 00:02:01.262388 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.263616 kubelet[3367]: E0702 00:02:01.263398 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.263869 kubelet[3367]: E0702 00:02:01.263754 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.263869 kubelet[3367]: W0702 00:02:01.263821 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.264174 kubelet[3367]: E0702 00:02:01.264076 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.265080 kubelet[3367]: E0702 00:02:01.265035 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.265080 kubelet[3367]: W0702 00:02:01.265071 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.266407 kubelet[3367]: E0702 00:02:01.266226 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.266407 kubelet[3367]: W0702 00:02:01.266301 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.266804 kubelet[3367]: E0702 00:02:01.266439 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.266804 kubelet[3367]: E0702 00:02:01.266500 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.266905 kubelet[3367]: E0702 00:02:01.266885 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.266964 kubelet[3367]: W0702 00:02:01.266904 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.266964 kubelet[3367]: E0702 00:02:01.266930 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.268878 kubelet[3367]: E0702 00:02:01.268671 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.268878 kubelet[3367]: W0702 00:02:01.268704 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.268878 kubelet[3367]: E0702 00:02:01.268793 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:02:01.269838 kubelet[3367]: E0702 00:02:01.269637 3367 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:02:01.269838 kubelet[3367]: W0702 00:02:01.269670 3367 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:02:01.271007 kubelet[3367]: E0702 00:02:01.269703 3367 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:02:01.296706 systemd[1]: Started cri-containerd-5a34471965bca69fb4f1a1e490dad09738a320f901ae57c1d22a9f5b4ab7f5a6.scope - libcontainer container 5a34471965bca69fb4f1a1e490dad09738a320f901ae57c1d22a9f5b4ab7f5a6. Jul 2 00:02:01.393322 containerd[2015]: time="2024-07-02T00:02:01.391880898Z" level=info msg="StartContainer for \"5a34471965bca69fb4f1a1e490dad09738a320f901ae57c1d22a9f5b4ab7f5a6\" returns successfully" Jul 2 00:02:01.416647 systemd[1]: cri-containerd-5a34471965bca69fb4f1a1e490dad09738a320f901ae57c1d22a9f5b4ab7f5a6.scope: Deactivated successfully. Jul 2 00:02:01.484571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a34471965bca69fb4f1a1e490dad09738a320f901ae57c1d22a9f5b4ab7f5a6-rootfs.mount: Deactivated successfully. 
Jul 2 00:02:02.123956 kubelet[3367]: I0702 00:02:02.123907 3367 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:02:02.150773 kubelet[3367]: I0702 00:02:02.150684 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-75559bcfb9-k649w" podStartSLOduration=3.661534111 podStartE2EDuration="6.150604441s" podCreationTimestamp="2024-07-02 00:01:56 +0000 UTC" firstStartedPulling="2024-07-02 00:01:57.339728472 +0000 UTC m=+23.652402338" lastFinishedPulling="2024-07-02 00:01:59.828798802 +0000 UTC m=+26.141472668" observedRunningTime="2024-07-02 00:02:01.152544406 +0000 UTC m=+27.465218272" watchObservedRunningTime="2024-07-02 00:02:02.150604441 +0000 UTC m=+28.463278295" Jul 2 00:02:02.391885 containerd[2015]: time="2024-07-02T00:02:02.391220789Z" level=info msg="shim disconnected" id=5a34471965bca69fb4f1a1e490dad09738a320f901ae57c1d22a9f5b4ab7f5a6 namespace=k8s.io Jul 2 00:02:02.391885 containerd[2015]: time="2024-07-02T00:02:02.391298691Z" level=warning msg="cleaning up after shim disconnected" id=5a34471965bca69fb4f1a1e490dad09738a320f901ae57c1d22a9f5b4ab7f5a6 namespace=k8s.io Jul 2 00:02:02.391885 containerd[2015]: time="2024-07-02T00:02:02.391321395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:02.927566 kubelet[3367]: E0702 00:02:02.927496 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcq78" podUID="8454d6f1-3a3a-451c-993f-c65deaee9160" Jul 2 00:02:03.135688 containerd[2015]: time="2024-07-02T00:02:03.135538339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:02:04.927804 kubelet[3367]: E0702 00:02:04.927739 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcq78" podUID="8454d6f1-3a3a-451c-993f-c65deaee9160" Jul 2 00:02:06.080398 kubelet[3367]: I0702 00:02:06.080167 3367 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:02:06.927096 kubelet[3367]: E0702 00:02:06.926990 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcq78" podUID="8454d6f1-3a3a-451c-993f-c65deaee9160" Jul 2 00:02:07.055263 containerd[2015]: time="2024-07-02T00:02:07.055180021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:07.056835 containerd[2015]: time="2024-07-02T00:02:07.056755217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 00:02:07.058264 containerd[2015]: time="2024-07-02T00:02:07.058187553Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:07.062760 containerd[2015]: time="2024-07-02T00:02:07.062682016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:07.064181 containerd[2015]: time="2024-07-02T00:02:07.064120685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.928525327s" Jul 2 00:02:07.065661 containerd[2015]: time="2024-07-02T00:02:07.064178124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 00:02:07.069012 containerd[2015]: time="2024-07-02T00:02:07.068886632Z" level=info msg="CreateContainer within sandbox \"cd6372204a25a5827ac6dce88b16430533f24a400f8cc0b3060b56b12ea1752f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:02:07.099429 containerd[2015]: time="2024-07-02T00:02:07.099338499Z" level=info msg="CreateContainer within sandbox \"cd6372204a25a5827ac6dce88b16430533f24a400f8cc0b3060b56b12ea1752f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"439f5d2473ae45fda91b7c871e4c07bea0d57f321b83bec4f3b38b3e44b5d9fe\"" Jul 2 00:02:07.101433 containerd[2015]: time="2024-07-02T00:02:07.100207476Z" level=info msg="StartContainer for \"439f5d2473ae45fda91b7c871e4c07bea0d57f321b83bec4f3b38b3e44b5d9fe\"" Jul 2 00:02:07.172752 systemd[1]: Started cri-containerd-439f5d2473ae45fda91b7c871e4c07bea0d57f321b83bec4f3b38b3e44b5d9fe.scope - libcontainer container 439f5d2473ae45fda91b7c871e4c07bea0d57f321b83bec4f3b38b3e44b5d9fe. Jul 2 00:02:07.227554 containerd[2015]: time="2024-07-02T00:02:07.227186539Z" level=info msg="StartContainer for \"439f5d2473ae45fda91b7c871e4c07bea0d57f321b83bec4f3b38b3e44b5d9fe\" returns successfully" Jul 2 00:02:08.895929 systemd[1]: cri-containerd-439f5d2473ae45fda91b7c871e4c07bea0d57f321b83bec4f3b38b3e44b5d9fe.scope: Deactivated successfully. 
Jul 2 00:02:08.910942 kubelet[3367]: I0702 00:02:08.910883 3367 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:02:08.948326 systemd[1]: Created slice kubepods-besteffort-pod8454d6f1_3a3a_451c_993f_c65deaee9160.slice - libcontainer container kubepods-besteffort-pod8454d6f1_3a3a_451c_993f_c65deaee9160.slice. Jul 2 00:02:08.958165 containerd[2015]: time="2024-07-02T00:02:08.957567689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fcq78,Uid:8454d6f1-3a3a-451c-993f-c65deaee9160,Namespace:calico-system,Attempt:0,}" Jul 2 00:02:08.975986 kubelet[3367]: I0702 00:02:08.975443 3367 topology_manager.go:215] "Topology Admit Handler" podUID="817cb3cd-9ece-480a-be6d-b9c58d6e26ca" podNamespace="kube-system" podName="coredns-76f75df574-5csz7" Jul 2 00:02:08.985715 kubelet[3367]: I0702 00:02:08.985554 3367 topology_manager.go:215] "Topology Admit Handler" podUID="27764a61-fb1a-445e-be33-e807bd376940" podNamespace="kube-system" podName="coredns-76f75df574-m52t5" Jul 2 00:02:08.995224 kubelet[3367]: I0702 00:02:08.995091 3367 topology_manager.go:215] "Topology Admit Handler" podUID="ba54b70a-23ca-47a5-bbe3-92ee55439155" podNamespace="calico-system" podName="calico-kube-controllers-6648c847d6-tr2qf" Jul 2 00:02:08.999188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-439f5d2473ae45fda91b7c871e4c07bea0d57f321b83bec4f3b38b3e44b5d9fe-rootfs.mount: Deactivated successfully. 
Jul 2 00:02:09.018645 kubelet[3367]: I0702 00:02:09.018593 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/817cb3cd-9ece-480a-be6d-b9c58d6e26ca-config-volume\") pod \"coredns-76f75df574-5csz7\" (UID: \"817cb3cd-9ece-480a-be6d-b9c58d6e26ca\") " pod="kube-system/coredns-76f75df574-5csz7" Jul 2 00:02:09.021888 kubelet[3367]: I0702 00:02:09.020872 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstww\" (UniqueName: \"kubernetes.io/projected/817cb3cd-9ece-480a-be6d-b9c58d6e26ca-kube-api-access-nstww\") pod \"coredns-76f75df574-5csz7\" (UID: \"817cb3cd-9ece-480a-be6d-b9c58d6e26ca\") " pod="kube-system/coredns-76f75df574-5csz7" Jul 2 00:02:09.046176 systemd[1]: Created slice kubepods-burstable-pod817cb3cd_9ece_480a_be6d_b9c58d6e26ca.slice - libcontainer container kubepods-burstable-pod817cb3cd_9ece_480a_be6d_b9c58d6e26ca.slice. Jul 2 00:02:09.070727 systemd[1]: Created slice kubepods-burstable-pod27764a61_fb1a_445e_be33_e807bd376940.slice - libcontainer container kubepods-burstable-pod27764a61_fb1a_445e_be33_e807bd376940.slice. Jul 2 00:02:09.101894 systemd[1]: Created slice kubepods-besteffort-podba54b70a_23ca_47a5_bbe3_92ee55439155.slice - libcontainer container kubepods-besteffort-podba54b70a_23ca_47a5_bbe3_92ee55439155.slice. 
Jul 2 00:02:09.122308 kubelet[3367]: I0702 00:02:09.122259 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba54b70a-23ca-47a5-bbe3-92ee55439155-tigera-ca-bundle\") pod \"calico-kube-controllers-6648c847d6-tr2qf\" (UID: \"ba54b70a-23ca-47a5-bbe3-92ee55439155\") " pod="calico-system/calico-kube-controllers-6648c847d6-tr2qf" Jul 2 00:02:09.133159 kubelet[3367]: I0702 00:02:09.123159 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqww\" (UniqueName: \"kubernetes.io/projected/ba54b70a-23ca-47a5-bbe3-92ee55439155-kube-api-access-njqww\") pod \"calico-kube-controllers-6648c847d6-tr2qf\" (UID: \"ba54b70a-23ca-47a5-bbe3-92ee55439155\") " pod="calico-system/calico-kube-controllers-6648c847d6-tr2qf" Jul 2 00:02:09.133159 kubelet[3367]: I0702 00:02:09.123218 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27764a61-fb1a-445e-be33-e807bd376940-config-volume\") pod \"coredns-76f75df574-m52t5\" (UID: \"27764a61-fb1a-445e-be33-e807bd376940\") " pod="kube-system/coredns-76f75df574-m52t5" Jul 2 00:02:09.133159 kubelet[3367]: I0702 00:02:09.123343 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hwgl\" (UniqueName: \"kubernetes.io/projected/27764a61-fb1a-445e-be33-e807bd376940-kube-api-access-5hwgl\") pod \"coredns-76f75df574-m52t5\" (UID: \"27764a61-fb1a-445e-be33-e807bd376940\") " pod="kube-system/coredns-76f75df574-m52t5" Jul 2 00:02:09.202420 containerd[2015]: time="2024-07-02T00:02:09.202105632Z" level=error msg="Failed to destroy network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.202845 containerd[2015]: time="2024-07-02T00:02:09.202778735Z" level=error msg="encountered an error cleaning up failed sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.204580 containerd[2015]: time="2024-07-02T00:02:09.202872636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fcq78,Uid:8454d6f1-3a3a-451c-993f-c65deaee9160,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.204744 kubelet[3367]: E0702 00:02:09.203230 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.204744 kubelet[3367]: E0702 00:02:09.203307 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-fcq78" Jul 2 00:02:09.204744 kubelet[3367]: E0702 00:02:09.203348 3367 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fcq78" Jul 2 00:02:09.204941 kubelet[3367]: E0702 00:02:09.203817 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fcq78_calico-system(8454d6f1-3a3a-451c-993f-c65deaee9160)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fcq78_calico-system(8454d6f1-3a3a-451c-993f-c65deaee9160)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fcq78" podUID="8454d6f1-3a3a-451c-993f-c65deaee9160" Jul 2 00:02:09.210073 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7-shm.mount: Deactivated successfully. 
Jul 2 00:02:09.363291 containerd[2015]: time="2024-07-02T00:02:09.363105652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5csz7,Uid:817cb3cd-9ece-480a-be6d-b9c58d6e26ca,Namespace:kube-system,Attempt:0,}" Jul 2 00:02:09.389540 containerd[2015]: time="2024-07-02T00:02:09.389488074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m52t5,Uid:27764a61-fb1a-445e-be33-e807bd376940,Namespace:kube-system,Attempt:0,}" Jul 2 00:02:09.413128 containerd[2015]: time="2024-07-02T00:02:09.412885226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6648c847d6-tr2qf,Uid:ba54b70a-23ca-47a5-bbe3-92ee55439155,Namespace:calico-system,Attempt:0,}" Jul 2 00:02:09.709192 containerd[2015]: time="2024-07-02T00:02:09.709049147Z" level=info msg="shim disconnected" id=439f5d2473ae45fda91b7c871e4c07bea0d57f321b83bec4f3b38b3e44b5d9fe namespace=k8s.io Jul 2 00:02:09.709192 containerd[2015]: time="2024-07-02T00:02:09.709127432Z" level=warning msg="cleaning up after shim disconnected" id=439f5d2473ae45fda91b7c871e4c07bea0d57f321b83bec4f3b38b3e44b5d9fe namespace=k8s.io Jul 2 00:02:09.709192 containerd[2015]: time="2024-07-02T00:02:09.709150784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:02:09.869006 containerd[2015]: time="2024-07-02T00:02:09.868547410Z" level=error msg="Failed to destroy network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.869856 containerd[2015]: time="2024-07-02T00:02:09.869609323Z" level=error msg="encountered an error cleaning up failed sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.869856 containerd[2015]: time="2024-07-02T00:02:09.869711812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6648c847d6-tr2qf,Uid:ba54b70a-23ca-47a5-bbe3-92ee55439155,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.870974 kubelet[3367]: E0702 00:02:09.870087 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.870974 kubelet[3367]: E0702 00:02:09.870169 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6648c847d6-tr2qf" Jul 2 00:02:09.870974 kubelet[3367]: E0702 00:02:09.870207 3367 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6648c847d6-tr2qf" Jul 2 00:02:09.871232 kubelet[3367]: E0702 00:02:09.870293 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6648c847d6-tr2qf_calico-system(ba54b70a-23ca-47a5-bbe3-92ee55439155)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6648c847d6-tr2qf_calico-system(ba54b70a-23ca-47a5-bbe3-92ee55439155)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6648c847d6-tr2qf" podUID="ba54b70a-23ca-47a5-bbe3-92ee55439155" Jul 2 00:02:09.900083 containerd[2015]: time="2024-07-02T00:02:09.899989357Z" level=error msg="Failed to destroy network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.900814 containerd[2015]: time="2024-07-02T00:02:09.900762851Z" level=error msg="encountered an error cleaning up failed sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.901471 containerd[2015]: time="2024-07-02T00:02:09.901419198Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-5csz7,Uid:817cb3cd-9ece-480a-be6d-b9c58d6e26ca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.902044 kubelet[3367]: E0702 00:02:09.902005 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.902193 kubelet[3367]: E0702 00:02:09.902160 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5csz7" Jul 2 00:02:09.902455 kubelet[3367]: E0702 00:02:09.902417 3367 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5csz7" Jul 2 00:02:09.902991 kubelet[3367]: E0702 00:02:09.902944 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-76f75df574-5csz7_kube-system(817cb3cd-9ece-480a-be6d-b9c58d6e26ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5csz7_kube-system(817cb3cd-9ece-480a-be6d-b9c58d6e26ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5csz7" podUID="817cb3cd-9ece-480a-be6d-b9c58d6e26ca" Jul 2 00:02:09.903692 containerd[2015]: time="2024-07-02T00:02:09.903479204Z" level=error msg="Failed to destroy network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.905289 containerd[2015]: time="2024-07-02T00:02:09.905203953Z" level=error msg="encountered an error cleaning up failed sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.905843 containerd[2015]: time="2024-07-02T00:02:09.905549428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m52t5,Uid:27764a61-fb1a-445e-be33-e807bd376940,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 2 00:02:09.905980 kubelet[3367]: E0702 00:02:09.905954 3367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:09.906056 kubelet[3367]: E0702 00:02:09.906031 3367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m52t5" Jul 2 00:02:09.906118 kubelet[3367]: E0702 00:02:09.906068 3367 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m52t5" Jul 2 00:02:09.906180 kubelet[3367]: E0702 00:02:09.906146 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m52t5_kube-system(27764a61-fb1a-445e-be33-e807bd376940)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m52t5_kube-system(27764a61-fb1a-445e-be33-e807bd376940)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m52t5" podUID="27764a61-fb1a-445e-be33-e807bd376940" Jul 2 00:02:10.179047 containerd[2015]: time="2024-07-02T00:02:10.178876509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:02:10.183579 kubelet[3367]: I0702 00:02:10.182607 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:10.186984 containerd[2015]: time="2024-07-02T00:02:10.186156819Z" level=info msg="StopPodSandbox for \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\"" Jul 2 00:02:10.186984 containerd[2015]: time="2024-07-02T00:02:10.186576957Z" level=info msg="Ensure that sandbox 3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7 in task-service has been cleanup successfully" Jul 2 00:02:10.190642 kubelet[3367]: I0702 00:02:10.190575 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:10.194179 containerd[2015]: time="2024-07-02T00:02:10.192872452Z" level=info msg="StopPodSandbox for \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\"" Jul 2 00:02:10.198571 containerd[2015]: time="2024-07-02T00:02:10.195441806Z" level=info msg="Ensure that sandbox 21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce in task-service has been cleanup successfully" Jul 2 00:02:10.200113 kubelet[3367]: I0702 00:02:10.200053 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:10.206402 containerd[2015]: time="2024-07-02T00:02:10.203662590Z" level=info msg="StopPodSandbox for 
\"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\"" Jul 2 00:02:10.206402 containerd[2015]: time="2024-07-02T00:02:10.204016029Z" level=info msg="Ensure that sandbox df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c in task-service has been cleanup successfully" Jul 2 00:02:10.228647 kubelet[3367]: I0702 00:02:10.228567 3367 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:10.235389 containerd[2015]: time="2024-07-02T00:02:10.234430102Z" level=info msg="StopPodSandbox for \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\"" Jul 2 00:02:10.238689 containerd[2015]: time="2024-07-02T00:02:10.236527553Z" level=info msg="Ensure that sandbox 2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae in task-service has been cleanup successfully" Jul 2 00:02:10.375521 containerd[2015]: time="2024-07-02T00:02:10.375320035Z" level=error msg="StopPodSandbox for \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\" failed" error="failed to destroy network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:10.375898 kubelet[3367]: E0702 00:02:10.375694 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:10.375898 kubelet[3367]: E0702 
00:02:10.375797 3367 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7"} Jul 2 00:02:10.375898 kubelet[3367]: E0702 00:02:10.375865 3367 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8454d6f1-3a3a-451c-993f-c65deaee9160\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:02:10.375898 kubelet[3367]: E0702 00:02:10.375917 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8454d6f1-3a3a-451c-993f-c65deaee9160\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fcq78" podUID="8454d6f1-3a3a-451c-993f-c65deaee9160" Jul 2 00:02:10.428934 containerd[2015]: time="2024-07-02T00:02:10.428636663Z" level=error msg="StopPodSandbox for \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\" failed" error="failed to destroy network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:10.429532 kubelet[3367]: E0702 00:02:10.428958 3367 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:10.429532 kubelet[3367]: E0702 00:02:10.429021 3367 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae"} Jul 2 00:02:10.429532 kubelet[3367]: E0702 00:02:10.429092 3367 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"817cb3cd-9ece-480a-be6d-b9c58d6e26ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:02:10.429532 kubelet[3367]: E0702 00:02:10.429182 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"817cb3cd-9ece-480a-be6d-b9c58d6e26ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5csz7" podUID="817cb3cd-9ece-480a-be6d-b9c58d6e26ca" Jul 2 00:02:10.432581 containerd[2015]: time="2024-07-02T00:02:10.431332015Z" level=error msg="StopPodSandbox for 
\"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\" failed" error="failed to destroy network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:10.433410 kubelet[3367]: E0702 00:02:10.433149 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:10.433410 kubelet[3367]: E0702 00:02:10.433220 3367 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce"} Jul 2 00:02:10.433410 kubelet[3367]: E0702 00:02:10.433288 3367 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba54b70a-23ca-47a5-bbe3-92ee55439155\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:02:10.434189 kubelet[3367]: E0702 00:02:10.433341 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba54b70a-23ca-47a5-bbe3-92ee55439155\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6648c847d6-tr2qf" podUID="ba54b70a-23ca-47a5-bbe3-92ee55439155" Jul 2 00:02:10.449983 containerd[2015]: time="2024-07-02T00:02:10.449820514Z" level=error msg="StopPodSandbox for \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\" failed" error="failed to destroy network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:02:10.450316 kubelet[3367]: E0702 00:02:10.450152 3367 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:10.450316 kubelet[3367]: E0702 00:02:10.450213 3367 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c"} Jul 2 00:02:10.450316 kubelet[3367]: E0702 00:02:10.450274 3367 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27764a61-fb1a-445e-be33-e807bd376940\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:02:10.450640 kubelet[3367]: E0702 00:02:10.450325 3367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27764a61-fb1a-445e-be33-e807bd376940\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m52t5" podUID="27764a61-fb1a-445e-be33-e807bd376940" Jul 2 00:02:15.824839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471601741.mount: Deactivated successfully. Jul 2 00:02:15.901875 containerd[2015]: time="2024-07-02T00:02:15.901324834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:15.902947 containerd[2015]: time="2024-07-02T00:02:15.902764594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 00:02:15.904536 containerd[2015]: time="2024-07-02T00:02:15.904477206Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:15.908743 containerd[2015]: time="2024-07-02T00:02:15.908664935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:15.910222 containerd[2015]: time="2024-07-02T00:02:15.910022931Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 5.729491322s" Jul 2 00:02:15.910222 containerd[2015]: time="2024-07-02T00:02:15.910085108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 00:02:15.942677 containerd[2015]: time="2024-07-02T00:02:15.942598048Z" level=info msg="CreateContainer within sandbox \"cd6372204a25a5827ac6dce88b16430533f24a400f8cc0b3060b56b12ea1752f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:02:15.972041 containerd[2015]: time="2024-07-02T00:02:15.971891917Z" level=info msg="CreateContainer within sandbox \"cd6372204a25a5827ac6dce88b16430533f24a400f8cc0b3060b56b12ea1752f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7f769934d1b2d03849ce5ab65dd551fe74b8af8251893318b6faa0bad361ef45\"" Jul 2 00:02:15.972980 containerd[2015]: time="2024-07-02T00:02:15.972837872Z" level=info msg="StartContainer for \"7f769934d1b2d03849ce5ab65dd551fe74b8af8251893318b6faa0bad361ef45\"" Jul 2 00:02:16.026666 systemd[1]: Started cri-containerd-7f769934d1b2d03849ce5ab65dd551fe74b8af8251893318b6faa0bad361ef45.scope - libcontainer container 7f769934d1b2d03849ce5ab65dd551fe74b8af8251893318b6faa0bad361ef45. Jul 2 00:02:16.098293 containerd[2015]: time="2024-07-02T00:02:16.098153763Z" level=info msg="StartContainer for \"7f769934d1b2d03849ce5ab65dd551fe74b8af8251893318b6faa0bad361ef45\" returns successfully" Jul 2 00:02:16.326482 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:02:16.326669 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 00:02:16.630943 systemd[1]: Started sshd@9-172.31.25.138:22-147.75.109.163:38638.service - OpenSSH per-connection server daemon (147.75.109.163:38638). Jul 2 00:02:16.823245 sshd[4382]: Accepted publickey for core from 147.75.109.163 port 38638 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:16.829612 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:16.844517 systemd-logind[1993]: New session 10 of user core. Jul 2 00:02:16.855099 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:02:17.132945 sshd[4382]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:17.139790 systemd[1]: sshd@9-172.31.25.138:22-147.75.109.163:38638.service: Deactivated successfully. Jul 2 00:02:17.143846 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:02:17.146250 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:02:17.148403 systemd-logind[1993]: Removed session 10. Jul 2 00:02:18.929999 systemd-networkd[1849]: vxlan.calico: Link UP Jul 2 00:02:18.930024 systemd-networkd[1849]: vxlan.calico: Gained carrier Jul 2 00:02:18.930088 (udev-worker)[4368]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:02:18.968458 (udev-worker)[4367]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:02:20.075654 systemd-networkd[1849]: vxlan.calico: Gained IPv6LL Jul 2 00:02:21.928977 containerd[2015]: time="2024-07-02T00:02:21.928429091Z" level=info msg="StopPodSandbox for \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\"" Jul 2 00:02:22.038135 kubelet[3367]: I0702 00:02:22.037919 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-v4mbz" podStartSLOduration=7.819722055 podStartE2EDuration="26.037760757s" podCreationTimestamp="2024-07-02 00:01:56 +0000 UTC" firstStartedPulling="2024-07-02 00:01:57.692657213 +0000 UTC m=+24.005331055" lastFinishedPulling="2024-07-02 00:02:15.910695903 +0000 UTC m=+42.223369757" observedRunningTime="2024-07-02 00:02:16.290267347 +0000 UTC m=+42.602941201" watchObservedRunningTime="2024-07-02 00:02:22.037760757 +0000 UTC m=+48.350434611" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.035 [INFO][4614] k8s.go 608: Cleaning up netns ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.037 [INFO][4614] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" iface="eth0" netns="/var/run/netns/cni-18bd1b5e-73c0-00a4-5ed5-95831bdbadad" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.038 [INFO][4614] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" iface="eth0" netns="/var/run/netns/cni-18bd1b5e-73c0-00a4-5ed5-95831bdbadad" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.039 [INFO][4614] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" iface="eth0" netns="/var/run/netns/cni-18bd1b5e-73c0-00a4-5ed5-95831bdbadad" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.039 [INFO][4614] k8s.go 615: Releasing IP address(es) ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.039 [INFO][4614] utils.go 188: Calico CNI releasing IP address ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.092 [INFO][4620] ipam_plugin.go 411: Releasing address using handleID ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" HandleID="k8s-pod-network.df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.092 [INFO][4620] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.092 [INFO][4620] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.105 [WARNING][4620] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" HandleID="k8s-pod-network.df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.105 [INFO][4620] ipam_plugin.go 439: Releasing address using workloadID ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" HandleID="k8s-pod-network.df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.107 [INFO][4620] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:22.113636 containerd[2015]: 2024-07-02 00:02:22.110 [INFO][4614] k8s.go 621: Teardown processing complete. ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:22.118509 containerd[2015]: time="2024-07-02T00:02:22.118439181Z" level=info msg="TearDown network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\" successfully" Jul 2 00:02:22.118509 containerd[2015]: time="2024-07-02T00:02:22.118499697Z" level=info msg="StopPodSandbox for \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\" returns successfully" Jul 2 00:02:22.120825 containerd[2015]: time="2024-07-02T00:02:22.120380889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m52t5,Uid:27764a61-fb1a-445e-be33-e807bd376940,Namespace:kube-system,Attempt:1,}" Jul 2 00:02:22.121487 systemd[1]: run-netns-cni\x2d18bd1b5e\x2d73c0\x2d00a4\x2d5ed5\x2d95831bdbadad.mount: Deactivated successfully. Jul 2 00:02:22.174965 systemd[1]: Started sshd@10-172.31.25.138:22-147.75.109.163:38644.service - OpenSSH per-connection server daemon (147.75.109.163:38644). 
Jul 2 00:02:22.369770 sshd[4627]: Accepted publickey for core from 147.75.109.163 port 38644 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:22.375920 sshd[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:22.393191 systemd-logind[1993]: New session 11 of user core. Jul 2 00:02:22.400758 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:02:22.465746 systemd-networkd[1849]: calid77c9322c58: Link UP Jul 2 00:02:22.467517 systemd-networkd[1849]: calid77c9322c58: Gained carrier Jul 2 00:02:22.474753 (udev-worker)[4648]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.297 [INFO][4629] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0 coredns-76f75df574- kube-system 27764a61-fb1a-445e-be33-e807bd376940 771 0 2024-07-02 00:01:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-138 coredns-76f75df574-m52t5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid77c9322c58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Namespace="kube-system" Pod="coredns-76f75df574-m52t5" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.297 [INFO][4629] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Namespace="kube-system" Pod="coredns-76f75df574-m52t5" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.380 [INFO][4640] 
ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" HandleID="k8s-pod-network.c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.404 [INFO][4640] ipam_plugin.go 264: Auto assigning IP ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" HandleID="k8s-pod-network.c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028f0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-138", "pod":"coredns-76f75df574-m52t5", "timestamp":"2024-07-02 00:02:22.380394442 +0000 UTC"}, Hostname:"ip-172-31-25-138", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.405 [INFO][4640] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.405 [INFO][4640] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.405 [INFO][4640] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-138' Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.410 [INFO][4640] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" host="ip-172-31-25-138" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.421 [INFO][4640] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-138" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.428 [INFO][4640] ipam.go 489: Trying affinity for 192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.431 [INFO][4640] ipam.go 155: Attempting to load block cidr=192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.434 [INFO][4640] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.435 [INFO][4640] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" host="ip-172-31-25-138" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.437 [INFO][4640] ipam.go 1685: Creating new handle: k8s-pod-network.c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2 Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.444 [INFO][4640] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" host="ip-172-31-25-138" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.454 [INFO][4640] ipam.go 1216: Successfully claimed IPs: [192.168.34.1/26] block=192.168.34.0/26 
handle="k8s-pod-network.c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" host="ip-172-31-25-138" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.454 [INFO][4640] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.1/26] handle="k8s-pod-network.c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" host="ip-172-31-25-138" Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.454 [INFO][4640] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:22.511592 containerd[2015]: 2024-07-02 00:02:22.454 [INFO][4640] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.1/26] IPv6=[] ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" HandleID="k8s-pod-network.c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:22.523284 containerd[2015]: 2024-07-02 00:02:22.460 [INFO][4629] k8s.go 386: Populated endpoint ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Namespace="kube-system" Pod="coredns-76f75df574-m52t5" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"27764a61-fb1a-445e-be33-e807bd376940", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"", Pod:"coredns-76f75df574-m52t5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid77c9322c58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:22.523284 containerd[2015]: 2024-07-02 00:02:22.460 [INFO][4629] k8s.go 387: Calico CNI using IPs: [192.168.34.1/32] ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Namespace="kube-system" Pod="coredns-76f75df574-m52t5" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:22.523284 containerd[2015]: 2024-07-02 00:02:22.460 [INFO][4629] dataplane_linux.go 68: Setting the host side veth name to calid77c9322c58 ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Namespace="kube-system" Pod="coredns-76f75df574-m52t5" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:22.523284 containerd[2015]: 2024-07-02 00:02:22.468 [INFO][4629] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Namespace="kube-system" Pod="coredns-76f75df574-m52t5" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 
00:02:22.523284 containerd[2015]: 2024-07-02 00:02:22.469 [INFO][4629] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Namespace="kube-system" Pod="coredns-76f75df574-m52t5" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"27764a61-fb1a-445e-be33-e807bd376940", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2", Pod:"coredns-76f75df574-m52t5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid77c9322c58", MAC:"a2:09:de:f9:ba:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:22.523284 containerd[2015]: 2024-07-02 00:02:22.491 [INFO][4629] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2" Namespace="kube-system" Pod="coredns-76f75df574-m52t5" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:22.593842 containerd[2015]: time="2024-07-02T00:02:22.593300628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:02:22.593842 containerd[2015]: time="2024-07-02T00:02:22.593454576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:22.593842 containerd[2015]: time="2024-07-02T00:02:22.593504400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:02:22.593842 containerd[2015]: time="2024-07-02T00:02:22.593539344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:22.661715 systemd[1]: Started cri-containerd-c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2.scope - libcontainer container c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2. Jul 2 00:02:22.678224 kubelet[3367]: I0702 00:02:22.677775 3367 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:02:22.793753 sshd[4627]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:22.806763 systemd[1]: sshd@10-172.31.25.138:22-147.75.109.163:38644.service: Deactivated successfully. Jul 2 00:02:22.812844 systemd[1]: session-11.scope: Deactivated successfully. 
Jul 2 00:02:22.816727 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:02:22.825432 systemd-logind[1993]: Removed session 11. Jul 2 00:02:22.844592 containerd[2015]: time="2024-07-02T00:02:22.844480693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m52t5,Uid:27764a61-fb1a-445e-be33-e807bd376940,Namespace:kube-system,Attempt:1,} returns sandbox id \"c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2\"" Jul 2 00:02:22.856480 containerd[2015]: time="2024-07-02T00:02:22.856397821Z" level=info msg="CreateContainer within sandbox \"c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:02:22.907497 containerd[2015]: time="2024-07-02T00:02:22.906865129Z" level=info msg="CreateContainer within sandbox \"c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9aaa7aba94e6f536df0e50f04bc1814e7db6473853b50ba62c7d2bddf929a354\"" Jul 2 00:02:22.909297 containerd[2015]: time="2024-07-02T00:02:22.908643481Z" level=info msg="StartContainer for \"9aaa7aba94e6f536df0e50f04bc1814e7db6473853b50ba62c7d2bddf929a354\"" Jul 2 00:02:22.928318 containerd[2015]: time="2024-07-02T00:02:22.928082629Z" level=info msg="StopPodSandbox for \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\"" Jul 2 00:02:23.050679 systemd[1]: Started cri-containerd-9aaa7aba94e6f536df0e50f04bc1814e7db6473853b50ba62c7d2bddf929a354.scope - libcontainer container 9aaa7aba94e6f536df0e50f04bc1814e7db6473853b50ba62c7d2bddf929a354. 
Jul 2 00:02:23.195221 containerd[2015]: time="2024-07-02T00:02:23.194743438Z" level=info msg="StartContainer for \"9aaa7aba94e6f536df0e50f04bc1814e7db6473853b50ba62c7d2bddf929a354\" returns successfully" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.158 [INFO][4775] k8s.go 608: Cleaning up netns ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.159 [INFO][4775] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" iface="eth0" netns="/var/run/netns/cni-e56c168d-2cdf-0c40-29a7-3bac2c374e25" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.159 [INFO][4775] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" iface="eth0" netns="/var/run/netns/cni-e56c168d-2cdf-0c40-29a7-3bac2c374e25" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.163 [INFO][4775] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" iface="eth0" netns="/var/run/netns/cni-e56c168d-2cdf-0c40-29a7-3bac2c374e25" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.164 [INFO][4775] k8s.go 615: Releasing IP address(es) ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.165 [INFO][4775] utils.go 188: Calico CNI releasing IP address ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.255 [INFO][4809] ipam_plugin.go 411: Releasing address using handleID ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" HandleID="k8s-pod-network.2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.255 [INFO][4809] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.255 [INFO][4809] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.275 [WARNING][4809] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" HandleID="k8s-pod-network.2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.275 [INFO][4809] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" HandleID="k8s-pod-network.2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.279 [INFO][4809] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:23.296296 containerd[2015]: 2024-07-02 00:02:23.286 [INFO][4775] k8s.go 621: Teardown processing complete. ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:23.301265 containerd[2015]: time="2024-07-02T00:02:23.298624211Z" level=info msg="TearDown network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\" successfully" Jul 2 00:02:23.301265 containerd[2015]: time="2024-07-02T00:02:23.298678427Z" level=info msg="StopPodSandbox for \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\" returns successfully" Jul 2 00:02:23.304591 containerd[2015]: time="2024-07-02T00:02:23.301747487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5csz7,Uid:817cb3cd-9ece-480a-be6d-b9c58d6e26ca,Namespace:kube-system,Attempt:1,}" Jul 2 00:02:23.303921 systemd[1]: run-netns-cni\x2de56c168d\x2d2cdf\x2d0c40\x2d29a7\x2d3bac2c374e25.mount: Deactivated successfully. Jul 2 00:02:23.673675 (udev-worker)[4651]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:02:23.674770 systemd-networkd[1849]: cali457f576e2b0: Link UP Jul 2 00:02:23.675207 systemd-networkd[1849]: cali457f576e2b0: Gained carrier Jul 2 00:02:23.692813 kubelet[3367]: I0702 00:02:23.691395 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-m52t5" podStartSLOduration=36.691308085 podStartE2EDuration="36.691308085s" podCreationTimestamp="2024-07-02 00:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:02:23.458863392 +0000 UTC m=+49.771537270" watchObservedRunningTime="2024-07-02 00:02:23.691308085 +0000 UTC m=+50.003981939" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.521 [INFO][4824] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0 coredns-76f75df574- kube-system 817cb3cd-9ece-480a-be6d-b9c58d6e26ca 783 0 2024-07-02 00:01:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-138 coredns-76f75df574-5csz7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali457f576e2b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Namespace="kube-system" Pod="coredns-76f75df574-5csz7" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.521 [INFO][4824] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Namespace="kube-system" Pod="coredns-76f75df574-5csz7" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:23.705899 containerd[2015]: 
2024-07-02 00:02:23.605 [INFO][4837] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" HandleID="k8s-pod-network.0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.623 [INFO][4837] ipam_plugin.go 264: Auto assigning IP ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" HandleID="k8s-pod-network.0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000114640), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-138", "pod":"coredns-76f75df574-5csz7", "timestamp":"2024-07-02 00:02:23.604981717 +0000 UTC"}, Hostname:"ip-172-31-25-138", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.623 [INFO][4837] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.623 [INFO][4837] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.623 [INFO][4837] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-138' Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.626 [INFO][4837] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" host="ip-172-31-25-138" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.633 [INFO][4837] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-138" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.640 [INFO][4837] ipam.go 489: Trying affinity for 192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.643 [INFO][4837] ipam.go 155: Attempting to load block cidr=192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.647 [INFO][4837] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.647 [INFO][4837] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" host="ip-172-31-25-138" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.650 [INFO][4837] ipam.go 1685: Creating new handle: k8s-pod-network.0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.656 [INFO][4837] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" host="ip-172-31-25-138" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.664 [INFO][4837] ipam.go 1216: Successfully claimed IPs: [192.168.34.2/26] block=192.168.34.0/26 
handle="k8s-pod-network.0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" host="ip-172-31-25-138" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.664 [INFO][4837] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.2/26] handle="k8s-pod-network.0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" host="ip-172-31-25-138" Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.664 [INFO][4837] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:23.705899 containerd[2015]: 2024-07-02 00:02:23.664 [INFO][4837] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.2/26] IPv6=[] ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" HandleID="k8s-pod-network.0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:23.707167 containerd[2015]: 2024-07-02 00:02:23.668 [INFO][4824] k8s.go 386: Populated endpoint ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Namespace="kube-system" Pod="coredns-76f75df574-5csz7" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"817cb3cd-9ece-480a-be6d-b9c58d6e26ca", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"", Pod:"coredns-76f75df574-5csz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali457f576e2b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:23.707167 containerd[2015]: 2024-07-02 00:02:23.668 [INFO][4824] k8s.go 387: Calico CNI using IPs: [192.168.34.2/32] ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Namespace="kube-system" Pod="coredns-76f75df574-5csz7" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:23.707167 containerd[2015]: 2024-07-02 00:02:23.669 [INFO][4824] dataplane_linux.go 68: Setting the host side veth name to cali457f576e2b0 ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Namespace="kube-system" Pod="coredns-76f75df574-5csz7" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:23.707167 containerd[2015]: 2024-07-02 00:02:23.674 [INFO][4824] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Namespace="kube-system" Pod="coredns-76f75df574-5csz7" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 
00:02:23.707167 containerd[2015]: 2024-07-02 00:02:23.675 [INFO][4824] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Namespace="kube-system" Pod="coredns-76f75df574-5csz7" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"817cb3cd-9ece-480a-be6d-b9c58d6e26ca", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea", Pod:"coredns-76f75df574-5csz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali457f576e2b0", MAC:"fe:b8:98:06:38:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:23.707167 containerd[2015]: 2024-07-02 00:02:23.695 [INFO][4824] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea" Namespace="kube-system" Pod="coredns-76f75df574-5csz7" WorkloadEndpoint="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:23.767460 containerd[2015]: time="2024-07-02T00:02:23.766968601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:02:23.767460 containerd[2015]: time="2024-07-02T00:02:23.767081545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:23.767977 containerd[2015]: time="2024-07-02T00:02:23.767835613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:02:23.767977 containerd[2015]: time="2024-07-02T00:02:23.767897581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:23.810655 systemd[1]: Started cri-containerd-0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea.scope - libcontainer container 0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea. 
Jul 2 00:02:23.873628 containerd[2015]: time="2024-07-02T00:02:23.873552482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5csz7,Uid:817cb3cd-9ece-480a-be6d-b9c58d6e26ca,Namespace:kube-system,Attempt:1,} returns sandbox id \"0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea\"" Jul 2 00:02:23.879911 containerd[2015]: time="2024-07-02T00:02:23.879818510Z" level=info msg="CreateContainer within sandbox \"0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:02:23.906322 containerd[2015]: time="2024-07-02T00:02:23.906245474Z" level=info msg="CreateContainer within sandbox \"0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc2312d41705d615b7d8bebafb86626d6beba1defb080aa4373c8f39d7e844f3\"" Jul 2 00:02:23.907421 containerd[2015]: time="2024-07-02T00:02:23.907163786Z" level=info msg="StartContainer for \"bc2312d41705d615b7d8bebafb86626d6beba1defb080aa4373c8f39d7e844f3\"" Jul 2 00:02:23.951795 systemd[1]: Started cri-containerd-bc2312d41705d615b7d8bebafb86626d6beba1defb080aa4373c8f39d7e844f3.scope - libcontainer container bc2312d41705d615b7d8bebafb86626d6beba1defb080aa4373c8f39d7e844f3. 
Jul 2 00:02:24.001710 containerd[2015]: time="2024-07-02T00:02:24.001523603Z" level=info msg="StartContainer for \"bc2312d41705d615b7d8bebafb86626d6beba1defb080aa4373c8f39d7e844f3\" returns successfully" Jul 2 00:02:24.357885 kubelet[3367]: I0702 00:02:24.357782 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5csz7" podStartSLOduration=37.356873424 podStartE2EDuration="37.356873424s" podCreationTimestamp="2024-07-02 00:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:02:24.3205878 +0000 UTC m=+50.633261666" watchObservedRunningTime="2024-07-02 00:02:24.356873424 +0000 UTC m=+50.669547278" Jul 2 00:02:24.491619 systemd-networkd[1849]: calid77c9322c58: Gained IPv6LL Jul 2 00:02:24.928349 containerd[2015]: time="2024-07-02T00:02:24.927529431Z" level=info msg="StopPodSandbox for \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\"" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.009 [INFO][4959] k8s.go 608: Cleaning up netns ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.009 [INFO][4959] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" iface="eth0" netns="/var/run/netns/cni-5e30c0c5-fd99-9ee2-7193-07fb3e3cac7e" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.010 [INFO][4959] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" iface="eth0" netns="/var/run/netns/cni-5e30c0c5-fd99-9ee2-7193-07fb3e3cac7e" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.011 [INFO][4959] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" iface="eth0" netns="/var/run/netns/cni-5e30c0c5-fd99-9ee2-7193-07fb3e3cac7e" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.011 [INFO][4959] k8s.go 615: Releasing IP address(es) ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.011 [INFO][4959] utils.go 188: Calico CNI releasing IP address ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.052 [INFO][4965] ipam_plugin.go 411: Releasing address using handleID ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" HandleID="k8s-pod-network.21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.053 [INFO][4965] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.053 [INFO][4965] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.065 [WARNING][4965] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" HandleID="k8s-pod-network.21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.065 [INFO][4965] ipam_plugin.go 439: Releasing address using workloadID ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" HandleID="k8s-pod-network.21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.068 [INFO][4965] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:25.073342 containerd[2015]: 2024-07-02 00:02:25.071 [INFO][4959] k8s.go 621: Teardown processing complete. ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:25.075535 containerd[2015]: time="2024-07-02T00:02:25.075470352Z" level=info msg="TearDown network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\" successfully" Jul 2 00:02:25.075535 containerd[2015]: time="2024-07-02T00:02:25.075527220Z" level=info msg="StopPodSandbox for \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\" returns successfully" Jul 2 00:02:25.080123 containerd[2015]: time="2024-07-02T00:02:25.077544552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6648c847d6-tr2qf,Uid:ba54b70a-23ca-47a5-bbe3-92ee55439155,Namespace:calico-system,Attempt:1,}" Jul 2 00:02:25.080886 systemd[1]: run-netns-cni\x2d5e30c0c5\x2dfd99\x2d9ee2\x2d7193\x2d07fb3e3cac7e.mount: Deactivated successfully. 
Jul 2 00:02:25.132145 systemd-networkd[1849]: cali457f576e2b0: Gained IPv6LL Jul 2 00:02:25.319516 systemd-networkd[1849]: cali8db3a77b3e9: Link UP Jul 2 00:02:25.320984 systemd-networkd[1849]: cali8db3a77b3e9: Gained carrier Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.178 [INFO][4971] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0 calico-kube-controllers-6648c847d6- calico-system ba54b70a-23ca-47a5-bbe3-92ee55439155 810 0 2024-07-02 00:01:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6648c847d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-138 calico-kube-controllers-6648c847d6-tr2qf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8db3a77b3e9 [] []}} ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Namespace="calico-system" Pod="calico-kube-controllers-6648c847d6-tr2qf" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.179 [INFO][4971] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Namespace="calico-system" Pod="calico-kube-controllers-6648c847d6-tr2qf" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.242 [INFO][4982] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" HandleID="k8s-pod-network.7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" 
Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.262 [INFO][4982] ipam_plugin.go 264: Auto assigning IP ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" HandleID="k8s-pod-network.7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c290), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-138", "pod":"calico-kube-controllers-6648c847d6-tr2qf", "timestamp":"2024-07-02 00:02:25.242606797 +0000 UTC"}, Hostname:"ip-172-31-25-138", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.262 [INFO][4982] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.263 [INFO][4982] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.263 [INFO][4982] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-138' Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.266 [INFO][4982] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" host="ip-172-31-25-138" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.273 [INFO][4982] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-138" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.281 [INFO][4982] ipam.go 489: Trying affinity for 192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.284 [INFO][4982] ipam.go 155: Attempting to load block cidr=192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.288 [INFO][4982] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.289 [INFO][4982] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" host="ip-172-31-25-138" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.291 [INFO][4982] ipam.go 1685: Creating new handle: k8s-pod-network.7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.297 [INFO][4982] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" host="ip-172-31-25-138" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.307 [INFO][4982] ipam.go 1216: Successfully claimed IPs: [192.168.34.3/26] block=192.168.34.0/26 
handle="k8s-pod-network.7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" host="ip-172-31-25-138" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.308 [INFO][4982] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.3/26] handle="k8s-pod-network.7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" host="ip-172-31-25-138" Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.308 [INFO][4982] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:25.356280 containerd[2015]: 2024-07-02 00:02:25.309 [INFO][4982] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.3/26] IPv6=[] ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" HandleID="k8s-pod-network.7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.359645 containerd[2015]: 2024-07-02 00:02:25.313 [INFO][4971] k8s.go 386: Populated endpoint ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Namespace="calico-system" Pod="calico-kube-controllers-6648c847d6-tr2qf" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0", GenerateName:"calico-kube-controllers-6648c847d6-", Namespace:"calico-system", SelfLink:"", UID:"ba54b70a-23ca-47a5-bbe3-92ee55439155", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6648c847d6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"", Pod:"calico-kube-controllers-6648c847d6-tr2qf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8db3a77b3e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:25.359645 containerd[2015]: 2024-07-02 00:02:25.314 [INFO][4971] k8s.go 387: Calico CNI using IPs: [192.168.34.3/32] ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Namespace="calico-system" Pod="calico-kube-controllers-6648c847d6-tr2qf" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.359645 containerd[2015]: 2024-07-02 00:02:25.314 [INFO][4971] dataplane_linux.go 68: Setting the host side veth name to cali8db3a77b3e9 ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Namespace="calico-system" Pod="calico-kube-controllers-6648c847d6-tr2qf" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.359645 containerd[2015]: 2024-07-02 00:02:25.320 [INFO][4971] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Namespace="calico-system" Pod="calico-kube-controllers-6648c847d6-tr2qf" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.359645 containerd[2015]: 2024-07-02 00:02:25.322 [INFO][4971] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Namespace="calico-system" Pod="calico-kube-controllers-6648c847d6-tr2qf" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0", GenerateName:"calico-kube-controllers-6648c847d6-", Namespace:"calico-system", SelfLink:"", UID:"ba54b70a-23ca-47a5-bbe3-92ee55439155", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6648c847d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c", Pod:"calico-kube-controllers-6648c847d6-tr2qf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8db3a77b3e9", MAC:"da:76:da:95:09:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:25.359645 containerd[2015]: 2024-07-02 00:02:25.341 [INFO][4971] k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c" Namespace="calico-system" Pod="calico-kube-controllers-6648c847d6-tr2qf" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:25.424041 containerd[2015]: time="2024-07-02T00:02:25.422705774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:02:25.424041 containerd[2015]: time="2024-07-02T00:02:25.423172550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:25.424041 containerd[2015]: time="2024-07-02T00:02:25.423236798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:02:25.424041 containerd[2015]: time="2024-07-02T00:02:25.423284570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:25.488658 systemd[1]: Started cri-containerd-7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c.scope - libcontainer container 7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c. 
Jul 2 00:02:25.555246 containerd[2015]: time="2024-07-02T00:02:25.555194306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6648c847d6-tr2qf,Uid:ba54b70a-23ca-47a5-bbe3-92ee55439155,Namespace:calico-system,Attempt:1,} returns sandbox id \"7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c\"" Jul 2 00:02:25.559170 containerd[2015]: time="2024-07-02T00:02:25.558747902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:02:25.930253 containerd[2015]: time="2024-07-02T00:02:25.929972560Z" level=info msg="StopPodSandbox for \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\"" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.017 [INFO][5058] k8s.go 608: Cleaning up netns ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.017 [INFO][5058] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" iface="eth0" netns="/var/run/netns/cni-92d23124-910c-3a16-6316-502656c6612e" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.018 [INFO][5058] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" iface="eth0" netns="/var/run/netns/cni-92d23124-910c-3a16-6316-502656c6612e" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.018 [INFO][5058] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" iface="eth0" netns="/var/run/netns/cni-92d23124-910c-3a16-6316-502656c6612e" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.018 [INFO][5058] k8s.go 615: Releasing IP address(es) ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.018 [INFO][5058] utils.go 188: Calico CNI releasing IP address ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.067 [INFO][5064] ipam_plugin.go 411: Releasing address using handleID ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" HandleID="k8s-pod-network.3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.068 [INFO][5064] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.068 [INFO][5064] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.080 [WARNING][5064] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" HandleID="k8s-pod-network.3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.080 [INFO][5064] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" HandleID="k8s-pod-network.3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.083 [INFO][5064] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:26.087907 containerd[2015]: 2024-07-02 00:02:26.085 [INFO][5058] k8s.go 621: Teardown processing complete. ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:26.090172 containerd[2015]: time="2024-07-02T00:02:26.088267009Z" level=info msg="TearDown network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\" successfully" Jul 2 00:02:26.090172 containerd[2015]: time="2024-07-02T00:02:26.088307857Z" level=info msg="StopPodSandbox for \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\" returns successfully" Jul 2 00:02:26.093663 containerd[2015]: time="2024-07-02T00:02:26.091080301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fcq78,Uid:8454d6f1-3a3a-451c-993f-c65deaee9160,Namespace:calico-system,Attempt:1,}" Jul 2 00:02:26.096150 systemd[1]: run-netns-cni\x2d92d23124\x2d910c\x2d3a16\x2d6316\x2d502656c6612e.mount: Deactivated successfully. 
Jul 2 00:02:26.323804 systemd-networkd[1849]: cali5ddcef43118: Link UP Jul 2 00:02:26.324247 systemd-networkd[1849]: cali5ddcef43118: Gained carrier Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.185 [INFO][5070] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0 csi-node-driver- calico-system 8454d6f1-3a3a-451c-993f-c65deaee9160 828 0 2024-07-02 00:01:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-25-138 csi-node-driver-fcq78 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali5ddcef43118 [] []}} ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Namespace="calico-system" Pod="csi-node-driver-fcq78" WorkloadEndpoint="ip--172--31--25--138-k8s-csi--node--driver--fcq78-" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.186 [INFO][5070] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Namespace="calico-system" Pod="csi-node-driver-fcq78" WorkloadEndpoint="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.237 [INFO][5081] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" HandleID="k8s-pod-network.806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.258 [INFO][5081] ipam_plugin.go 264: Auto assigning IP 
ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" HandleID="k8s-pod-network.806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316d20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-138", "pod":"csi-node-driver-fcq78", "timestamp":"2024-07-02 00:02:26.237608978 +0000 UTC"}, Hostname:"ip-172-31-25-138", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.259 [INFO][5081] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.259 [INFO][5081] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.259 [INFO][5081] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-138' Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.262 [INFO][5081] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" host="ip-172-31-25-138" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.269 [INFO][5081] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-138" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.277 [INFO][5081] ipam.go 489: Trying affinity for 192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.282 [INFO][5081] ipam.go 155: Attempting to load block cidr=192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.286 [INFO][5081] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 
host="ip-172-31-25-138" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.286 [INFO][5081] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" host="ip-172-31-25-138" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.289 [INFO][5081] ipam.go 1685: Creating new handle: k8s-pod-network.806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721 Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.295 [INFO][5081] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" host="ip-172-31-25-138" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.306 [INFO][5081] ipam.go 1216: Successfully claimed IPs: [192.168.34.4/26] block=192.168.34.0/26 handle="k8s-pod-network.806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" host="ip-172-31-25-138" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.306 [INFO][5081] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.4/26] handle="k8s-pod-network.806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" host="ip-172-31-25-138" Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.306 [INFO][5081] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:02:26.361328 containerd[2015]: 2024-07-02 00:02:26.306 [INFO][5081] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.4/26] IPv6=[] ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" HandleID="k8s-pod-network.806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.362747 containerd[2015]: 2024-07-02 00:02:26.313 [INFO][5070] k8s.go 386: Populated endpoint ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Namespace="calico-system" Pod="csi-node-driver-fcq78" WorkloadEndpoint="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8454d6f1-3a3a-451c-993f-c65deaee9160", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"", Pod:"csi-node-driver-fcq78", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali5ddcef43118", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:26.362747 containerd[2015]: 2024-07-02 00:02:26.314 [INFO][5070] k8s.go 387: Calico CNI using IPs: [192.168.34.4/32] ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Namespace="calico-system" Pod="csi-node-driver-fcq78" WorkloadEndpoint="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.362747 containerd[2015]: 2024-07-02 00:02:26.314 [INFO][5070] dataplane_linux.go 68: Setting the host side veth name to cali5ddcef43118 ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Namespace="calico-system" Pod="csi-node-driver-fcq78" WorkloadEndpoint="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.362747 containerd[2015]: 2024-07-02 00:02:26.318 [INFO][5070] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Namespace="calico-system" Pod="csi-node-driver-fcq78" WorkloadEndpoint="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.362747 containerd[2015]: 2024-07-02 00:02:26.319 [INFO][5070] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Namespace="calico-system" Pod="csi-node-driver-fcq78" WorkloadEndpoint="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8454d6f1-3a3a-451c-993f-c65deaee9160", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 56, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721", Pod:"csi-node-driver-fcq78", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5ddcef43118", MAC:"4e:a4:c7:60:8a:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:26.362747 containerd[2015]: 2024-07-02 00:02:26.357 [INFO][5070] k8s.go 500: Wrote updated endpoint to datastore ContainerID="806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721" Namespace="calico-system" Pod="csi-node-driver-fcq78" WorkloadEndpoint="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:26.406614 containerd[2015]: time="2024-07-02T00:02:26.404990234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:02:26.407841 containerd[2015]: time="2024-07-02T00:02:26.407608622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:26.408219 containerd[2015]: time="2024-07-02T00:02:26.407800274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:02:26.408219 containerd[2015]: time="2024-07-02T00:02:26.408149906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:26.450717 systemd[1]: Started cri-containerd-806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721.scope - libcontainer container 806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721. Jul 2 00:02:26.524892 containerd[2015]: time="2024-07-02T00:02:26.524818683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fcq78,Uid:8454d6f1-3a3a-451c-993f-c65deaee9160,Namespace:calico-system,Attempt:1,} returns sandbox id \"806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721\"" Jul 2 00:02:26.539557 systemd-networkd[1849]: cali8db3a77b3e9: Gained IPv6LL Jul 2 00:02:27.862935 systemd[1]: Started sshd@11-172.31.25.138:22-147.75.109.163:40802.service - OpenSSH per-connection server daemon (147.75.109.163:40802). Jul 2 00:02:28.088489 sshd[5147]: Accepted publickey for core from 147.75.109.163 port 40802 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:28.093459 sshd[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:28.109098 systemd-logind[1993]: New session 12 of user core. Jul 2 00:02:28.118517 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:02:28.331840 systemd-networkd[1849]: cali5ddcef43118: Gained IPv6LL Jul 2 00:02:28.508068 sshd[5147]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:28.518073 systemd[1]: sshd@11-172.31.25.138:22-147.75.109.163:40802.service: Deactivated successfully. Jul 2 00:02:28.528053 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:02:28.538137 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. 
Jul 2 00:02:28.571611 systemd[1]: Started sshd@12-172.31.25.138:22-147.75.109.163:40814.service - OpenSSH per-connection server daemon (147.75.109.163:40814). Jul 2 00:02:28.574115 systemd-logind[1993]: Removed session 12. Jul 2 00:02:28.804868 sshd[5161]: Accepted publickey for core from 147.75.109.163 port 40814 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:28.811995 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:28.831685 systemd-logind[1993]: New session 13 of user core. Jul 2 00:02:28.839017 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:02:29.015751 containerd[2015]: time="2024-07-02T00:02:29.012505755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:29.016871 containerd[2015]: time="2024-07-02T00:02:29.016702227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 00:02:29.022917 containerd[2015]: time="2024-07-02T00:02:29.022330803Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:29.030322 containerd[2015]: time="2024-07-02T00:02:29.030240327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:29.036583 containerd[2015]: time="2024-07-02T00:02:29.036449476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 3.47762889s" Jul 2 00:02:29.036583 containerd[2015]: time="2024-07-02T00:02:29.036519616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 00:02:29.047379 containerd[2015]: time="2024-07-02T00:02:29.044662468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:02:29.117035 containerd[2015]: time="2024-07-02T00:02:29.115273660Z" level=info msg="CreateContainer within sandbox \"7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:02:29.152256 containerd[2015]: time="2024-07-02T00:02:29.152180740Z" level=info msg="CreateContainer within sandbox \"7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d323adaff9fe0c5156e6e53d6eebfcc8704863739dddfa861f3e2973f6421950\"" Jul 2 00:02:29.154646 containerd[2015]: time="2024-07-02T00:02:29.154580632Z" level=info msg="StartContainer for \"d323adaff9fe0c5156e6e53d6eebfcc8704863739dddfa861f3e2973f6421950\"" Jul 2 00:02:29.278718 systemd[1]: Started cri-containerd-d323adaff9fe0c5156e6e53d6eebfcc8704863739dddfa861f3e2973f6421950.scope - libcontainer container d323adaff9fe0c5156e6e53d6eebfcc8704863739dddfa861f3e2973f6421950. Jul 2 00:02:29.473207 sshd[5161]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:29.484450 systemd[1]: sshd@12-172.31.25.138:22-147.75.109.163:40814.service: Deactivated successfully. Jul 2 00:02:29.492831 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:02:29.499701 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit. 
Jul 2 00:02:29.530527 systemd[1]: Started sshd@13-172.31.25.138:22-147.75.109.163:40828.service - OpenSSH per-connection server daemon (147.75.109.163:40828). Jul 2 00:02:29.535704 systemd-logind[1993]: Removed session 13. Jul 2 00:02:29.622800 containerd[2015]: time="2024-07-02T00:02:29.622732410Z" level=info msg="StartContainer for \"d323adaff9fe0c5156e6e53d6eebfcc8704863739dddfa861f3e2973f6421950\" returns successfully" Jul 2 00:02:29.739070 sshd[5211]: Accepted publickey for core from 147.75.109.163 port 40828 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:29.745632 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:29.763637 systemd-logind[1993]: New session 14 of user core. Jul 2 00:02:29.812027 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:02:30.166751 sshd[5211]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:30.176131 systemd[1]: sshd@13-172.31.25.138:22-147.75.109.163:40828.service: Deactivated successfully. Jul 2 00:02:30.183126 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:02:30.186200 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:02:30.189464 systemd-logind[1993]: Removed session 14. 
Jul 2 00:02:30.403956 kubelet[3367]: I0702 00:02:30.402414 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6648c847d6-tr2qf" podStartSLOduration=29.921997612 podStartE2EDuration="33.40232685s" podCreationTimestamp="2024-07-02 00:01:57 +0000 UTC" firstStartedPulling="2024-07-02 00:02:25.557905934 +0000 UTC m=+51.870579788" lastFinishedPulling="2024-07-02 00:02:29.03823516 +0000 UTC m=+55.350909026" observedRunningTime="2024-07-02 00:02:30.399809298 +0000 UTC m=+56.712483176" watchObservedRunningTime="2024-07-02 00:02:30.40232685 +0000 UTC m=+56.715000728" Jul 2 00:02:30.763725 containerd[2015]: time="2024-07-02T00:02:30.763663916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:30.765853 containerd[2015]: time="2024-07-02T00:02:30.765757892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 00:02:30.769949 containerd[2015]: time="2024-07-02T00:02:30.769871144Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:30.772319 ntpd[1986]: Listen normally on 7 vxlan.calico 192.168.34.0:123 Jul 2 00:02:30.774207 ntpd[1986]: 2 Jul 00:02:30 ntpd[1986]: Listen normally on 7 vxlan.calico 192.168.34.0:123 Jul 2 00:02:30.774207 ntpd[1986]: 2 Jul 00:02:30 ntpd[1986]: Listen normally on 8 vxlan.calico [fe80::64bf:cfff:febb:362d%4]:123 Jul 2 00:02:30.774207 ntpd[1986]: 2 Jul 00:02:30 ntpd[1986]: Listen normally on 9 calid77c9322c58 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:02:30.774207 ntpd[1986]: 2 Jul 00:02:30 ntpd[1986]: Listen normally on 10 cali457f576e2b0 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:02:30.774207 ntpd[1986]: 2 Jul 00:02:30 ntpd[1986]: Listen normally on 11 cali8db3a77b3e9 
[fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:02:30.774207 ntpd[1986]: 2 Jul 00:02:30 ntpd[1986]: Listen normally on 12 cali5ddcef43118 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:02:30.772486 ntpd[1986]: Listen normally on 8 vxlan.calico [fe80::64bf:cfff:febb:362d%4]:123 Jul 2 00:02:30.772568 ntpd[1986]: Listen normally on 9 calid77c9322c58 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:02:30.772637 ntpd[1986]: Listen normally on 10 cali457f576e2b0 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:02:30.772704 ntpd[1986]: Listen normally on 11 cali8db3a77b3e9 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:02:30.772771 ntpd[1986]: Listen normally on 12 cali5ddcef43118 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:02:30.777006 containerd[2015]: time="2024-07-02T00:02:30.776752400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:30.780649 containerd[2015]: time="2024-07-02T00:02:30.779544164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.734806852s" Jul 2 00:02:30.780649 containerd[2015]: time="2024-07-02T00:02:30.779612300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 00:02:30.784637 containerd[2015]: time="2024-07-02T00:02:30.784560500Z" level=info msg="CreateContainer within sandbox \"806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:02:30.834676 containerd[2015]: time="2024-07-02T00:02:30.834574676Z" level=info 
msg="CreateContainer within sandbox \"806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dfb29300eb43b0a87fb260387673c98df6273a31e57f318f5eb26bea26525fac\"" Jul 2 00:02:30.835978 containerd[2015]: time="2024-07-02T00:02:30.835775660Z" level=info msg="StartContainer for \"dfb29300eb43b0a87fb260387673c98df6273a31e57f318f5eb26bea26525fac\"" Jul 2 00:02:30.909871 systemd[1]: Started cri-containerd-dfb29300eb43b0a87fb260387673c98df6273a31e57f318f5eb26bea26525fac.scope - libcontainer container dfb29300eb43b0a87fb260387673c98df6273a31e57f318f5eb26bea26525fac. Jul 2 00:02:31.004740 containerd[2015]: time="2024-07-02T00:02:31.004667057Z" level=info msg="StartContainer for \"dfb29300eb43b0a87fb260387673c98df6273a31e57f318f5eb26bea26525fac\" returns successfully" Jul 2 00:02:31.008123 containerd[2015]: time="2024-07-02T00:02:31.008070461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:02:32.620255 containerd[2015]: time="2024-07-02T00:02:32.620180937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:32.623225 containerd[2015]: time="2024-07-02T00:02:32.622385361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 00:02:32.624677 containerd[2015]: time="2024-07-02T00:02:32.624608985Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:32.632988 containerd[2015]: time="2024-07-02T00:02:32.630988329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 2 00:02:32.632988 containerd[2015]: time="2024-07-02T00:02:32.632763045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.624097372s" Jul 2 00:02:32.632988 containerd[2015]: time="2024-07-02T00:02:32.632832585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 00:02:32.639049 containerd[2015]: time="2024-07-02T00:02:32.638902137Z" level=info msg="CreateContainer within sandbox \"806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:02:32.670311 containerd[2015]: time="2024-07-02T00:02:32.669490330Z" level=info msg="CreateContainer within sandbox \"806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6556d40c693f733a0a492d97d32dc02d9cbf3b925ab92be284dfc682a6b242ea\"" Jul 2 00:02:32.674491 containerd[2015]: time="2024-07-02T00:02:32.674424682Z" level=info msg="StartContainer for \"6556d40c693f733a0a492d97d32dc02d9cbf3b925ab92be284dfc682a6b242ea\"" Jul 2 00:02:32.767722 systemd[1]: Started cri-containerd-6556d40c693f733a0a492d97d32dc02d9cbf3b925ab92be284dfc682a6b242ea.scope - libcontainer container 6556d40c693f733a0a492d97d32dc02d9cbf3b925ab92be284dfc682a6b242ea. 
Jul 2 00:02:32.890676 containerd[2015]: time="2024-07-02T00:02:32.890530811Z" level=info msg="StartContainer for \"6556d40c693f733a0a492d97d32dc02d9cbf3b925ab92be284dfc682a6b242ea\" returns successfully" Jul 2 00:02:33.160966 kubelet[3367]: I0702 00:02:33.160715 3367 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:02:33.164396 kubelet[3367]: I0702 00:02:33.161838 3367 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:02:33.959688 containerd[2015]: time="2024-07-02T00:02:33.959628552Z" level=info msg="StopPodSandbox for \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\"" Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.044 [WARNING][5360] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"817cb3cd-9ece-480a-be6d-b9c58d6e26ca", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea", Pod:"coredns-76f75df574-5csz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali457f576e2b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.044 [INFO][5360] k8s.go 608: Cleaning up netns 
ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.044 [INFO][5360] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" iface="eth0" netns="" Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.045 [INFO][5360] k8s.go 615: Releasing IP address(es) ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.045 [INFO][5360] utils.go 188: Calico CNI releasing IP address ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.113 [INFO][5366] ipam_plugin.go 411: Releasing address using handleID ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" HandleID="k8s-pod-network.2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.116 [INFO][5366] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.116 [INFO][5366] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.128 [WARNING][5366] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" HandleID="k8s-pod-network.2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.129 [INFO][5366] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" HandleID="k8s-pod-network.2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.131 [INFO][5366] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:34.139794 containerd[2015]: 2024-07-02 00:02:34.135 [INFO][5360] k8s.go 621: Teardown processing complete. ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:34.143851 containerd[2015]: time="2024-07-02T00:02:34.139788873Z" level=info msg="TearDown network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\" successfully" Jul 2 00:02:34.143851 containerd[2015]: time="2024-07-02T00:02:34.139844277Z" level=info msg="StopPodSandbox for \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\" returns successfully" Jul 2 00:02:34.143851 containerd[2015]: time="2024-07-02T00:02:34.141904941Z" level=info msg="RemovePodSandbox for \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\"" Jul 2 00:02:34.143851 containerd[2015]: time="2024-07-02T00:02:34.141959937Z" level=info msg="Forcibly stopping sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\"" Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.248 [WARNING][5384] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"817cb3cd-9ece-480a-be6d-b9c58d6e26ca", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"0829e65d2ff36a9af713da2e0c71913d8b26f0b03308024376cfe48a796b78ea", Pod:"coredns-76f75df574-5csz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali457f576e2b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.248 [INFO][5384] k8s.go 608: Cleaning up netns 
ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.248 [INFO][5384] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" iface="eth0" netns="" Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.248 [INFO][5384] k8s.go 615: Releasing IP address(es) ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.248 [INFO][5384] utils.go 188: Calico CNI releasing IP address ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.293 [INFO][5390] ipam_plugin.go 411: Releasing address using handleID ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" HandleID="k8s-pod-network.2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.294 [INFO][5390] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.294 [INFO][5390] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.310 [WARNING][5390] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" HandleID="k8s-pod-network.2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.310 [INFO][5390] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" HandleID="k8s-pod-network.2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--5csz7-eth0" Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.312 [INFO][5390] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:34.318937 containerd[2015]: 2024-07-02 00:02:34.315 [INFO][5384] k8s.go 621: Teardown processing complete. ContainerID="2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae" Jul 2 00:02:34.322133 containerd[2015]: time="2024-07-02T00:02:34.318922126Z" level=info msg="TearDown network for sandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\" successfully" Jul 2 00:02:34.326515 containerd[2015]: time="2024-07-02T00:02:34.326303278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:02:34.326861 containerd[2015]: time="2024-07-02T00:02:34.326748154Z" level=info msg="RemovePodSandbox \"2930ee3d39166e0fbe34e2a77ab844b8123b47ccd8f937c8a72e99306f349aae\" returns successfully" Jul 2 00:02:34.328284 containerd[2015]: time="2024-07-02T00:02:34.328088950Z" level=info msg="StopPodSandbox for \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\"" Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.406 [WARNING][5409] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"27764a61-fb1a-445e-be33-e807bd376940", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2", Pod:"coredns-76f75df574-m52t5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid77c9322c58", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.407 [INFO][5409] k8s.go 608: Cleaning up netns ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.407 [INFO][5409] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" iface="eth0" netns="" Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.407 [INFO][5409] k8s.go 615: Releasing IP address(es) ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.407 [INFO][5409] utils.go 188: Calico CNI releasing IP address ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.504 [INFO][5415] ipam_plugin.go 411: Releasing address using handleID ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" HandleID="k8s-pod-network.df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.504 [INFO][5415] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.504 [INFO][5415] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.520 [WARNING][5415] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" HandleID="k8s-pod-network.df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.520 [INFO][5415] ipam_plugin.go 439: Releasing address using workloadID ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" HandleID="k8s-pod-network.df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.522 [INFO][5415] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:34.529667 containerd[2015]: 2024-07-02 00:02:34.526 [INFO][5409] k8s.go 621: Teardown processing complete. 
ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:34.530566 containerd[2015]: time="2024-07-02T00:02:34.529712507Z" level=info msg="TearDown network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\" successfully" Jul 2 00:02:34.530566 containerd[2015]: time="2024-07-02T00:02:34.529750895Z" level=info msg="StopPodSandbox for \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\" returns successfully" Jul 2 00:02:34.532163 containerd[2015]: time="2024-07-02T00:02:34.531537227Z" level=info msg="RemovePodSandbox for \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\"" Jul 2 00:02:34.532163 containerd[2015]: time="2024-07-02T00:02:34.531593423Z" level=info msg="Forcibly stopping sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\"" Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.632 [WARNING][5433] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"27764a61-fb1a-445e-be33-e807bd376940", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"c1eef2de4d5019d3fa2db1c000fb5cc0c7cb32943013c0d5e9b6dd8316edd7d2", Pod:"coredns-76f75df574-m52t5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid77c9322c58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.632 [INFO][5433] k8s.go 608: Cleaning up netns 
ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.632 [INFO][5433] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" iface="eth0" netns="" Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.632 [INFO][5433] k8s.go 615: Releasing IP address(es) ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.632 [INFO][5433] utils.go 188: Calico CNI releasing IP address ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.681 [INFO][5439] ipam_plugin.go 411: Releasing address using handleID ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" HandleID="k8s-pod-network.df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.681 [INFO][5439] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.681 [INFO][5439] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.704 [WARNING][5439] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" HandleID="k8s-pod-network.df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.704 [INFO][5439] ipam_plugin.go 439: Releasing address using workloadID ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" HandleID="k8s-pod-network.df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Workload="ip--172--31--25--138-k8s-coredns--76f75df574--m52t5-eth0" Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.711 [INFO][5439] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:34.720807 containerd[2015]: 2024-07-02 00:02:34.715 [INFO][5433] k8s.go 621: Teardown processing complete. ContainerID="df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c" Jul 2 00:02:34.723615 containerd[2015]: time="2024-07-02T00:02:34.723524208Z" level=info msg="TearDown network for sandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\" successfully" Jul 2 00:02:34.732072 containerd[2015]: time="2024-07-02T00:02:34.731932932Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:02:34.734584 containerd[2015]: time="2024-07-02T00:02:34.732116736Z" level=info msg="RemovePodSandbox \"df23601a9e25e42cf836c0ae8a1ca6a698a423d17df9a3d8bf8c6aff4ac6687c\" returns successfully" Jul 2 00:02:34.734584 containerd[2015]: time="2024-07-02T00:02:34.732742656Z" level=info msg="StopPodSandbox for \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\"" Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.876 [WARNING][5457] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0", GenerateName:"calico-kube-controllers-6648c847d6-", Namespace:"calico-system", SelfLink:"", UID:"ba54b70a-23ca-47a5-bbe3-92ee55439155", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6648c847d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c", Pod:"calico-kube-controllers-6648c847d6-tr2qf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8db3a77b3e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.876 [INFO][5457] k8s.go 608: Cleaning up netns ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.877 [INFO][5457] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" iface="eth0" netns="" Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.877 [INFO][5457] k8s.go 615: Releasing IP address(es) ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.877 [INFO][5457] utils.go 188: Calico CNI releasing IP address ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.936 [INFO][5463] ipam_plugin.go 411: Releasing address using handleID ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" HandleID="k8s-pod-network.21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.937 [INFO][5463] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.937 [INFO][5463] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.951 [WARNING][5463] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" HandleID="k8s-pod-network.21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.951 [INFO][5463] ipam_plugin.go 439: Releasing address using workloadID ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" HandleID="k8s-pod-network.21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.954 [INFO][5463] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:34.959693 containerd[2015]: 2024-07-02 00:02:34.956 [INFO][5457] k8s.go 621: Teardown processing complete. ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:34.961583 containerd[2015]: time="2024-07-02T00:02:34.959740693Z" level=info msg="TearDown network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\" successfully" Jul 2 00:02:34.961583 containerd[2015]: time="2024-07-02T00:02:34.959780629Z" level=info msg="StopPodSandbox for \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\" returns successfully" Jul 2 00:02:34.961583 containerd[2015]: time="2024-07-02T00:02:34.961000753Z" level=info msg="RemovePodSandbox for \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\"" Jul 2 00:02:34.961583 containerd[2015]: time="2024-07-02T00:02:34.961076113Z" level=info msg="Forcibly stopping sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\"" Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.022 [WARNING][5481] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0", GenerateName:"calico-kube-controllers-6648c847d6-", Namespace:"calico-system", SelfLink:"", UID:"ba54b70a-23ca-47a5-bbe3-92ee55439155", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6648c847d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"7901f110fe99f02fe37a725e13438e687220cf1bb0b9cad89f912aab90a4e45c", Pod:"calico-kube-controllers-6648c847d6-tr2qf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8db3a77b3e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.022 [INFO][5481] k8s.go 608: Cleaning up netns ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.022 [INFO][5481] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" iface="eth0" netns="" Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.022 [INFO][5481] k8s.go 615: Releasing IP address(es) ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.023 [INFO][5481] utils.go 188: Calico CNI releasing IP address ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.065 [INFO][5487] ipam_plugin.go 411: Releasing address using handleID ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" HandleID="k8s-pod-network.21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.065 [INFO][5487] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.065 [INFO][5487] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.080 [WARNING][5487] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" HandleID="k8s-pod-network.21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.080 [INFO][5487] ipam_plugin.go 439: Releasing address using workloadID ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" HandleID="k8s-pod-network.21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Workload="ip--172--31--25--138-k8s-calico--kube--controllers--6648c847d6--tr2qf-eth0" Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.084 [INFO][5487] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:35.089334 containerd[2015]: 2024-07-02 00:02:35.086 [INFO][5481] k8s.go 621: Teardown processing complete. ContainerID="21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce" Jul 2 00:02:35.090216 containerd[2015]: time="2024-07-02T00:02:35.089431486Z" level=info msg="TearDown network for sandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\" successfully" Jul 2 00:02:35.096779 containerd[2015]: time="2024-07-02T00:02:35.096713722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:02:35.096939 containerd[2015]: time="2024-07-02T00:02:35.096827566Z" level=info msg="RemovePodSandbox \"21dddc3c8e37f9eaea6e182528008658a68db95b02c6daaeb8a30f26d298afce\" returns successfully" Jul 2 00:02:35.097981 containerd[2015]: time="2024-07-02T00:02:35.097535554Z" level=info msg="StopPodSandbox for \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\"" Jul 2 00:02:35.214481 systemd[1]: Started sshd@14-172.31.25.138:22-147.75.109.163:40236.service - OpenSSH per-connection server daemon (147.75.109.163:40236). Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.212 [WARNING][5505] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8454d6f1-3a3a-451c-993f-c65deaee9160", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721", Pod:"csi-node-driver-fcq78", Endpoint:"eth0", 
ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5ddcef43118", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.213 [INFO][5505] k8s.go 608: Cleaning up netns ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.213 [INFO][5505] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" iface="eth0" netns="" Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.213 [INFO][5505] k8s.go 615: Releasing IP address(es) ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.213 [INFO][5505] utils.go 188: Calico CNI releasing IP address ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.303 [INFO][5513] ipam_plugin.go 411: Releasing address using handleID ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" HandleID="k8s-pod-network.3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.303 [INFO][5513] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.303 [INFO][5513] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.321 [WARNING][5513] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" HandleID="k8s-pod-network.3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.321 [INFO][5513] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" HandleID="k8s-pod-network.3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.325 [INFO][5513] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:35.339574 containerd[2015]: 2024-07-02 00:02:35.332 [INFO][5505] k8s.go 621: Teardown processing complete. ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:35.339574 containerd[2015]: time="2024-07-02T00:02:35.339318611Z" level=info msg="TearDown network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\" successfully" Jul 2 00:02:35.339574 containerd[2015]: time="2024-07-02T00:02:35.339400403Z" level=info msg="StopPodSandbox for \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\" returns successfully" Jul 2 00:02:35.342194 containerd[2015]: time="2024-07-02T00:02:35.340929035Z" level=info msg="RemovePodSandbox for \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\"" Jul 2 00:02:35.342194 containerd[2015]: time="2024-07-02T00:02:35.340989371Z" level=info msg="Forcibly stopping sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\"" Jul 2 00:02:35.443274 sshd[5512]: Accepted publickey for core from 147.75.109.163 port 40236 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:35.446708 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 
00:02:35.458017 systemd-logind[1993]: New session 15 of user core. Jul 2 00:02:35.465688 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.445 [WARNING][5532] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8454d6f1-3a3a-451c-993f-c65deaee9160", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"806319695316de3e3949655a99d769495455043a5f746e6aa51b529194035721", Pod:"csi-node-driver-fcq78", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5ddcef43118", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.446 [INFO][5532] k8s.go 608: 
Cleaning up netns ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.446 [INFO][5532] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" iface="eth0" netns="" Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.446 [INFO][5532] k8s.go 615: Releasing IP address(es) ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.446 [INFO][5532] utils.go 188: Calico CNI releasing IP address ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.497 [INFO][5539] ipam_plugin.go 411: Releasing address using handleID ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" HandleID="k8s-pod-network.3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.497 [INFO][5539] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.497 [INFO][5539] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.510 [WARNING][5539] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" HandleID="k8s-pod-network.3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.510 [INFO][5539] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" HandleID="k8s-pod-network.3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Workload="ip--172--31--25--138-k8s-csi--node--driver--fcq78-eth0" Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.513 [INFO][5539] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:02:35.519181 containerd[2015]: 2024-07-02 00:02:35.516 [INFO][5532] k8s.go 621: Teardown processing complete. ContainerID="3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7" Jul 2 00:02:35.520109 containerd[2015]: time="2024-07-02T00:02:35.519292068Z" level=info msg="TearDown network for sandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\" successfully" Jul 2 00:02:35.524951 containerd[2015]: time="2024-07-02T00:02:35.524855172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:02:35.525486 containerd[2015]: time="2024-07-02T00:02:35.524956512Z" level=info msg="RemovePodSandbox \"3137a3785c7060b01480e1693a67385f75d0fa4b8dccbebf391540bc2cd613b7\" returns successfully" Jul 2 00:02:35.723151 sshd[5512]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:35.730302 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:02:35.731497 systemd[1]: sshd@14-172.31.25.138:22-147.75.109.163:40236.service: Deactivated successfully. 
Jul 2 00:02:35.736879 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:02:35.740894 systemd-logind[1993]: Removed session 15. Jul 2 00:02:40.761609 systemd[1]: Started sshd@15-172.31.25.138:22-147.75.109.163:40252.service - OpenSSH per-connection server daemon (147.75.109.163:40252). Jul 2 00:02:40.947669 sshd[5588]: Accepted publickey for core from 147.75.109.163 port 40252 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:40.950263 sshd[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:40.958881 systemd-logind[1993]: New session 16 of user core. Jul 2 00:02:40.966645 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:02:41.219191 sshd[5588]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:41.224866 systemd[1]: sshd@15-172.31.25.138:22-147.75.109.163:40252.service: Deactivated successfully. Jul 2 00:02:41.229112 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:02:41.232909 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:02:41.235656 systemd-logind[1993]: Removed session 16. Jul 2 00:02:46.260882 systemd[1]: Started sshd@16-172.31.25.138:22-147.75.109.163:39948.service - OpenSSH per-connection server daemon (147.75.109.163:39948). Jul 2 00:02:46.452221 sshd[5602]: Accepted publickey for core from 147.75.109.163 port 39948 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:46.454852 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:46.464086 systemd-logind[1993]: New session 17 of user core. Jul 2 00:02:46.467636 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:02:46.718457 sshd[5602]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:46.723614 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. 
Jul 2 00:02:46.724619 systemd[1]: sshd@16-172.31.25.138:22-147.75.109.163:39948.service: Deactivated successfully. Jul 2 00:02:46.728923 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:02:46.732941 systemd-logind[1993]: Removed session 17. Jul 2 00:02:51.765885 systemd[1]: Started sshd@17-172.31.25.138:22-147.75.109.163:39962.service - OpenSSH per-connection server daemon (147.75.109.163:39962). Jul 2 00:02:51.939607 sshd[5625]: Accepted publickey for core from 147.75.109.163 port 39962 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:51.942139 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:51.950694 systemd-logind[1993]: New session 18 of user core. Jul 2 00:02:51.962651 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:02:52.211093 sshd[5625]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:52.217966 systemd[1]: sshd@17-172.31.25.138:22-147.75.109.163:39962.service: Deactivated successfully. Jul 2 00:02:52.222067 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:02:52.224039 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:02:52.227050 systemd-logind[1993]: Removed session 18. Jul 2 00:02:52.248043 systemd[1]: Started sshd@18-172.31.25.138:22-147.75.109.163:39968.service - OpenSSH per-connection server daemon (147.75.109.163:39968). Jul 2 00:02:52.432722 sshd[5638]: Accepted publickey for core from 147.75.109.163 port 39968 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:52.435832 sshd[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:52.444425 systemd-logind[1993]: New session 19 of user core. Jul 2 00:02:52.452609 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 2 00:02:52.935051 sshd[5638]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:52.941211 systemd[1]: sshd@18-172.31.25.138:22-147.75.109.163:39968.service: Deactivated successfully. Jul 2 00:02:52.946378 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:02:52.950719 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:02:52.952969 systemd-logind[1993]: Removed session 19. Jul 2 00:02:52.975433 systemd[1]: Started sshd@19-172.31.25.138:22-147.75.109.163:36126.service - OpenSSH per-connection server daemon (147.75.109.163:36126). Jul 2 00:02:53.171985 sshd[5674]: Accepted publickey for core from 147.75.109.163 port 36126 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:53.174684 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:53.184482 systemd-logind[1993]: New session 20 of user core. Jul 2 00:02:53.191616 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:02:56.014588 sshd[5674]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:56.021701 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:02:56.028725 systemd[1]: sshd@19-172.31.25.138:22-147.75.109.163:36126.service: Deactivated successfully. Jul 2 00:02:56.034937 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:02:56.066757 systemd[1]: Started sshd@20-172.31.25.138:22-147.75.109.163:36142.service - OpenSSH per-connection server daemon (147.75.109.163:36142). Jul 2 00:02:56.070253 systemd-logind[1993]: Removed session 20. Jul 2 00:02:56.249826 sshd[5694]: Accepted publickey for core from 147.75.109.163 port 36142 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:56.252585 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:56.260690 systemd-logind[1993]: New session 21 of user core. 
Jul 2 00:02:56.266611 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:02:56.772446 sshd[5694]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:56.780993 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:02:56.781247 systemd[1]: sshd@20-172.31.25.138:22-147.75.109.163:36142.service: Deactivated successfully. Jul 2 00:02:56.786486 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:02:56.791836 systemd-logind[1993]: Removed session 21. Jul 2 00:02:56.812884 systemd[1]: Started sshd@21-172.31.25.138:22-147.75.109.163:36148.service - OpenSSH per-connection server daemon (147.75.109.163:36148). Jul 2 00:02:56.994775 sshd[5707]: Accepted publickey for core from 147.75.109.163 port 36148 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:02:56.997423 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:57.006854 systemd-logind[1993]: New session 22 of user core. Jul 2 00:02:57.014641 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:02:57.251807 sshd[5707]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:57.257489 systemd[1]: sshd@21-172.31.25.138:22-147.75.109.163:36148.service: Deactivated successfully. Jul 2 00:02:57.261323 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:02:57.266952 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:02:57.269474 systemd-logind[1993]: Removed session 22. Jul 2 00:03:02.303892 systemd[1]: Started sshd@22-172.31.25.138:22-147.75.109.163:36162.service - OpenSSH per-connection server daemon (147.75.109.163:36162). 
Jul 2 00:03:02.490488 sshd[5731]: Accepted publickey for core from 147.75.109.163 port 36162 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:03:02.493661 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:02.509252 systemd-logind[1993]: New session 23 of user core. Jul 2 00:03:02.514678 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:03:02.801701 sshd[5731]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:02.813251 systemd[1]: sshd@22-172.31.25.138:22-147.75.109.163:36162.service: Deactivated successfully. Jul 2 00:03:02.821096 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:03:02.826049 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:03:02.832488 systemd-logind[1993]: Removed session 23. Jul 2 00:03:04.033120 kubelet[3367]: I0702 00:03:04.033044 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-fcq78" podStartSLOduration=61.926472299 podStartE2EDuration="1m8.032979205s" podCreationTimestamp="2024-07-02 00:01:56 +0000 UTC" firstStartedPulling="2024-07-02 00:02:26.526941003 +0000 UTC m=+52.839614857" lastFinishedPulling="2024-07-02 00:02:32.633447921 +0000 UTC m=+58.946121763" observedRunningTime="2024-07-02 00:02:33.402707901 +0000 UTC m=+59.715381755" watchObservedRunningTime="2024-07-02 00:03:04.032979205 +0000 UTC m=+90.345653071" Jul 2 00:03:04.033877 kubelet[3367]: I0702 00:03:04.033779 3367 topology_manager.go:215] "Topology Admit Handler" podUID="57071790-93c4-4600-bed3-d71c7730af02" podNamespace="calico-apiserver" podName="calico-apiserver-6fdb5b577b-llxp2" Jul 2 00:03:04.054992 systemd[1]: Created slice kubepods-besteffort-pod57071790_93c4_4600_bed3_d71c7730af02.slice - libcontainer container kubepods-besteffort-pod57071790_93c4_4600_bed3_d71c7730af02.slice. 
Jul 2 00:03:04.100823 kubelet[3367]: I0702 00:03:04.100742 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sx7z\" (UniqueName: \"kubernetes.io/projected/57071790-93c4-4600-bed3-d71c7730af02-kube-api-access-4sx7z\") pod \"calico-apiserver-6fdb5b577b-llxp2\" (UID: \"57071790-93c4-4600-bed3-d71c7730af02\") " pod="calico-apiserver/calico-apiserver-6fdb5b577b-llxp2" Jul 2 00:03:04.100992 kubelet[3367]: I0702 00:03:04.100837 3367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57071790-93c4-4600-bed3-d71c7730af02-calico-apiserver-certs\") pod \"calico-apiserver-6fdb5b577b-llxp2\" (UID: \"57071790-93c4-4600-bed3-d71c7730af02\") " pod="calico-apiserver/calico-apiserver-6fdb5b577b-llxp2" Jul 2 00:03:04.201994 kubelet[3367]: E0702 00:03:04.201930 3367 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:03:04.202181 kubelet[3367]: E0702 00:03:04.202050 3367 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57071790-93c4-4600-bed3-d71c7730af02-calico-apiserver-certs podName:57071790-93c4-4600-bed3-d71c7730af02 nodeName:}" failed. No retries permitted until 2024-07-02 00:03:04.702019798 +0000 UTC m=+91.014693640 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/57071790-93c4-4600-bed3-d71c7730af02-calico-apiserver-certs") pod "calico-apiserver-6fdb5b577b-llxp2" (UID: "57071790-93c4-4600-bed3-d71c7730af02") : secret "calico-apiserver-certs" not found Jul 2 00:03:04.704646 kubelet[3367]: E0702 00:03:04.704317 3367 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:03:04.705153 kubelet[3367]: E0702 00:03:04.705119 3367 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57071790-93c4-4600-bed3-d71c7730af02-calico-apiserver-certs podName:57071790-93c4-4600-bed3-d71c7730af02 nodeName:}" failed. No retries permitted until 2024-07-02 00:03:05.704887349 +0000 UTC m=+92.017561203 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/57071790-93c4-4600-bed3-d71c7730af02-calico-apiserver-certs") pod "calico-apiserver-6fdb5b577b-llxp2" (UID: "57071790-93c4-4600-bed3-d71c7730af02") : secret "calico-apiserver-certs" not found Jul 2 00:03:05.861906 containerd[2015]: time="2024-07-02T00:03:05.861834726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdb5b577b-llxp2,Uid:57071790-93c4-4600-bed3-d71c7730af02,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:03:06.109520 systemd-networkd[1849]: cali56af392d2c6: Link UP Jul 2 00:03:06.109939 systemd-networkd[1849]: cali56af392d2c6: Gained carrier Jul 2 00:03:06.118325 (udev-worker)[5771]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:05.961 [INFO][5756] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0 calico-apiserver-6fdb5b577b- calico-apiserver 57071790-93c4-4600-bed3-d71c7730af02 1083 0 2024-07-02 00:03:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fdb5b577b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-138 calico-apiserver-6fdb5b577b-llxp2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56af392d2c6 [] []}} ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Namespace="calico-apiserver" Pod="calico-apiserver-6fdb5b577b-llxp2" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:05.962 [INFO][5756] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Namespace="calico-apiserver" Pod="calico-apiserver-6fdb5b577b-llxp2" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.019 [INFO][5764] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" HandleID="k8s-pod-network.aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Workload="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.043 [INFO][5764] ipam_plugin.go 264: Auto assigning IP ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" 
HandleID="k8s-pod-network.aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Workload="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000283460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-138", "pod":"calico-apiserver-6fdb5b577b-llxp2", "timestamp":"2024-07-02 00:03:06.019761591 +0000 UTC"}, Hostname:"ip-172-31-25-138", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.043 [INFO][5764] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.043 [INFO][5764] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.043 [INFO][5764] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-138' Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.047 [INFO][5764] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" host="ip-172-31-25-138" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.054 [INFO][5764] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-138" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.067 [INFO][5764] ipam.go 489: Trying affinity for 192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.071 [INFO][5764] ipam.go 155: Attempting to load block cidr=192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.077 [INFO][5764] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ip-172-31-25-138" Jul 2 00:03:06.147318 
containerd[2015]: 2024-07-02 00:03:06.077 [INFO][5764] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" host="ip-172-31-25-138" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.080 [INFO][5764] ipam.go 1685: Creating new handle: k8s-pod-network.aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975 Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.089 [INFO][5764] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" host="ip-172-31-25-138" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.099 [INFO][5764] ipam.go 1216: Successfully claimed IPs: [192.168.34.5/26] block=192.168.34.0/26 handle="k8s-pod-network.aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" host="ip-172-31-25-138" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.099 [INFO][5764] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.5/26] handle="k8s-pod-network.aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" host="ip-172-31-25-138" Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.099 [INFO][5764] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:03:06.147318 containerd[2015]: 2024-07-02 00:03:06.099 [INFO][5764] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.5/26] IPv6=[] ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" HandleID="k8s-pod-network.aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Workload="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" Jul 2 00:03:06.151796 containerd[2015]: 2024-07-02 00:03:06.104 [INFO][5756] k8s.go 386: Populated endpoint ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Namespace="calico-apiserver" Pod="calico-apiserver-6fdb5b577b-llxp2" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0", GenerateName:"calico-apiserver-6fdb5b577b-", Namespace:"calico-apiserver", SelfLink:"", UID:"57071790-93c4-4600-bed3-d71c7730af02", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdb5b577b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"", Pod:"calico-apiserver-6fdb5b577b-llxp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56af392d2c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:06.151796 containerd[2015]: 2024-07-02 00:03:06.104 [INFO][5756] k8s.go 387: Calico CNI using IPs: [192.168.34.5/32] ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Namespace="calico-apiserver" Pod="calico-apiserver-6fdb5b577b-llxp2" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" Jul 2 00:03:06.151796 containerd[2015]: 2024-07-02 00:03:06.104 [INFO][5756] dataplane_linux.go 68: Setting the host side veth name to cali56af392d2c6 ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Namespace="calico-apiserver" Pod="calico-apiserver-6fdb5b577b-llxp2" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" Jul 2 00:03:06.151796 containerd[2015]: 2024-07-02 00:03:06.108 [INFO][5756] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Namespace="calico-apiserver" Pod="calico-apiserver-6fdb5b577b-llxp2" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" Jul 2 00:03:06.151796 containerd[2015]: 2024-07-02 00:03:06.110 [INFO][5756] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Namespace="calico-apiserver" Pod="calico-apiserver-6fdb5b577b-llxp2" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0", GenerateName:"calico-apiserver-6fdb5b577b-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"57071790-93c4-4600-bed3-d71c7730af02", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fdb5b577b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-138", ContainerID:"aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975", Pod:"calico-apiserver-6fdb5b577b-llxp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56af392d2c6", MAC:"ce:79:19:a9:86:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:06.151796 containerd[2015]: 2024-07-02 00:03:06.140 [INFO][5756] k8s.go 500: Wrote updated endpoint to datastore ContainerID="aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975" Namespace="calico-apiserver" Pod="calico-apiserver-6fdb5b577b-llxp2" WorkloadEndpoint="ip--172--31--25--138-k8s-calico--apiserver--6fdb5b577b--llxp2-eth0" Jul 2 00:03:06.211051 containerd[2015]: time="2024-07-02T00:03:06.210773308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:06.211466 containerd[2015]: time="2024-07-02T00:03:06.211321276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:06.212086 containerd[2015]: time="2024-07-02T00:03:06.212000620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:06.212857 containerd[2015]: time="2024-07-02T00:03:06.212617204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:06.295660 systemd[1]: Started cri-containerd-aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975.scope - libcontainer container aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975. Jul 2 00:03:06.407578 containerd[2015]: time="2024-07-02T00:03:06.407337653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fdb5b577b-llxp2,Uid:57071790-93c4-4600-bed3-d71c7730af02,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975\"" Jul 2 00:03:06.411857 containerd[2015]: time="2024-07-02T00:03:06.410905025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:03:07.840606 systemd[1]: Started sshd@23-172.31.25.138:22-147.75.109.163:57948.service - OpenSSH per-connection server daemon (147.75.109.163:57948). Jul 2 00:03:08.011766 systemd-networkd[1849]: cali56af392d2c6: Gained IPv6LL Jul 2 00:03:08.038448 sshd[5826]: Accepted publickey for core from 147.75.109.163 port 57948 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:03:08.041864 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:08.054047 systemd-logind[1993]: New session 24 of user core. 
Jul 2 00:03:08.064700 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:03:08.385873 sshd[5826]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:08.396300 systemd[1]: sshd@23-172.31.25.138:22-147.75.109.163:57948.service: Deactivated successfully. Jul 2 00:03:08.401683 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:03:08.404905 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:03:08.408469 systemd-logind[1993]: Removed session 24. Jul 2 00:03:09.463725 systemd[1]: run-containerd-runc-k8s.io-d323adaff9fe0c5156e6e53d6eebfcc8704863739dddfa861f3e2973f6421950-runc.fSQZDn.mount: Deactivated successfully. Jul 2 00:03:09.702477 containerd[2015]: time="2024-07-02T00:03:09.702412954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:09.704415 containerd[2015]: time="2024-07-02T00:03:09.704310370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jul 2 00:03:09.707694 containerd[2015]: time="2024-07-02T00:03:09.707615338Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:09.716287 containerd[2015]: time="2024-07-02T00:03:09.716229406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:09.718302 containerd[2015]: time="2024-07-02T00:03:09.718135702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 3.307169729s" Jul 2 00:03:09.718728 containerd[2015]: time="2024-07-02T00:03:09.718610962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 00:03:09.723579 containerd[2015]: time="2024-07-02T00:03:09.723507430Z" level=info msg="CreateContainer within sandbox \"aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:03:09.744692 containerd[2015]: time="2024-07-02T00:03:09.744301522Z" level=info msg="CreateContainer within sandbox \"aec30c6c4dd568155fb184b57f16067b8ba852efc7ee98a0f8c2ce59ae3ed975\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2386c684091b6b1390860eda3d89082821f909544c835aacbee2e0a9008220a5\"" Jul 2 00:03:09.747123 containerd[2015]: time="2024-07-02T00:03:09.746552566Z" level=info msg="StartContainer for \"2386c684091b6b1390860eda3d89082821f909544c835aacbee2e0a9008220a5\"" Jul 2 00:03:09.810678 systemd[1]: Started cri-containerd-2386c684091b6b1390860eda3d89082821f909544c835aacbee2e0a9008220a5.scope - libcontainer container 2386c684091b6b1390860eda3d89082821f909544c835aacbee2e0a9008220a5. 
Jul 2 00:03:09.905564 containerd[2015]: time="2024-07-02T00:03:09.905482943Z" level=info msg="StartContainer for \"2386c684091b6b1390860eda3d89082821f909544c835aacbee2e0a9008220a5\" returns successfully" Jul 2 00:03:10.772203 ntpd[1986]: Listen normally on 13 cali56af392d2c6 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 2 00:03:10.773213 ntpd[1986]: 2 Jul 00:03:10 ntpd[1986]: Listen normally on 13 cali56af392d2c6 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 2 00:03:11.386387 kubelet[3367]: I0702 00:03:11.386292 3367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fdb5b577b-llxp2" podStartSLOduration=5.076921961 podStartE2EDuration="8.386203006s" podCreationTimestamp="2024-07-02 00:03:03 +0000 UTC" firstStartedPulling="2024-07-02 00:03:06.410163437 +0000 UTC m=+92.722837291" lastFinishedPulling="2024-07-02 00:03:09.719444482 +0000 UTC m=+96.032118336" observedRunningTime="2024-07-02 00:03:10.544610542 +0000 UTC m=+96.857284420" watchObservedRunningTime="2024-07-02 00:03:11.386203006 +0000 UTC m=+97.698876872" Jul 2 00:03:13.427903 systemd[1]: Started sshd@24-172.31.25.138:22-147.75.109.163:56614.service - OpenSSH per-connection server daemon (147.75.109.163:56614). Jul 2 00:03:13.615812 sshd[5919]: Accepted publickey for core from 147.75.109.163 port 56614 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:03:13.619710 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:13.628338 systemd-logind[1993]: New session 25 of user core. Jul 2 00:03:13.633652 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:03:13.891793 sshd[5919]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:13.896592 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:03:13.897862 systemd[1]: sshd@24-172.31.25.138:22-147.75.109.163:56614.service: Deactivated successfully. Jul 2 00:03:13.902558 systemd-logind[1993]: Session 25 logged out. 
Waiting for processes to exit. Jul 2 00:03:13.905424 systemd-logind[1993]: Removed session 25. Jul 2 00:03:18.931900 systemd[1]: Started sshd@25-172.31.25.138:22-147.75.109.163:56626.service - OpenSSH per-connection server daemon (147.75.109.163:56626). Jul 2 00:03:19.116709 sshd[5939]: Accepted publickey for core from 147.75.109.163 port 56626 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:03:19.119168 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:19.127498 systemd-logind[1993]: New session 26 of user core. Jul 2 00:03:19.132626 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:03:19.377301 sshd[5939]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:19.381928 systemd[1]: sshd@25-172.31.25.138:22-147.75.109.163:56626.service: Deactivated successfully. Jul 2 00:03:19.386122 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:03:19.392380 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:03:19.394760 systemd-logind[1993]: Removed session 26. Jul 2 00:03:24.417971 systemd[1]: Started sshd@26-172.31.25.138:22-147.75.109.163:50652.service - OpenSSH per-connection server daemon (147.75.109.163:50652). Jul 2 00:03:24.597888 sshd[6000]: Accepted publickey for core from 147.75.109.163 port 50652 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:03:24.600792 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:24.609758 systemd-logind[1993]: New session 27 of user core. Jul 2 00:03:24.617648 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:03:24.882193 sshd[6000]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:24.888436 systemd[1]: sshd@26-172.31.25.138:22-147.75.109.163:50652.service: Deactivated successfully. Jul 2 00:03:24.892035 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 2 00:03:24.893612 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit. Jul 2 00:03:24.895900 systemd-logind[1993]: Removed session 27. Jul 2 00:03:29.923045 systemd[1]: Started sshd@27-172.31.25.138:22-147.75.109.163:50668.service - OpenSSH per-connection server daemon (147.75.109.163:50668). Jul 2 00:03:30.123596 sshd[6017]: Accepted publickey for core from 147.75.109.163 port 50668 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:03:30.127764 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:30.145242 systemd-logind[1993]: New session 28 of user core. Jul 2 00:03:30.152453 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 00:03:30.393435 sshd[6017]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:30.400270 systemd[1]: sshd@27-172.31.25.138:22-147.75.109.163:50668.service: Deactivated successfully. Jul 2 00:03:30.405374 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:03:30.410200 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:03:30.412672 systemd-logind[1993]: Removed session 28. Jul 2 00:03:35.432867 systemd[1]: Started sshd@28-172.31.25.138:22-147.75.109.163:52506.service - OpenSSH per-connection server daemon (147.75.109.163:52506). Jul 2 00:03:35.623944 sshd[6038]: Accepted publickey for core from 147.75.109.163 port 52506 ssh2: RSA SHA256:cTyknDXSkYvrYx97rKxYvjbc7fEXI6FEizCkWyPTIzM Jul 2 00:03:35.627072 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:35.635906 systemd-logind[1993]: New session 29 of user core. Jul 2 00:03:35.641630 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 00:03:35.891697 sshd[6038]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:35.896227 systemd[1]: sshd@28-172.31.25.138:22-147.75.109.163:52506.service: Deactivated successfully. 
Jul 2 00:03:35.900753 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 00:03:35.905221 systemd-logind[1993]: Session 29 logged out. Waiting for processes to exit. Jul 2 00:03:35.908188 systemd-logind[1993]: Removed session 29.