Aug 5 21:54:34.345893 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Aug 5 21:54:34.345941 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:37:57 -00 2024
Aug 5 21:54:34.345966 kernel: KASLR disabled due to lack of seed
Aug 5 21:54:34.345983 kernel: efi: EFI v2.7 by EDK II
Aug 5 21:54:34.345999 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18
Aug 5 21:54:34.346015 kernel: ACPI: Early table checksum verification disabled
Aug 5 21:54:34.346032 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Aug 5 21:54:34.346048 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Aug 5 21:54:34.346064 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 5 21:54:34.346080 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Aug 5 21:54:34.346100 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 5 21:54:34.346116 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Aug 5 21:54:34.346131 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Aug 5 21:54:34.346147 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Aug 5 21:54:34.348237 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 5 21:54:34.348315 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Aug 5 21:54:34.348335 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Aug 5 21:54:34.348353 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Aug 5 21:54:34.348370 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Aug 5 21:54:34.348387 kernel: printk: bootconsole [uart0] enabled
Aug 5 21:54:34.348404 kernel: NUMA: Failed to initialise from firmware
Aug 5 21:54:34.348422 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 5 21:54:34.348438 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Aug 5 21:54:34.348455 kernel: Zone ranges:
Aug 5 21:54:34.348472 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Aug 5 21:54:34.348488 kernel: DMA32 empty
Aug 5 21:54:34.348511 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Aug 5 21:54:34.348527 kernel: Movable zone start for each node
Aug 5 21:54:34.348545 kernel: Early memory node ranges
Aug 5 21:54:34.348561 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Aug 5 21:54:34.348578 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Aug 5 21:54:34.348595 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Aug 5 21:54:34.348612 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Aug 5 21:54:34.348628 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Aug 5 21:54:34.348647 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Aug 5 21:54:34.348664 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Aug 5 21:54:34.348681 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Aug 5 21:54:34.348697 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 5 21:54:34.348719 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Aug 5 21:54:34.348736 kernel: psci: probing for conduit method from ACPI.
Aug 5 21:54:34.348760 kernel: psci: PSCIv1.0 detected in firmware.
Aug 5 21:54:34.348777 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 5 21:54:34.348795 kernel: psci: Trusted OS migration not required
Aug 5 21:54:34.348818 kernel: psci: SMC Calling Convention v1.1
Aug 5 21:54:34.348835 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Aug 5 21:54:34.348853 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Aug 5 21:54:34.348871 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 5 21:54:34.348888 kernel: Detected PIPT I-cache on CPU0
Aug 5 21:54:34.348906 kernel: CPU features: detected: GIC system register CPU interface
Aug 5 21:54:34.348923 kernel: CPU features: detected: Spectre-v2
Aug 5 21:54:34.348940 kernel: CPU features: detected: Spectre-v3a
Aug 5 21:54:34.348958 kernel: CPU features: detected: Spectre-BHB
Aug 5 21:54:34.348975 kernel: CPU features: detected: ARM erratum 1742098
Aug 5 21:54:34.348993 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Aug 5 21:54:34.349015 kernel: alternatives: applying boot alternatives
Aug 5 21:54:34.349036 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4052403b8e39e55d48e6afcca927358798017aa0d33c868bc3038260a8d9be90
Aug 5 21:54:34.349055 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 21:54:34.349073 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 21:54:34.349090 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 21:54:34.349108 kernel: Fallback order for Node 0: 0
Aug 5 21:54:34.349125 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Aug 5 21:54:34.349143 kernel: Policy zone: Normal
Aug 5 21:54:34.351288 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 21:54:34.351334 kernel: software IO TLB: area num 2.
Aug 5 21:54:34.351353 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Aug 5 21:54:34.351387 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Aug 5 21:54:34.351406 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 5 21:54:34.351424 kernel: trace event string verifier disabled
Aug 5 21:54:34.351441 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 21:54:34.351459 kernel: rcu: RCU event tracing is enabled.
Aug 5 21:54:34.351478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 5 21:54:34.351496 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 21:54:34.351514 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 21:54:34.351532 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 21:54:34.351550 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 5 21:54:34.351568 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 5 21:54:34.351592 kernel: GICv3: 96 SPIs implemented
Aug 5 21:54:34.351610 kernel: GICv3: 0 Extended SPIs implemented
Aug 5 21:54:34.351627 kernel: Root IRQ handler: gic_handle_irq
Aug 5 21:54:34.351644 kernel: GICv3: GICv3 features: 16 PPIs
Aug 5 21:54:34.351661 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Aug 5 21:54:34.351679 kernel: ITS [mem 0x10080000-0x1009ffff]
Aug 5 21:54:34.351697 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 5 21:54:34.351714 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Aug 5 21:54:34.351732 kernel: GICv3: using LPI property table @0x00000004000e0000
Aug 5 21:54:34.351749 kernel: ITS: Using hypervisor restricted LPI range [128]
Aug 5 21:54:34.351766 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Aug 5 21:54:34.351784 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 21:54:34.351807 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Aug 5 21:54:34.351825 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Aug 5 21:54:34.351843 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Aug 5 21:54:34.351864 kernel: Console: colour dummy device 80x25
Aug 5 21:54:34.351886 kernel: printk: console [tty1] enabled
Aug 5 21:54:34.351903 kernel: ACPI: Core revision 20230628
Aug 5 21:54:34.351921 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Aug 5 21:54:34.351939 kernel: pid_max: default: 32768 minimum: 301
Aug 5 21:54:34.351957 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 21:54:34.351980 kernel: SELinux: Initializing.
Aug 5 21:54:34.351998 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:54:34.352015 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 21:54:34.352033 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:54:34.352051 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 21:54:34.352069 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 21:54:34.352087 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 21:54:34.352104 kernel: Platform MSI: ITS@0x10080000 domain created
Aug 5 21:54:34.352122 kernel: PCI/MSI: ITS@0x10080000 domain created
Aug 5 21:54:34.352144 kernel: Remapping and enabling EFI services.
Aug 5 21:54:34.352224 kernel: smp: Bringing up secondary CPUs ...
Aug 5 21:54:34.352245 kernel: Detected PIPT I-cache on CPU1
Aug 5 21:54:34.352264 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Aug 5 21:54:34.352283 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Aug 5 21:54:34.352301 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Aug 5 21:54:34.352319 kernel: smp: Brought up 1 node, 2 CPUs
Aug 5 21:54:34.352337 kernel: SMP: Total of 2 processors activated.
Aug 5 21:54:34.352355 kernel: CPU features: detected: 32-bit EL0 Support
Aug 5 21:54:34.352373 kernel: CPU features: detected: 32-bit EL1 Support
Aug 5 21:54:34.352398 kernel: CPU features: detected: CRC32 instructions
Aug 5 21:54:34.352416 kernel: CPU: All CPU(s) started at EL1
Aug 5 21:54:34.352446 kernel: alternatives: applying system-wide alternatives
Aug 5 21:54:34.352470 kernel: devtmpfs: initialized
Aug 5 21:54:34.352489 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 21:54:34.352507 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 5 21:54:34.352525 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 21:54:34.352543 kernel: SMBIOS 3.0.0 present.
Aug 5 21:54:34.352562 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Aug 5 21:54:34.352585 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 21:54:34.352603 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 5 21:54:34.352622 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 5 21:54:34.352641 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 5 21:54:34.352659 kernel: audit: initializing netlink subsys (disabled)
Aug 5 21:54:34.352678 kernel: audit: type=2000 audit(0.372:1): state=initialized audit_enabled=0 res=1
Aug 5 21:54:34.352696 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 21:54:34.352719 kernel: cpuidle: using governor menu
Aug 5 21:54:34.352738 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 5 21:54:34.352756 kernel: ASID allocator initialised with 65536 entries
Aug 5 21:54:34.352774 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 21:54:34.352793 kernel: Serial: AMBA PL011 UART driver
Aug 5 21:54:34.352811 kernel: Modules: 17600 pages in range for non-PLT usage
Aug 5 21:54:34.352829 kernel: Modules: 509120 pages in range for PLT usage
Aug 5 21:54:34.352847 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 21:54:34.352865 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 21:54:34.352888 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 5 21:54:34.352907 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 5 21:54:34.352925 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 21:54:34.352943 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 21:54:34.352961 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 5 21:54:34.352980 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 5 21:54:34.352999 kernel: ACPI: Added _OSI(Module Device)
Aug 5 21:54:34.353017 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 21:54:34.353035 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 21:54:34.353058 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 21:54:34.353077 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 21:54:34.353096 kernel: ACPI: Interpreter enabled
Aug 5 21:54:34.353114 kernel: ACPI: Using GIC for interrupt routing
Aug 5 21:54:34.353132 kernel: ACPI: MCFG table detected, 1 entries
Aug 5 21:54:34.353151 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Aug 5 21:54:34.353556 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 21:54:34.353771 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 5 21:54:34.353977 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 5 21:54:34.356264 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Aug 5 21:54:34.356522 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Aug 5 21:54:34.356548 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Aug 5 21:54:34.356568 kernel: acpiphp: Slot [1] registered
Aug 5 21:54:34.356587 kernel: acpiphp: Slot [2] registered
Aug 5 21:54:34.356606 kernel: acpiphp: Slot [3] registered
Aug 5 21:54:34.356625 kernel: acpiphp: Slot [4] registered
Aug 5 21:54:34.356653 kernel: acpiphp: Slot [5] registered
Aug 5 21:54:34.356672 kernel: acpiphp: Slot [6] registered
Aug 5 21:54:34.356690 kernel: acpiphp: Slot [7] registered
Aug 5 21:54:34.356708 kernel: acpiphp: Slot [8] registered
Aug 5 21:54:34.356726 kernel: acpiphp: Slot [9] registered
Aug 5 21:54:34.356744 kernel: acpiphp: Slot [10] registered
Aug 5 21:54:34.356762 kernel: acpiphp: Slot [11] registered
Aug 5 21:54:34.356780 kernel: acpiphp: Slot [12] registered
Aug 5 21:54:34.356799 kernel: acpiphp: Slot [13] registered
Aug 5 21:54:34.356817 kernel: acpiphp: Slot [14] registered
Aug 5 21:54:34.356840 kernel: acpiphp: Slot [15] registered
Aug 5 21:54:34.356858 kernel: acpiphp: Slot [16] registered
Aug 5 21:54:34.356877 kernel: acpiphp: Slot [17] registered
Aug 5 21:54:34.356894 kernel: acpiphp: Slot [18] registered
Aug 5 21:54:34.356913 kernel: acpiphp: Slot [19] registered
Aug 5 21:54:34.356931 kernel: acpiphp: Slot [20] registered
Aug 5 21:54:34.356948 kernel: acpiphp: Slot [21] registered
Aug 5 21:54:34.356967 kernel: acpiphp: Slot [22] registered
Aug 5 21:54:34.356985 kernel: acpiphp: Slot [23] registered
Aug 5 21:54:34.357008 kernel: acpiphp: Slot [24] registered
Aug 5 21:54:34.357026 kernel: acpiphp: Slot [25] registered
Aug 5 21:54:34.357044 kernel: acpiphp: Slot [26] registered
Aug 5 21:54:34.357063 kernel: acpiphp: Slot [27] registered
Aug 5 21:54:34.357081 kernel: acpiphp: Slot [28] registered
Aug 5 21:54:34.357099 kernel: acpiphp: Slot [29] registered
Aug 5 21:54:34.357117 kernel: acpiphp: Slot [30] registered
Aug 5 21:54:34.357135 kernel: acpiphp: Slot [31] registered
Aug 5 21:54:34.357153 kernel: PCI host bridge to bus 0000:00
Aug 5 21:54:34.357402 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Aug 5 21:54:34.357596 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 5 21:54:34.357781 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Aug 5 21:54:34.357964 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Aug 5 21:54:34.360265 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Aug 5 21:54:34.360535 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Aug 5 21:54:34.360746 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Aug 5 21:54:34.360979 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 5 21:54:34.361223 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Aug 5 21:54:34.361440 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 5 21:54:34.361701 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 5 21:54:34.361916 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Aug 5 21:54:34.362123 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Aug 5 21:54:34.364490 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Aug 5 21:54:34.364721 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 5 21:54:34.364925 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Aug 5 21:54:34.365129 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Aug 5 21:54:34.365374 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Aug 5 21:54:34.365588 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Aug 5 21:54:34.365802 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Aug 5 21:54:34.365996 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Aug 5 21:54:34.368276 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 5 21:54:34.368484 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Aug 5 21:54:34.368512 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 5 21:54:34.368532 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 5 21:54:34.368551 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 5 21:54:34.368570 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 5 21:54:34.368588 kernel: iommu: Default domain type: Translated
Aug 5 21:54:34.368606 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 5 21:54:34.368637 kernel: efivars: Registered efivars operations
Aug 5 21:54:34.368656 kernel: vgaarb: loaded
Aug 5 21:54:34.368675 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 5 21:54:34.368694 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 21:54:34.368712 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 21:54:34.368731 kernel: pnp: PnP ACPI init
Aug 5 21:54:34.368969 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Aug 5 21:54:34.368998 kernel: pnp: PnP ACPI: found 1 devices
Aug 5 21:54:34.369023 kernel: NET: Registered PF_INET protocol family
Aug 5 21:54:34.369043 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 21:54:34.369061 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 21:54:34.369080 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 21:54:34.369099 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 21:54:34.369118 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 21:54:34.369137 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 21:54:34.369201 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:54:34.369228 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 21:54:34.369255 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 21:54:34.369274 kernel: PCI: CLS 0 bytes, default 64
Aug 5 21:54:34.369292 kernel: kvm [1]: HYP mode not available
Aug 5 21:54:34.369311 kernel: Initialise system trusted keyrings
Aug 5 21:54:34.369330 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 21:54:34.369348 kernel: Key type asymmetric registered
Aug 5 21:54:34.369367 kernel: Asymmetric key parser 'x509' registered
Aug 5 21:54:34.369385 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 5 21:54:34.369404 kernel: io scheduler mq-deadline registered
Aug 5 21:54:34.369429 kernel: io scheduler kyber registered
Aug 5 21:54:34.369448 kernel: io scheduler bfq registered
Aug 5 21:54:34.369692 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Aug 5 21:54:34.369723 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 5 21:54:34.369743 kernel: ACPI: button: Power Button [PWRB]
Aug 5 21:54:34.369762 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Aug 5 21:54:34.369781 kernel: ACPI: button: Sleep Button [SLPB]
Aug 5 21:54:34.369800 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 21:54:34.369826 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Aug 5 21:54:34.370035 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Aug 5 21:54:34.370063 kernel: printk: console [ttyS0] disabled
Aug 5 21:54:34.370083 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Aug 5 21:54:34.370102 kernel: printk: console [ttyS0] enabled
Aug 5 21:54:34.370121 kernel: printk: bootconsole [uart0] disabled
Aug 5 21:54:34.370140 kernel: thunder_xcv, ver 1.0
Aug 5 21:54:34.373290 kernel: thunder_bgx, ver 1.0
Aug 5 21:54:34.373343 kernel: nicpf, ver 1.0
Aug 5 21:54:34.373374 kernel: nicvf, ver 1.0
Aug 5 21:54:34.373672 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 5 21:54:34.373875 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T21:54:33 UTC (1722894873)
Aug 5 21:54:34.373903 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 5 21:54:34.373925 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Aug 5 21:54:34.373945 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 5 21:54:34.373964 kernel: watchdog: Hard watchdog permanently disabled
Aug 5 21:54:34.373983 kernel: NET: Registered PF_INET6 protocol family
Aug 5 21:54:34.374013 kernel: Segment Routing with IPv6
Aug 5 21:54:34.374033 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 21:54:34.374053 kernel: NET: Registered PF_PACKET protocol family
Aug 5 21:54:34.374072 kernel: Key type dns_resolver registered
Aug 5 21:54:34.374091 kernel: registered taskstats version 1
Aug 5 21:54:34.374110 kernel: Loading compiled-in X.509 certificates
Aug 5 21:54:34.374130 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 99cab5c9e2f0f3a5ca972c2df7b3d6ed64d627d4'
Aug 5 21:54:34.374150 kernel: Key type .fscrypt registered
Aug 5 21:54:34.374202 kernel: Key type fscrypt-provisioning registered
Aug 5 21:54:34.374222 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 21:54:34.374249 kernel: ima: Allocated hash algorithm: sha1
Aug 5 21:54:34.374268 kernel: ima: No architecture policies found
Aug 5 21:54:34.374287 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 5 21:54:34.374305 kernel: clk: Disabling unused clocks
Aug 5 21:54:34.374323 kernel: Freeing unused kernel memory: 39040K
Aug 5 21:54:34.374342 kernel: Run /init as init process
Aug 5 21:54:34.374360 kernel: with arguments:
Aug 5 21:54:34.374380 kernel: /init
Aug 5 21:54:34.374398 kernel: with environment:
Aug 5 21:54:34.374422 kernel: HOME=/
Aug 5 21:54:34.374441 kernel: TERM=linux
Aug 5 21:54:34.374459 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 21:54:34.374484 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:54:34.374508 systemd[1]: Detected virtualization amazon.
Aug 5 21:54:34.374529 systemd[1]: Detected architecture arm64.
Aug 5 21:54:34.374549 systemd[1]: Running in initrd.
Aug 5 21:54:34.374574 systemd[1]: No hostname configured, using default hostname.
Aug 5 21:54:34.374594 systemd[1]: Hostname set to .
Aug 5 21:54:34.374614 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:54:34.374634 systemd[1]: Queued start job for default target initrd.target.
Aug 5 21:54:34.374654 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:54:34.374674 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:54:34.374696 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 21:54:34.374717 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:54:34.374743 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 21:54:34.374764 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 21:54:34.374787 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 21:54:34.374808 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 21:54:34.374829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:54:34.374849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:54:34.374870 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:54:34.374895 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:54:34.374916 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:54:34.374936 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:54:34.374956 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:54:34.374976 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:54:34.374997 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 21:54:34.375017 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 21:54:34.375037 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:54:34.375057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:54:34.375082 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:54:34.375102 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:54:34.375122 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 21:54:34.375143 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:54:34.377247 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 21:54:34.377286 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 21:54:34.377308 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:54:34.377330 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:54:34.377364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:54:34.377386 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 21:54:34.377407 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:54:34.377428 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 21:54:34.377515 systemd-journald[251]: Collecting audit messages is disabled.
Aug 5 21:54:34.377567 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:54:34.377589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:54:34.377610 systemd-journald[251]: Journal started
Aug 5 21:54:34.377654 systemd-journald[251]: Runtime Journal (/run/log/journal/ec23c63ab44c1eff6f50e6c18d88ce0e) is 8.0M, max 75.3M, 67.3M free.
Aug 5 21:54:34.384325 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 21:54:34.343316 systemd-modules-load[252]: Inserted module 'overlay'
Aug 5 21:54:34.389923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 21:54:34.389970 kernel: Bridge firewalling registered
Aug 5 21:54:34.386455 systemd-modules-load[252]: Inserted module 'br_netfilter'
Aug 5 21:54:34.396523 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:54:34.400204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:54:34.407833 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:54:34.417450 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:54:34.420241 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:54:34.427676 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:54:34.488385 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:54:34.498305 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:54:34.515630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:54:34.520030 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:54:34.541946 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 21:54:34.553633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:54:34.601349 dracut-cmdline[287]: dracut-dracut-053
Aug 5 21:54:34.614818 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4052403b8e39e55d48e6afcca927358798017aa0d33c868bc3038260a8d9be90
Aug 5 21:54:34.675684 systemd-resolved[288]: Positive Trust Anchors:
Aug 5 21:54:34.676408 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:54:34.676479 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:54:34.873278 kernel: SCSI subsystem initialized
Aug 5 21:54:34.883237 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 21:54:34.898251 kernel: iscsi: registered transport (tcp)
Aug 5 21:54:34.926267 kernel: iscsi: registered transport (qla4xxx)
Aug 5 21:54:34.926390 kernel: QLogic iSCSI HBA Driver
Aug 5 21:54:34.945211 kernel: random: crng init done
Aug 5 21:54:34.945653 systemd-resolved[288]: Defaulting to hostname 'linux'.
Aug 5 21:54:34.948873 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:54:34.953024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:54:35.051819 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:54:35.062483 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 21:54:35.107609 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 21:54:35.107766 kernel: device-mapper: uevent: version 1.0.3 Aug 5 21:54:35.109590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 21:54:35.189398 kernel: raid6: neonx8 gen() 6565 MB/s Aug 5 21:54:35.206263 kernel: raid6: neonx4 gen() 6407 MB/s Aug 5 21:54:35.223236 kernel: raid6: neonx2 gen() 5324 MB/s Aug 5 21:54:35.240299 kernel: raid6: neonx1 gen() 3859 MB/s Aug 5 21:54:35.257224 kernel: raid6: int64x8 gen() 3758 MB/s Aug 5 21:54:35.274285 kernel: raid6: int64x4 gen() 3672 MB/s Aug 5 21:54:35.291238 kernel: raid6: int64x2 gen() 3565 MB/s Aug 5 21:54:35.309057 kernel: raid6: int64x1 gen() 2736 MB/s Aug 5 21:54:35.309219 kernel: raid6: using algorithm neonx8 gen() 6565 MB/s Aug 5 21:54:35.327149 kernel: raid6: .... xor() 4824 MB/s, rmw enabled Aug 5 21:54:35.327344 kernel: raid6: using neon recovery algorithm Aug 5 21:54:35.338274 kernel: xor: measuring software checksum speed Aug 5 21:54:35.338474 kernel: 8regs : 11092 MB/sec Aug 5 21:54:35.341060 kernel: 32regs : 11997 MB/sec Aug 5 21:54:35.344181 kernel: arm64_neon : 9264 MB/sec Aug 5 21:54:35.344278 kernel: xor: using function: 32regs (11997 MB/sec) Aug 5 21:54:35.442242 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 21:54:35.466748 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 21:54:35.477662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:54:35.528196 systemd-udevd[470]: Using default interface naming scheme 'v255'. Aug 5 21:54:35.538752 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:54:35.552662 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 21:54:35.610841 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Aug 5 21:54:35.678244 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
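The raid6 and xor benchmarks above show the kernel measuring each available implementation and keeping the fastest one ("using algorithm neonx8", "using function: 32regs"). A toy sketch of that selection logic (not kernel code), using the throughputs from this log:

```python
# Measured xor checksum throughputs from the boot log, in MB/s.
# The kernel benchmarks each candidate and simply keeps the fastest.
xor_speeds = {
    "8regs": 11092,
    "32regs": 11997,
    "arm64_neon": 9264,
}

best = max(xor_speeds, key=xor_speeds.get)
print(f"xor: using function: {best} ({xor_speeds[best]} MB/sec)")
```

Note that raw NEON is slower than plain 32-register xor here, which is why the generic implementation wins despite the raid6 gen() path preferring neonx8.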
Aug 5 21:54:35.688684 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 21:54:35.840727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:54:35.853465 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 21:54:35.921793 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 21:54:35.931149 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:54:35.935814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:54:35.938314 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 21:54:35.964108 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 21:54:36.022809 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:54:36.116454 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 5 21:54:36.116631 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Aug 5 21:54:36.152448 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 5 21:54:36.157724 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 5 21:54:36.162484 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:5a:24:2b:18:05 Aug 5 21:54:36.138987 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:54:36.139326 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:54:36.144989 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:54:36.151751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 21:54:36.152550 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:54:36.157721 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 5 21:54:36.178355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:54:36.185453 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:54:36.218822 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Aug 5 21:54:36.218866 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 5 21:54:36.234213 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 5 21:54:36.238516 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:54:36.245952 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 5 21:54:36.246059 kernel: GPT:9289727 != 16777215 Aug 5 21:54:36.247711 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 5 21:54:36.249057 kernel: GPT:9289727 != 16777215 Aug 5 21:54:36.251864 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 5 21:54:36.251978 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:54:36.257635 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:54:36.304462 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:54:36.392206 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (542) Aug 5 21:54:36.406332 kernel: BTRFS: device fsid 278882ec-4175-45f0-a12b-7fddc0d6d9a3 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (519) Aug 5 21:54:36.421741 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Aug 5 21:54:36.602142 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Aug 5 21:54:36.622440 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 5 21:54:36.625106 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Aug 5 21:54:36.642556 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 5 21:54:36.659565 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 21:54:36.681462 disk-uuid[661]: Primary Header is updated. Aug 5 21:54:36.681462 disk-uuid[661]: Secondary Entries is updated. Aug 5 21:54:36.681462 disk-uuid[661]: Secondary Header is updated. Aug 5 21:54:36.693052 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:54:36.702521 kernel: GPT:disk_guids don't match. Aug 5 21:54:36.702677 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 5 21:54:36.702720 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:54:36.717212 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:54:37.715439 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:54:37.717273 disk-uuid[662]: The operation has completed successfully. Aug 5 21:54:37.947648 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 21:54:37.949766 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 21:54:38.007695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 21:54:38.018346 sh[1005]: Success Aug 5 21:54:38.056217 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 5 21:54:38.189254 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 21:54:38.209633 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 21:54:38.222351 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
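The "GPT:9289727 != 16777215" complaints above arise because the disk image was built for a smaller disk than the EBS volume it was written to: GPT keeps a backup header in the disk's last addressable sector, and after the volume grows that sector moves. disk-uuid.service then rewrites the secondary header and entries, which is why the warnings stop after "The operation has completed successfully." A minimal sketch of where the backup header belongs, assuming 512-byte sectors:

```python
SECTOR_SIZE = 512  # assumption: 512-byte logical sectors, as on this EBS volume

def expected_alternate_lba(disk_bytes: int) -> int:
    """GPT places its backup (alternate) header in the last addressable LBA."""
    return disk_bytes // SECTOR_SIZE - 1

# An 8 GiB volume has 16777216 sectors, so the backup header belongs at
# LBA 16777215 -- while the image's GPT still recorded LBA 9289727,
# the last sector of the smaller disk the image was generated for.
assert expected_alternate_lba(8 * 1024**3) == 16777215
```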
Aug 5 21:54:38.256586 kernel: BTRFS info (device dm-0): first mount of filesystem 278882ec-4175-45f0-a12b-7fddc0d6d9a3 Aug 5 21:54:38.256666 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:54:38.258334 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 21:54:38.259614 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 21:54:38.260657 kernel: BTRFS info (device dm-0): using free space tree Aug 5 21:54:38.343243 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 5 21:54:38.368549 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 21:54:38.372817 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 21:54:38.382767 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 21:54:38.400194 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 21:54:38.433678 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 21:54:38.433782 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:54:38.433810 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 21:54:38.440370 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 21:54:38.466814 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 21:54:38.469244 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 21:54:38.496231 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 21:54:38.508707 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 21:54:38.652297 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 5 21:54:38.678743 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 21:54:38.758241 systemd-networkd[1198]: lo: Link UP Aug 5 21:54:38.758266 systemd-networkd[1198]: lo: Gained carrier Aug 5 21:54:38.765808 systemd-networkd[1198]: Enumeration completed Aug 5 21:54:38.768360 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:54:38.768373 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 21:54:38.770743 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 21:54:38.778901 systemd[1]: Reached target network.target - Network. Aug 5 21:54:38.783835 systemd-networkd[1198]: eth0: Link UP Aug 5 21:54:38.783846 systemd-networkd[1198]: eth0: Gained carrier Aug 5 21:54:38.783868 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:54:38.802410 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.26.69/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 5 21:54:38.984511 ignition[1118]: Ignition 2.18.0 Aug 5 21:54:38.985318 ignition[1118]: Stage: fetch-offline Aug 5 21:54:38.986447 ignition[1118]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:54:38.986478 ignition[1118]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:54:38.987136 ignition[1118]: Ignition finished successfully Aug 5 21:54:38.997102 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:54:39.008609 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
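The "found matching network ... based on potentially unpredictable interface name" lines above come from eth0 matching a catch-all fallback unit. A sketch of what a unit like `zz-default.network` looks like (not the verbatim Flatcar file): it matches any interface name and enables DHCP, and because the match key is the kernel-assigned name (here forced stable by `net.ifnames=0` on the command line), networkd warns that the name could in principle change across boots.

```ini
# Sketch of a catch-all fallback .network unit (hypothetical contents).
# The "zz-" prefix sorts it last, so more specific units win first.
[Match]
Name=*

[Network]
DHCP=yes
```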
Aug 5 21:54:39.052863 ignition[1209]: Ignition 2.18.0 Aug 5 21:54:39.052897 ignition[1209]: Stage: fetch Aug 5 21:54:39.054832 ignition[1209]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:54:39.054863 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:54:39.055109 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:54:39.066225 ignition[1209]: PUT result: OK Aug 5 21:54:39.069987 ignition[1209]: parsed url from cmdline: "" Aug 5 21:54:39.070026 ignition[1209]: no config URL provided Aug 5 21:54:39.070071 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 21:54:39.070117 ignition[1209]: no config at "/usr/lib/ignition/user.ign" Aug 5 21:54:39.070277 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:54:39.072417 ignition[1209]: PUT result: OK Aug 5 21:54:39.072578 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 5 21:54:39.077857 ignition[1209]: GET result: OK Aug 5 21:54:39.078086 ignition[1209]: parsing config with SHA512: 5a63ab5167bb9d8d47229638e6ca3b4f0ef9ff42716a6f229e7d9384aba40ea9833fb0cc32d49d7a8b5faeee67b5b4efaa1cd847b727c2d7ae285298a8810a48 Aug 5 21:54:39.092409 unknown[1209]: fetched base config from "system" Aug 5 21:54:39.093262 unknown[1209]: fetched base config from "system" Aug 5 21:54:39.093294 unknown[1209]: fetched user config from "aws" Aug 5 21:54:39.094913 ignition[1209]: fetch: fetch complete Aug 5 21:54:39.094925 ignition[1209]: fetch: fetch passed Aug 5 21:54:39.095038 ignition[1209]: Ignition finished successfully Aug 5 21:54:39.104152 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 21:54:39.126067 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
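The fetch stage above follows the IMDSv2 flow: a PUT to `http://169.254.169.254/latest/api/token` to obtain a session token, then a GET of the user-data with that token, after which Ignition logs a SHA-512 digest of the retrieved config ("parsing config with SHA512: 5a63ab51..."). The digest step is plain SHA-512 over the config bytes; a minimal sketch:

```python
import hashlib

def config_digest(config: bytes) -> str:
    """SHA-512 hex digest of a fetched config, in the same form Ignition
    logs after 'parsing config with SHA512: ...'."""
    return hashlib.sha512(config).hexdigest()

# The payload here is illustrative; the instance's real user-data is not
# shown in the log. Every digest is a fixed-width 128-hex-character string.
digest = config_digest(b'{"ignition": {"version": "3.4.0"}}')
assert len(digest) == 128
```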
Aug 5 21:54:39.155817 ignition[1216]: Ignition 2.18.0 Aug 5 21:54:39.155861 ignition[1216]: Stage: kargs Aug 5 21:54:39.156725 ignition[1216]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:54:39.156757 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:54:39.156928 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:54:39.159846 ignition[1216]: PUT result: OK Aug 5 21:54:39.171598 ignition[1216]: kargs: kargs passed Aug 5 21:54:39.171904 ignition[1216]: Ignition finished successfully Aug 5 21:54:39.178693 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 21:54:39.196537 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 21:54:39.233669 ignition[1223]: Ignition 2.18.0 Aug 5 21:54:39.233702 ignition[1223]: Stage: disks Aug 5 21:54:39.237126 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:54:39.237263 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:54:39.241117 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:54:39.244728 ignition[1223]: PUT result: OK Aug 5 21:54:39.250320 ignition[1223]: disks: disks passed Aug 5 21:54:39.250497 ignition[1223]: Ignition finished successfully Aug 5 21:54:39.257291 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 21:54:39.260916 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 21:54:39.263354 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 21:54:39.265699 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 21:54:39.267886 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 21:54:39.272313 systemd[1]: Reached target basic.target - Basic System. Aug 5 21:54:39.297024 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Aug 5 21:54:39.357132 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 5 21:54:39.366339 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 21:54:39.381485 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 21:54:39.485254 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 44c9fced-dca5-4347-a15f-96911c2e5e61 r/w with ordered data mode. Quota mode: none. Aug 5 21:54:39.488045 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 21:54:39.490139 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 21:54:39.514678 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 21:54:39.529226 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 21:54:39.532100 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 5 21:54:39.536828 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 21:54:39.537649 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:54:39.564218 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1251) Aug 5 21:54:39.565067 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 21:54:39.570876 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 21:54:39.570929 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:54:39.570959 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 21:54:39.580913 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 21:54:39.581568 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 21:54:39.594900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 5 21:54:40.071584 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 21:54:40.092265 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory Aug 5 21:54:40.101914 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 21:54:40.111282 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 21:54:40.499379 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 21:54:40.510489 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 21:54:40.530507 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 21:54:40.544213 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 21:54:40.544962 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 21:54:40.609737 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 21:54:40.616567 ignition[1364]: INFO : Ignition 2.18.0 Aug 5 21:54:40.616567 ignition[1364]: INFO : Stage: mount Aug 5 21:54:40.623433 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:54:40.623433 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:54:40.623433 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:54:40.623433 ignition[1364]: INFO : PUT result: OK Aug 5 21:54:40.633689 ignition[1364]: INFO : mount: mount passed Aug 5 21:54:40.633689 ignition[1364]: INFO : Ignition finished successfully Aug 5 21:54:40.639721 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 21:54:40.653483 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 21:54:40.696773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 5 21:54:40.711415 systemd-networkd[1198]: eth0: Gained IPv6LL Aug 5 21:54:40.723511 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1376) Aug 5 21:54:40.728967 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 21:54:40.729117 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:54:40.729151 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 21:54:40.735256 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 21:54:40.741509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 21:54:40.797155 ignition[1393]: INFO : Ignition 2.18.0 Aug 5 21:54:40.797155 ignition[1393]: INFO : Stage: files Aug 5 21:54:40.800979 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:54:40.800979 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:54:40.800979 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:54:40.809898 ignition[1393]: INFO : PUT result: OK Aug 5 21:54:40.813141 ignition[1393]: DEBUG : files: compiled without relabeling support, skipping Aug 5 21:54:40.815262 ignition[1393]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 21:54:40.815262 ignition[1393]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 21:54:40.855800 ignition[1393]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 21:54:40.858426 ignition[1393]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 21:54:40.861328 unknown[1393]: wrote ssh authorized keys file for user: core Aug 5 21:54:40.863591 ignition[1393]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 21:54:40.874706 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] 
writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:54:40.874706 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 5 21:54:40.947378 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 21:54:41.086325 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:54:41.086325 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:54:41.094575 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Aug 5 21:54:41.562037 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 21:54:42.011871 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:54:42.016799 ignition[1393]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 21:54:42.026223 ignition[1393]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:54:42.029780 ignition[1393]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:54:42.029780 ignition[1393]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 21:54:42.029780 ignition[1393]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 21:54:42.038303 ignition[1393]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 21:54:42.038303 ignition[1393]: 
INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:54:42.038303 ignition[1393]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:54:42.038303 ignition[1393]: INFO : files: files passed Aug 5 21:54:42.038303 ignition[1393]: INFO : Ignition finished successfully Aug 5 21:54:42.054095 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 21:54:42.065889 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 21:54:42.072458 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 21:54:42.104482 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 21:54:42.106658 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 21:54:42.127393 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:54:42.127393 initrd-setup-root-after-ignition[1422]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:54:42.137902 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:54:42.145276 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:54:42.147896 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 21:54:42.172747 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 21:54:42.259727 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 21:54:42.261810 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 21:54:42.265973 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
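The files-stage operations logged above (fetching the Helm tarball, writing the home-directory manifests, installing the kubernetes sysext image plus its `/etc/extensions` symlink, and enabling `prepare-helm.service`) correspond to a user-provided Ignition config. A hypothetical Butane-style sketch reconstructed from the logged ops, abridged to the fetched artifacts; the instance's actual user-data is not shown in the log:

```yaml
# Hypothetical reconstruction -- illustrative only, not the real user-data.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-arm64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw
      contents:
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
```

The `/etc/extensions/kubernetes.raw` link is what makes systemd-sysext activate the Kubernetes system extension on the booted host.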
Aug 5 21:54:42.268623 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 21:54:42.271999 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 21:54:42.288570 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 21:54:42.337395 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:54:42.349588 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 21:54:42.387431 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:54:42.390203 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:54:42.395084 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 21:54:42.398229 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 21:54:42.398594 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:54:42.406897 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 21:54:42.412882 systemd[1]: Stopped target basic.target - Basic System. Aug 5 21:54:42.416625 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 21:54:42.420566 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:54:42.425018 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 21:54:42.430813 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 21:54:42.434475 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:54:42.442786 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 21:54:42.446937 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 21:54:42.450902 systemd[1]: Stopped target swap.target - Swaps. 
Aug 5 21:54:42.458496 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 21:54:42.460232 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:54:42.467419 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:54:42.470090 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:54:42.474666 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 21:54:42.477893 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:54:42.481882 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 21:54:42.483094 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 21:54:42.494567 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 21:54:42.495261 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:54:42.503352 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 21:54:42.503678 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 21:54:42.518809 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 21:54:42.528453 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 21:54:42.537414 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 21:54:42.537921 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:54:42.546666 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 21:54:42.547023 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 21:54:42.583541 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 21:54:42.590558 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Aug 5 21:54:42.608064 ignition[1446]: INFO : Ignition 2.18.0 Aug 5 21:54:42.608064 ignition[1446]: INFO : Stage: umount Aug 5 21:54:42.616693 ignition[1446]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:54:42.616693 ignition[1446]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:54:42.616693 ignition[1446]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:54:42.626766 ignition[1446]: INFO : PUT result: OK Aug 5 21:54:42.630733 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 21:54:42.638993 ignition[1446]: INFO : umount: umount passed Aug 5 21:54:42.642854 ignition[1446]: INFO : Ignition finished successfully Aug 5 21:54:42.646393 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 21:54:42.648397 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 21:54:42.655068 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 21:54:42.655416 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 21:54:42.662199 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 21:54:42.662408 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 21:54:42.670625 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 21:54:42.670782 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 21:54:42.672924 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 21:54:42.673024 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 21:54:42.675051 systemd[1]: Stopped target network.target - Network. Aug 5 21:54:42.676858 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 21:54:42.677115 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:54:42.679726 systemd[1]: Stopped target paths.target - Path Units. Aug 5 21:54:42.681531 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Aug 5 21:54:42.683582 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:54:42.686733 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 21:54:42.694525 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 21:54:42.696447 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 21:54:42.696544 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 21:54:42.698560 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 21:54:42.698640 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 21:54:42.700731 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 21:54:42.700829 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 21:54:42.703552 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 21:54:42.703694 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 21:54:42.708425 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 21:54:42.708537 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 21:54:42.712375 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 21:54:42.714510 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 21:54:42.722823 systemd-networkd[1198]: eth0: DHCPv6 lease lost
Aug 5 21:54:42.728677 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 21:54:42.732328 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 21:54:42.758693 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 21:54:42.758926 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 21:54:42.770775 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 21:54:42.772058 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:54:42.790603 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 21:54:42.793058 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 21:54:42.793254 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 21:54:42.797978 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 21:54:42.798127 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:54:42.802796 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 21:54:42.802922 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:54:42.805151 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 21:54:42.805254 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:54:42.807628 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:54:42.844107 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 21:54:42.847270 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 21:54:42.851850 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 21:54:42.853335 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:54:42.861219 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 21:54:42.861419 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:54:42.865503 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 21:54:42.865625 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:54:42.867731 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 21:54:42.867862 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 21:54:42.881800 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 21:54:42.881966 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 21:54:42.888303 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 21:54:42.888558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 21:54:42.913490 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 21:54:42.916044 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 21:54:42.916323 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:54:42.919826 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 21:54:42.920006 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:54:42.924711 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 21:54:42.924952 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:54:42.927629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 21:54:42.927823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:54:42.973932 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 21:54:42.976893 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 21:54:42.984942 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 21:54:42.994582 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 21:54:43.027901 systemd[1]: Switching root.
Aug 5 21:54:43.090475 systemd-journald[251]: Journal stopped
Aug 5 21:54:46.249473 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Aug 5 21:54:46.249680 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 21:54:46.249735 kernel: SELinux: policy capability open_perms=1
Aug 5 21:54:46.249769 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 21:54:46.249802 kernel: SELinux: policy capability always_check_network=0
Aug 5 21:54:46.249844 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 21:54:46.249878 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 21:54:46.249911 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 21:54:46.249944 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 21:54:46.249983 kernel: audit: type=1403 audit(1722894884.237:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 21:54:46.250019 systemd[1]: Successfully loaded SELinux policy in 85.241ms.
Aug 5 21:54:46.250073 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.164ms.
Aug 5 21:54:46.250111 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 21:54:46.250146 systemd[1]: Detected virtualization amazon.
Aug 5 21:54:46.253682 systemd[1]: Detected architecture arm64.
Aug 5 21:54:46.253766 systemd[1]: Detected first boot.
Aug 5 21:54:46.253831 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 21:54:46.253885 zram_generator::config[1488]: No configuration found.
Aug 5 21:54:46.253933 systemd[1]: Populated /etc with preset unit settings.
Aug 5 21:54:46.253966 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 21:54:46.254018 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 21:54:46.254063 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 21:54:46.254104 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 21:54:46.254143 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 21:54:46.254251 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 21:54:46.254313 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 21:54:46.254360 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 21:54:46.254400 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 21:54:46.254432 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 21:54:46.254549 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 21:54:46.257447 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 21:54:46.257613 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 21:54:46.257651 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 21:54:46.257685 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 21:54:46.257744 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 21:54:46.257780 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 21:54:46.257812 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 5 21:54:46.257847 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 21:54:46.257884 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 21:54:46.257920 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 21:54:46.257951 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 21:54:46.257985 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 21:54:46.258031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 21:54:46.258073 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 21:54:46.258107 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 21:54:46.258146 systemd[1]: Reached target swap.target - Swaps.
Aug 5 21:54:46.258244 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 21:54:46.258289 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 21:54:46.258343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 21:54:46.258377 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 21:54:46.258417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 21:54:46.258463 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 21:54:46.258497 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 21:54:46.258530 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 21:54:46.258566 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 21:54:46.258602 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 21:54:46.258637 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 21:54:46.258670 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 21:54:46.258705 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 21:54:46.258739 systemd[1]: Reached target machines.target - Containers.
Aug 5 21:54:46.258785 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 21:54:46.258827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:54:46.258863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 21:54:46.258899 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 21:54:46.258939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:54:46.258975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 21:54:46.259010 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:54:46.259047 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 21:54:46.259082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:54:46.259113 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 21:54:46.263078 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 21:54:46.263652 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 21:54:46.263704 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 21:54:46.263738 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 21:54:46.263770 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 21:54:46.263803 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 21:54:46.263844 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 21:54:46.263899 kernel: fuse: init (API version 7.39)
Aug 5 21:54:46.263937 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 21:54:46.263971 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 21:54:46.264006 kernel: loop: module loaded
Aug 5 21:54:46.264045 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 21:54:46.264078 systemd[1]: Stopped verity-setup.service.
Aug 5 21:54:46.264116 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 21:54:46.264234 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 21:54:46.264289 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 21:54:46.264342 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 21:54:46.264384 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 21:54:46.264417 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 21:54:46.264448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 21:54:46.264478 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 21:54:46.264517 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 21:54:46.264548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:54:46.264578 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:54:46.264654 systemd-journald[1565]: Collecting audit messages is disabled.
Aug 5 21:54:46.264710 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:54:46.264742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:54:46.264773 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 21:54:46.264813 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 21:54:46.264844 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:54:46.264875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:54:46.264912 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 21:54:46.264948 systemd-journald[1565]: Journal started
Aug 5 21:54:46.265004 systemd-journald[1565]: Runtime Journal (/run/log/journal/ec23c63ab44c1eff6f50e6c18d88ce0e) is 8.0M, max 75.3M, 67.3M free.
Aug 5 21:54:45.547376 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 21:54:45.635247 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Aug 5 21:54:45.636493 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 21:54:46.272288 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 21:54:46.279304 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 21:54:46.344289 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 21:54:46.350318 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 21:54:46.355342 kernel: ACPI: bus type drm_connector registered
Aug 5 21:54:46.364590 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 21:54:46.385447 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 21:54:46.387937 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 21:54:46.388039 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 21:54:46.394577 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 21:54:46.409603 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 21:54:46.419430 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 21:54:46.422589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:54:46.449822 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 21:54:46.458970 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 21:54:46.461459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:54:46.465314 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 21:54:46.467504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:54:46.475613 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 21:54:46.485550 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 21:54:46.497531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 21:54:46.508332 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 21:54:46.511837 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 21:54:46.512249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 21:54:46.514722 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 21:54:46.517274 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 21:54:46.521382 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 21:54:46.601710 systemd-journald[1565]: Time spent on flushing to /var/log/journal/ec23c63ab44c1eff6f50e6c18d88ce0e is 194.454ms for 910 entries.
Aug 5 21:54:46.601710 systemd-journald[1565]: System Journal (/var/log/journal/ec23c63ab44c1eff6f50e6c18d88ce0e) is 8.0M, max 195.6M, 187.6M free.
Aug 5 21:54:46.818853 systemd-journald[1565]: Received client request to flush runtime journal.
Aug 5 21:54:46.819074 kernel: loop0: detected capacity change from 0 to 193208
Aug 5 21:54:46.821704 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 21:54:46.834907 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 21:54:46.625469 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 21:54:46.628120 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 21:54:46.640721 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 21:54:46.683300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 21:54:46.697863 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 21:54:46.751004 systemd-tmpfiles[1615]: ACLs are not supported, ignoring.
Aug 5 21:54:46.751035 systemd-tmpfiles[1615]: ACLs are not supported, ignoring.
Aug 5 21:54:46.754590 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 21:54:46.786494 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 21:54:46.807680 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 21:54:46.845825 udevadm[1628]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 5 21:54:46.851673 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 21:54:46.867464 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 21:54:46.872380 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 21:54:46.882652 kernel: loop1: detected capacity change from 0 to 113672
Aug 5 21:54:46.992546 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 21:54:47.009816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 21:54:47.035366 kernel: loop2: detected capacity change from 0 to 51896
Aug 5 21:54:47.086258 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
Aug 5 21:54:47.086299 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
Aug 5 21:54:47.111424 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 21:54:47.175100 kernel: loop3: detected capacity change from 0 to 59688
Aug 5 21:54:47.322019 kernel: loop4: detected capacity change from 0 to 193208
Aug 5 21:54:47.351218 kernel: loop5: detected capacity change from 0 to 113672
Aug 5 21:54:47.368368 kernel: loop6: detected capacity change from 0 to 51896
Aug 5 21:54:47.398145 kernel: loop7: detected capacity change from 0 to 59688
Aug 5 21:54:47.411960 (sd-merge)[1647]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Aug 5 21:54:47.419292 (sd-merge)[1647]: Merged extensions into '/usr'.
Aug 5 21:54:47.432062 systemd[1]: Reloading requested from client PID 1614 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 21:54:47.432937 systemd[1]: Reloading...
Aug 5 21:54:47.713396 zram_generator::config[1668]: No configuration found.
Aug 5 21:54:48.142219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:54:48.290691 systemd[1]: Reloading finished in 854 ms.
Aug 5 21:54:48.346318 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 21:54:48.350366 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 21:54:48.377841 systemd[1]: Starting ensure-sysext.service...
Aug 5 21:54:48.391776 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 21:54:48.404962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 21:54:48.462891 systemd[1]: Reloading requested from client PID 1723 ('systemctl') (unit ensure-sysext.service)...
Aug 5 21:54:48.466312 systemd[1]: Reloading...
Aug 5 21:54:48.498779 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 21:54:48.503781 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 21:54:48.506938 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 21:54:48.507693 systemd-tmpfiles[1724]: ACLs are not supported, ignoring.
Aug 5 21:54:48.507882 systemd-tmpfiles[1724]: ACLs are not supported, ignoring.
Aug 5 21:54:48.521803 systemd-tmpfiles[1724]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 21:54:48.521841 systemd-tmpfiles[1724]: Skipping /boot
Aug 5 21:54:48.534756 ldconfig[1609]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 21:54:48.554882 systemd-tmpfiles[1724]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 21:54:48.554925 systemd-tmpfiles[1724]: Skipping /boot
Aug 5 21:54:48.597624 systemd-udevd[1725]: Using default interface naming scheme 'v255'.
Aug 5 21:54:48.811239 zram_generator::config[1775]: No configuration found.
Aug 5 21:54:48.959690 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1759)
Aug 5 21:54:48.972219 (udev-worker)[1761]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 21:54:49.269264 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1774)
Aug 5 21:54:49.315572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:54:49.498384 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 5 21:54:49.498951 systemd[1]: Reloading finished in 1031 ms.
Aug 5 21:54:49.533614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 21:54:49.538928 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 21:54:49.545284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 21:54:49.656099 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 21:54:49.688785 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 21:54:49.691251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:54:49.697415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 21:54:49.710392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 21:54:49.725420 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 21:54:49.728763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:54:49.739342 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 21:54:49.785337 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 21:54:49.801817 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 21:54:49.813001 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 21:54:49.893335 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 21:54:49.976154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 5 21:54:49.985634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 21:54:49.986308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 21:54:49.991921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 21:54:49.992857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 21:54:49.999043 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 21:54:50.001073 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 21:54:50.006421 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 21:54:50.048673 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 21:54:50.052639 systemd[1]: Finished ensure-sysext.service.
Aug 5 21:54:50.056907 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 21:54:50.067852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 21:54:50.074206 augenrules[1950]: No rules
Aug 5 21:54:50.079041 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 21:54:50.090852 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 21:54:50.093411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 21:54:50.096678 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 21:54:50.101510 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 21:54:50.101686 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 21:54:50.101773 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 21:54:50.116258 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 21:54:50.125623 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 21:54:50.135803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 21:54:50.138097 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 21:54:50.139638 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 21:54:50.147193 lvm[1951]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 21:54:50.166149 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 21:54:50.168598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 21:54:50.205395 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 21:54:50.219421 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 21:54:50.227111 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 21:54:50.233596 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 21:54:50.245698 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 21:54:50.275596 lvm[1968]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 21:54:50.322337 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 21:54:50.333648 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 21:54:50.486931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 21:54:50.496246 systemd-networkd[1930]: lo: Link UP
Aug 5 21:54:50.496877 systemd-networkd[1930]: lo: Gained carrier
Aug 5 21:54:50.501269 systemd-networkd[1930]: Enumeration completed
Aug 5 21:54:50.501772 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 21:54:50.504205 systemd-networkd[1930]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:54:50.504472 systemd-networkd[1930]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 21:54:50.508056 systemd-networkd[1930]: eth0: Link UP
Aug 5 21:54:50.508887 systemd-networkd[1930]: eth0: Gained carrier
Aug 5 21:54:50.509089 systemd-networkd[1930]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 21:54:50.515611 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 21:54:50.522452 systemd-networkd[1930]: eth0: DHCPv4 address 172.31.26.69/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 5 21:54:50.567489 systemd-resolved[1932]: Positive Trust Anchors:
Aug 5 21:54:50.567531 systemd-resolved[1932]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 21:54:50.567594 systemd-resolved[1932]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 21:54:50.577828 systemd-resolved[1932]: Defaulting to hostname 'linux'.
Aug 5 21:54:50.581977 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 21:54:50.584423 systemd[1]: Reached target network.target - Network.
Aug 5 21:54:50.586249 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 21:54:50.588539 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 21:54:50.590879 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 21:54:50.593349 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 21:54:50.596375 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 21:54:50.599655 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 21:54:50.602414 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 21:54:50.605106 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 21:54:50.605308 systemd[1]: Reached target paths.target - Path Units.
Aug 5 21:54:50.607344 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 21:54:50.611850 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 21:54:50.618280 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 21:54:50.630293 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 21:54:50.634499 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 21:54:50.637580 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 21:54:50.639665 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:54:50.642071 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 21:54:50.642138 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 21:54:50.650506 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 21:54:50.669676 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 5 21:54:50.677529 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 21:54:50.685086 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 21:54:50.695986 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 21:54:50.700297 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 21:54:50.712995 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 21:54:50.723774 systemd[1]: Started ntpd.service - Network Time Service.
Aug 5 21:54:50.732776 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 21:54:50.743680 systemd[1]: Starting setup-oem.service - Setup OEM...
Aug 5 21:54:50.752637 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 21:54:50.763938 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 21:54:50.787827 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 21:54:50.793003 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 21:54:50.797492 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 21:54:50.801065 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 21:54:50.834756 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 21:54:50.868857 jq[1987]: false
Aug 5 21:54:50.879177 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 21:54:50.879629 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 21:54:50.891470 jq[1997]: true
Aug 5 21:54:50.947775 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 21:54:50.945891 dbus-daemon[1986]: [system] SELinux support is enabled
Aug 5 21:54:50.960846 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 21:54:50.982558 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1930 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 5 21:54:50.960929 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 21:54:50.966493 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 21:54:50.966552 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 21:54:51.013921 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 5 21:54:51.023111 update_engine[1996]: I0805 21:54:51.013729 1996 main.cc:92] Flatcar Update Engine starting
Aug 5 21:54:51.044218 update_engine[1996]: I0805 21:54:51.033596 1996 update_check_scheduler.cc:74] Next update check in 7m9s
Aug 5 21:54:51.036947 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 21:54:51.051716 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 21:54:51.065751 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 21:54:51.078464 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 21:54:51.125661 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:32 UTC 2024 (1): Starting
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:32 UTC 2024 (1): Starting
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: ----------------------------------------------------
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: ntp-4 is maintained by Network Time Foundation,
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: corporation. Support and training for ntp-4 are
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: available at https://www.nwtime.org/support
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: ----------------------------------------------------
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: proto: precision = 0.096 usec (-23)
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: basedate set to 2024-07-24
Aug 5 21:54:51.131520 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: gps base set to 2024-07-28 (week 2325)
Aug 5 21:54:51.125769 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 21:54:51.125804 ntpd[1990]: ----------------------------------------------------
Aug 5 21:54:51.125828 ntpd[1990]: ntp-4 is maintained by Network Time Foundation,
Aug 5 21:54:51.148002 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: Listen normally on 3 eth0 172.31.26.69:123
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: Listen normally on 4 lo [::1]:123
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: bind(21) AF_INET6 fe80::45a:24ff:fe2b:1805%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: unable to create socket on eth0 (5) for fe80::45a:24ff:fe2b:1805%2#123
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: failed to init interface for address fe80::45a:24ff:fe2b:1805%2
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: Listening on routing socket on fd #21 for interface updates
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 21:54:51.161081 ntpd[1990]: 5 Aug 21:54:51 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 21:54:51.125853 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 21:54:51.170078 jq[2006]: true
Aug 5 21:54:51.125877 ntpd[1990]: corporation. Support and training for ntp-4 are
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found loop4
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found loop5
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found loop6
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found loop7
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found nvme0n1
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found nvme0n1p1
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found nvme0n1p2
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found nvme0n1p3
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found usr
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found nvme0n1p4
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found nvme0n1p6
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found nvme0n1p7
Aug 5 21:54:51.197996 extend-filesystems[1988]: Found nvme0n1p9
Aug 5 21:54:51.186945 systemd[1]: Finished setup-oem.service - Setup OEM.
Aug 5 21:54:51.125899 ntpd[1990]: available at https://www.nwtime.org/support
Aug 5 21:54:51.320415 tar[2008]: linux-arm64/helm
Aug 5 21:54:51.349852 extend-filesystems[1988]: Checking size of /dev/nvme0n1p9
Aug 5 21:54:51.219211 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 21:54:51.125918 ntpd[1990]: ----------------------------------------------------
Aug 5 21:54:51.222362 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 21:54:51.128621 ntpd[1990]: proto: precision = 0.096 usec (-23)
Aug 5 21:54:51.131422 ntpd[1990]: basedate set to 2024-07-24
Aug 5 21:54:51.131459 ntpd[1990]: gps base set to 2024-07-28 (week 2325)
Aug 5 21:54:51.137483 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 21:54:51.137598 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 21:54:51.137928 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 21:54:51.137993 ntpd[1990]: Listen normally on 3 eth0 172.31.26.69:123
Aug 5 21:54:51.138061 ntpd[1990]: Listen normally on 4 lo [::1]:123
Aug 5 21:54:51.138140 ntpd[1990]: bind(21) AF_INET6 fe80::45a:24ff:fe2b:1805%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 21:54:51.138230 ntpd[1990]: unable to create socket on eth0 (5) for fe80::45a:24ff:fe2b:1805%2#123
Aug 5 21:54:51.138261 ntpd[1990]: failed to init interface for address fe80::45a:24ff:fe2b:1805%2
Aug 5 21:54:51.138316 ntpd[1990]: Listening on routing socket on fd #21 for interface updates
Aug 5 21:54:51.141032 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 21:54:51.141108 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 21:54:51.400416 extend-filesystems[1988]: Resized partition /dev/nvme0n1p9
Aug 5 21:54:51.417511 extend-filesystems[2047]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 21:54:51.437411 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Aug 5 21:54:51.539330 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 21:54:51.556579 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Aug 5 21:54:51.596398 coreos-metadata[1985]: Aug 05 21:54:51.596 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 5 21:54:51.598610 extend-filesystems[2047]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Aug 5 21:54:51.598610 extend-filesystems[2047]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 21:54:51.598610 extend-filesystems[2047]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Aug 5 21:54:51.614448 extend-filesystems[1988]: Resized filesystem in /dev/nvme0n1p9
Aug 5 21:54:51.619059 coreos-metadata[1985]: Aug 05 21:54:51.609 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Aug 5 21:54:51.620328 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 21:54:51.621118 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 21:54:51.621288 coreos-metadata[1985]: Aug 05 21:54:51.621 INFO Fetch successful
Aug 5 21:54:51.621434 coreos-metadata[1985]: Aug 05 21:54:51.621 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Aug 5 21:54:51.630374 coreos-metadata[1985]: Aug 05 21:54:51.630 INFO Fetch successful
Aug 5 21:54:51.631024 coreos-metadata[1985]: Aug 05 21:54:51.630 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Aug 5 21:54:51.634906 locksmithd[2020]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 21:54:51.641718 coreos-metadata[1985]: Aug 05 21:54:51.637 INFO Fetch successful
Aug 5 21:54:51.641718 coreos-metadata[1985]: Aug 05 21:54:51.637 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Aug 5 21:54:51.643264 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1759)
Aug 5 21:54:51.647787 coreos-metadata[1985]: Aug 05 21:54:51.647 INFO Fetch successful
Aug 5 21:54:51.647787 coreos-metadata[1985]: Aug 05 21:54:51.647 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Aug 5 21:54:51.649101 coreos-metadata[1985]: Aug 05 21:54:51.648 INFO Fetch failed with 404: resource not found
Aug 5 21:54:51.649101 coreos-metadata[1985]: Aug 05 21:54:51.648 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Aug 5 21:54:51.650858 coreos-metadata[1985]: Aug 05 21:54:51.650 INFO Fetch successful
Aug 5 21:54:51.650858 coreos-metadata[1985]: Aug 05 21:54:51.650 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Aug 5 21:54:51.652362 coreos-metadata[1985]: Aug 05 21:54:51.652 INFO Fetch successful
Aug 5 21:54:51.652811 coreos-metadata[1985]: Aug 05 21:54:51.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Aug 5 21:54:51.661145 coreos-metadata[1985]: Aug 05 21:54:51.660 INFO Fetch successful
Aug 5 21:54:51.661145 coreos-metadata[1985]: Aug 05 21:54:51.661 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Aug 5 21:54:51.665248 coreos-metadata[1985]: Aug 05 21:54:51.662 INFO Fetch successful
Aug 5 21:54:51.665248 coreos-metadata[1985]: Aug 05 21:54:51.662 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Aug 5 21:54:51.671215 coreos-metadata[1985]: Aug 05 21:54:51.669 INFO Fetch successful
Aug 5 21:54:51.704405 bash[2086]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 21:54:51.705521 systemd-logind[1995]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 5 21:54:51.706233 systemd-logind[1995]: Watching system buttons on /dev/input/event1 (Sleep Button)
Aug 5 21:54:51.709761 systemd-logind[1995]: New seat seat0.
Aug 5 21:54:51.712785 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 21:54:51.721420 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 21:54:51.745696 systemd[1]: Starting sshkeys.service...
Aug 5 21:54:51.978384 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 5 21:54:52.014406 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 5 21:54:52.025412 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 5 21:54:52.037438 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 21:54:52.117862 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 5 21:54:52.119414 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 5 21:54:52.130350 ntpd[1990]: bind(24) AF_INET6 fe80::45a:24ff:fe2b:1805%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 21:54:52.133714 ntpd[1990]: 5 Aug 21:54:52 ntpd[1990]: bind(24) AF_INET6 fe80::45a:24ff:fe2b:1805%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 21:54:52.133714 ntpd[1990]: 5 Aug 21:54:52 ntpd[1990]: unable to create socket on eth0 (6) for fe80::45a:24ff:fe2b:1805%2#123
Aug 5 21:54:52.133714 ntpd[1990]: 5 Aug 21:54:52 ntpd[1990]: failed to init interface for address fe80::45a:24ff:fe2b:1805%2
Aug 5 21:54:52.130469 ntpd[1990]: unable to create socket on eth0 (6) for fe80::45a:24ff:fe2b:1805%2#123
Aug 5 21:54:52.130510 ntpd[1990]: failed to init interface for address fe80::45a:24ff:fe2b:1805%2
Aug 5 21:54:52.135551 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2013 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 5 21:54:52.152356 containerd[2015]: time="2024-08-05T21:54:52.147362640Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Aug 5 21:54:52.168902 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 5 21:54:52.273855 polkitd[2138]: Started polkitd version 121
Aug 5 21:54:52.295885 systemd-networkd[1930]: eth0: Gained IPv6LL
Aug 5 21:54:52.320416 containerd[2015]: time="2024-08-05T21:54:52.309036205Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 21:54:52.320416 containerd[2015]: time="2024-08-05T21:54:52.309192349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:54:52.316884 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 21:54:52.324927 containerd[2015]: time="2024-08-05T21:54:52.323659081Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:54:52.324927 containerd[2015]: time="2024-08-05T21:54:52.323764981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:54:52.324927 containerd[2015]: time="2024-08-05T21:54:52.324480397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:54:52.324927 containerd[2015]: time="2024-08-05T21:54:52.324547393Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 21:54:52.324927 containerd[2015]: time="2024-08-05T21:54:52.324812449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 21:54:52.325276 containerd[2015]: time="2024-08-05T21:54:52.324936241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:54:52.325276 containerd[2015]: time="2024-08-05T21:54:52.324965857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 21:54:52.325276 containerd[2015]: time="2024-08-05T21:54:52.325125757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:54:52.325790 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 21:54:52.337419 containerd[2015]: time="2024-08-05T21:54:52.332088481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 21:54:52.337419 containerd[2015]: time="2024-08-05T21:54:52.332217973Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 21:54:52.337419 containerd[2015]: time="2024-08-05T21:54:52.332247037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 21:54:52.337419 containerd[2015]: time="2024-08-05T21:54:52.334093117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 21:54:52.337419 containerd[2015]: time="2024-08-05T21:54:52.334199173Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 21:54:52.337419 containerd[2015]: time="2024-08-05T21:54:52.334462717Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 21:54:52.337419 containerd[2015]: time="2024-08-05T21:54:52.334523917Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 21:54:52.347588 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.352839433Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.352960957Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353004781Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353107609Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353191861Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353230081Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353275645Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353622781Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353666185Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353698381Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353732317Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353768725Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353810605Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 21:54:52.359002 containerd[2015]: time="2024-08-05T21:54:52.353841109Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.353874469Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.353907265Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.353944105Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.353976289Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.354004393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.354318685Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.354796069Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.354856309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.354900721Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 21:54:52.359979 containerd[2015]: time="2024-08-05T21:54:52.354973237Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.361733221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.361865641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.361927501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.361988425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.362144749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.362281813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.362495893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.362540977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.362588245Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.363028189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.366882 containerd[2015]: time="2024-08-05T21:54:52.363086617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.367740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:54:52.380940 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 21:54:52.404355 containerd[2015]: time="2024-08-05T21:54:52.394241377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.404355 containerd[2015]: time="2024-08-05T21:54:52.402463981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.404355 containerd[2015]: time="2024-08-05T21:54:52.402807505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.404355 containerd[2015]: time="2024-08-05T21:54:52.403010581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.404355 containerd[2015]: time="2024-08-05T21:54:52.403382593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.405062 polkitd[2138]: Loading rules from directory /etc/polkit-1/rules.d
Aug 5 21:54:52.410906 polkitd[2138]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 5 21:54:52.412562 containerd[2015]: time="2024-08-05T21:54:52.403593061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 21:54:52.423327 polkitd[2138]: Finished loading, compiling and executing 2 rules
Aug 5 21:54:52.430353 containerd[2015]: time="2024-08-05T21:54:52.428027654Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 21:54:52.430353 containerd[2015]: time="2024-08-05T21:54:52.428404766Z" level=info msg="Connect containerd service"
Aug 5 21:54:52.430353 containerd[2015]: time="2024-08-05T21:54:52.428551862Z" level=info msg="using legacy CRI server"
Aug 5 21:54:52.430353 containerd[2015]: time="2024-08-05T21:54:52.428582318Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 21:54:52.430353 containerd[2015]: time="2024-08-05T21:54:52.428956874Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 21:54:52.434880 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 5 21:54:52.435451 systemd[1]: Started polkit.service - Authorization Manager.
Aug 5 21:54:52.444319 polkitd[2138]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 5 21:54:52.471273 containerd[2015]: time="2024-08-05T21:54:52.470207954Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:54:52.477388 containerd[2015]: time="2024-08-05T21:54:52.475860134Z" level=info msg="Start subscribing containerd event" Aug 5 21:54:52.477388 containerd[2015]: time="2024-08-05T21:54:52.475985198Z" level=info msg="Start recovering state" Aug 5 21:54:52.477388 containerd[2015]: time="2024-08-05T21:54:52.476206154Z" level=info msg="Start event monitor" Aug 5 21:54:52.477388 containerd[2015]: time="2024-08-05T21:54:52.476249774Z" level=info msg="Start snapshots syncer" Aug 5 21:54:52.477388 containerd[2015]: time="2024-08-05T21:54:52.476279918Z" level=info msg="Start cni network conf syncer for default" Aug 5 21:54:52.477388 containerd[2015]: time="2024-08-05T21:54:52.476303666Z" level=info msg="Start streaming server" Aug 5 21:54:52.485251 containerd[2015]: time="2024-08-05T21:54:52.484194590Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 21:54:52.485251 containerd[2015]: time="2024-08-05T21:54:52.484329146Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 21:54:52.485251 containerd[2015]: time="2024-08-05T21:54:52.484386338Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 21:54:52.485251 containerd[2015]: time="2024-08-05T21:54:52.484423742Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 21:54:52.498335 containerd[2015]: time="2024-08-05T21:54:52.496423718Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 21:54:52.498335 containerd[2015]: time="2024-08-05T21:54:52.496757486Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 21:54:52.505305 containerd[2015]: time="2024-08-05T21:54:52.504699998Z" level=info msg="containerd successfully booted in 0.364780s" Aug 5 21:54:52.507840 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 21:54:52.617586 systemd-hostnamed[2013]: Hostname set to (transient) Aug 5 21:54:52.619286 systemd-resolved[1932]: System hostname changed to 'ip-172-31-26-69'. Aug 5 21:54:52.738319 coreos-metadata[2118]: Aug 05 21:54:52.738 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 5 21:54:52.749093 coreos-metadata[2118]: Aug 05 21:54:52.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 5 21:54:52.749093 coreos-metadata[2118]: Aug 05 21:54:52.748 INFO Fetch successful Aug 5 21:54:52.749093 coreos-metadata[2118]: Aug 05 21:54:52.748 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 5 21:54:52.750058 coreos-metadata[2118]: Aug 05 21:54:52.749 INFO Fetch successful Aug 5 21:54:52.759064 unknown[2118]: wrote ssh authorized keys file for user: core Aug 5 21:54:52.761310 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 21:54:52.857205 amazon-ssm-agent[2168]: Initializing new seelog logger Aug 5 21:54:52.857205 amazon-ssm-agent[2168]: New Seelog Logger Creation Complete Aug 5 21:54:52.857205 amazon-ssm-agent[2168]: 2024/08/05 21:54:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:54:52.857205 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Aug 5 21:54:52.862212 amazon-ssm-agent[2168]: 2024/08/05 21:54:52 processing appconfig overrides Aug 5 21:54:52.862212 amazon-ssm-agent[2168]: 2024/08/05 21:54:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:54:52.862212 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:54:52.862212 amazon-ssm-agent[2168]: 2024/08/05 21:54:52 processing appconfig overrides Aug 5 21:54:52.862212 amazon-ssm-agent[2168]: 2024-08-05 21:54:52 INFO Proxy environment variables: Aug 5 21:54:52.862212 amazon-ssm-agent[2168]: 2024/08/05 21:54:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:54:52.862212 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:54:52.862212 amazon-ssm-agent[2168]: 2024/08/05 21:54:52 processing appconfig overrides Aug 5 21:54:52.890389 update-ssh-keys[2202]: Updated "/home/core/.ssh/authorized_keys" Aug 5 21:54:52.892140 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 5 21:54:52.907664 amazon-ssm-agent[2168]: 2024/08/05 21:54:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:54:52.907664 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:54:52.907975 amazon-ssm-agent[2168]: 2024/08/05 21:54:52 processing appconfig overrides Aug 5 21:54:52.908667 systemd[1]: Finished sshkeys.service. Aug 5 21:54:52.962653 sshd_keygen[2032]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 21:54:52.990030 amazon-ssm-agent[2168]: 2024-08-05 21:54:52 INFO https_proxy: Aug 5 21:54:53.089224 amazon-ssm-agent[2168]: 2024-08-05 21:54:52 INFO http_proxy: Aug 5 21:54:53.105622 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 21:54:53.128566 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Aug 5 21:54:53.143841 systemd[1]: Started sshd@0-172.31.26.69:22-147.75.109.163:38366.service - OpenSSH per-connection server daemon (147.75.109.163:38366). Aug 5 21:54:53.203249 amazon-ssm-agent[2168]: 2024-08-05 21:54:52 INFO no_proxy: Aug 5 21:54:53.220330 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 21:54:53.221558 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 21:54:53.243533 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 21:54:53.300213 amazon-ssm-agent[2168]: 2024-08-05 21:54:52 INFO Checking if agent identity type OnPrem can be assumed Aug 5 21:54:53.349689 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 21:54:53.366034 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 21:54:53.379396 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 21:54:53.382805 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 21:54:53.399361 amazon-ssm-agent[2168]: 2024-08-05 21:54:52 INFO Checking if agent identity type EC2 can be assumed Aug 5 21:54:53.498543 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO Agent will take identity from EC2 Aug 5 21:54:53.542214 sshd[2217]: Accepted publickey for core from 147.75.109.163 port 38366 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:54:53.552850 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:54:53.585899 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 21:54:53.598719 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 21:54:53.603449 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 21:54:53.610233 systemd-logind[1995]: New session 1 of user core. Aug 5 21:54:53.643748 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Aug 5 21:54:53.662084 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 21:54:53.693863 (systemd)[2229]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:54:53.705315 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 21:54:53.802643 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 21:54:53.903215 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 5 21:54:53.977403 tar[2008]: linux-arm64/LICENSE Aug 5 21:54:53.977403 tar[2008]: linux-arm64/README.md Aug 5 21:54:54.007198 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 5 21:54:54.064754 systemd[2229]: Queued start job for default target default.target. Aug 5 21:54:54.072900 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 21:54:54.078240 systemd[2229]: Created slice app.slice - User Application Slice. Aug 5 21:54:54.078340 systemd[2229]: Reached target paths.target - Paths. Aug 5 21:54:54.078378 systemd[2229]: Reached target timers.target - Timers. Aug 5 21:54:54.088503 systemd[2229]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 21:54:54.106786 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO [amazon-ssm-agent] Starting Core Agent Aug 5 21:54:54.132699 systemd[2229]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 21:54:54.133318 systemd[2229]: Reached target sockets.target - Sockets. Aug 5 21:54:54.133638 systemd[2229]: Reached target basic.target - Basic System. Aug 5 21:54:54.134085 systemd[2229]: Reached target default.target - Main User Target. Aug 5 21:54:54.134243 systemd[2229]: Startup finished in 411ms. Aug 5 21:54:54.134609 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 21:54:54.145479 systemd[1]: Started session-1.scope - Session 1 of User core. 
Aug 5 21:54:54.206882 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 5 21:54:54.308011 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO [Registrar] Starting registrar module Aug 5 21:54:54.332226 amazon-ssm-agent[2168]: 2024-08-05 21:54:53 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 5 21:54:54.329050 systemd[1]: Started sshd@1-172.31.26.69:22-147.75.109.163:60548.service - OpenSSH per-connection server daemon (147.75.109.163:60548). Aug 5 21:54:54.339244 amazon-ssm-agent[2168]: 2024-08-05 21:54:54 INFO [EC2Identity] EC2 registration was successful. Aug 5 21:54:54.339244 amazon-ssm-agent[2168]: 2024-08-05 21:54:54 INFO [CredentialRefresher] credentialRefresher has started Aug 5 21:54:54.339244 amazon-ssm-agent[2168]: 2024-08-05 21:54:54 INFO [CredentialRefresher] Starting credentials refresher loop Aug 5 21:54:54.339244 amazon-ssm-agent[2168]: 2024-08-05 21:54:54 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 5 21:54:54.409070 amazon-ssm-agent[2168]: 2024-08-05 21:54:54 INFO [CredentialRefresher] Next credential rotation will be in 31.8998110191 minutes Aug 5 21:54:54.518216 sshd[2244]: Accepted publickey for core from 147.75.109.163 port 60548 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:54:54.522888 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:54:54.539343 systemd-logind[1995]: New session 2 of user core. Aug 5 21:54:54.543834 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 21:54:54.629649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:54:54.633358 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 21:54:54.639709 systemd[1]: Startup finished in 1.429s (kernel) + 10.364s (initrd) + 10.485s (userspace) = 22.279s. 
Aug 5 21:54:54.647790 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:54:54.700425 sshd[2244]: pam_unix(sshd:session): session closed for user core Aug 5 21:54:54.713679 systemd[1]: sshd@1-172.31.26.69:22-147.75.109.163:60548.service: Deactivated successfully. Aug 5 21:54:54.721839 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 21:54:54.746887 systemd-logind[1995]: Session 2 logged out. Waiting for processes to exit. Aug 5 21:54:54.754961 systemd[1]: Started sshd@2-172.31.26.69:22-147.75.109.163:60554.service - OpenSSH per-connection server daemon (147.75.109.163:60554). Aug 5 21:54:54.761733 systemd-logind[1995]: Removed session 2. Aug 5 21:54:54.969298 sshd[2262]: Accepted publickey for core from 147.75.109.163 port 60554 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:54:54.972941 sshd[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:54:54.995345 systemd-logind[1995]: New session 3 of user core. Aug 5 21:54:55.003739 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 21:54:55.126862 ntpd[1990]: Listen normally on 7 eth0 [fe80::45a:24ff:fe2b:1805%2]:123 Aug 5 21:54:55.127948 ntpd[1990]: 5 Aug 21:54:55 ntpd[1990]: Listen normally on 7 eth0 [fe80::45a:24ff:fe2b:1805%2]:123 Aug 5 21:54:55.143185 sshd[2262]: pam_unix(sshd:session): session closed for user core Aug 5 21:54:55.151749 systemd[1]: sshd@2-172.31.26.69:22-147.75.109.163:60554.service: Deactivated successfully. Aug 5 21:54:55.158665 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 21:54:55.163485 systemd-logind[1995]: Session 3 logged out. Waiting for processes to exit. Aug 5 21:54:55.186633 systemd[1]: Started sshd@3-172.31.26.69:22-147.75.109.163:60570.service - OpenSSH per-connection server daemon (147.75.109.163:60570). Aug 5 21:54:55.189847 systemd-logind[1995]: Removed session 3. 
Aug 5 21:54:55.375779 sshd[2273]: Accepted publickey for core from 147.75.109.163 port 60570 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:54:55.380933 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:54:55.399504 amazon-ssm-agent[2168]: 2024-08-05 21:54:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 5 21:54:55.409417 systemd-logind[1995]: New session 4 of user core. Aug 5 21:54:55.415744 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 21:54:55.502486 amazon-ssm-agent[2168]: 2024-08-05 21:54:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2277) started Aug 5 21:54:55.574101 sshd[2273]: pam_unix(sshd:session): session closed for user core Aug 5 21:54:55.590674 systemd[1]: sshd@3-172.31.26.69:22-147.75.109.163:60570.service: Deactivated successfully. Aug 5 21:54:55.603695 amazon-ssm-agent[2168]: 2024-08-05 21:54:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 5 21:54:55.605711 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 21:54:55.612306 systemd-logind[1995]: Session 4 logged out. Waiting for processes to exit. Aug 5 21:54:55.650260 systemd[1]: Started sshd@4-172.31.26.69:22-147.75.109.163:60572.service - OpenSSH per-connection server daemon (147.75.109.163:60572). Aug 5 21:54:55.653456 systemd-logind[1995]: Removed session 4. 
Aug 5 21:54:55.713070 kubelet[2253]: E0805 21:54:55.712858 2253 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:54:55.726380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:54:55.729857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:54:55.732770 systemd[1]: kubelet.service: Consumed 1.536s CPU time. Aug 5 21:54:55.889558 sshd[2289]: Accepted publickey for core from 147.75.109.163 port 60572 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:54:55.893588 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:54:55.902961 systemd-logind[1995]: New session 5 of user core. Aug 5 21:54:55.914715 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 21:54:56.071536 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 21:54:56.072094 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:54:56.094850 sudo[2296]: pam_unix(sudo:session): session closed for user root Aug 5 21:54:56.120639 sshd[2289]: pam_unix(sshd:session): session closed for user core Aug 5 21:54:56.131653 systemd[1]: sshd@4-172.31.26.69:22-147.75.109.163:60572.service: Deactivated successfully. Aug 5 21:54:56.131727 systemd-logind[1995]: Session 5 logged out. Waiting for processes to exit. Aug 5 21:54:56.138737 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 21:54:56.161481 systemd-logind[1995]: Removed session 5. Aug 5 21:54:56.170869 systemd[1]: Started sshd@5-172.31.26.69:22-147.75.109.163:60580.service - OpenSSH per-connection server daemon (147.75.109.163:60580). 
Aug 5 21:54:56.361038 sshd[2301]: Accepted publickey for core from 147.75.109.163 port 60580 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:54:56.365374 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:54:56.380973 systemd-logind[1995]: New session 6 of user core. Aug 5 21:54:56.394096 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 21:54:56.510057 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 21:54:56.511020 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:54:56.518094 sudo[2305]: pam_unix(sudo:session): session closed for user root Aug 5 21:54:56.530272 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 21:54:56.530908 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:54:56.556898 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 21:54:56.581764 auditctl[2308]: No rules Aug 5 21:54:56.585301 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 21:54:56.587296 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 21:54:56.596055 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 21:54:56.686007 augenrules[2326]: No rules Aug 5 21:54:56.689759 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 21:54:56.693353 sudo[2304]: pam_unix(sudo:session): session closed for user root Aug 5 21:54:56.718813 sshd[2301]: pam_unix(sshd:session): session closed for user core Aug 5 21:54:56.726561 systemd[1]: sshd@5-172.31.26.69:22-147.75.109.163:60580.service: Deactivated successfully. Aug 5 21:54:56.731906 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 21:54:56.736534 systemd-logind[1995]: Session 6 logged out. 
Waiting for processes to exit. Aug 5 21:54:56.756074 systemd-logind[1995]: Removed session 6. Aug 5 21:54:56.764989 systemd[1]: Started sshd@6-172.31.26.69:22-147.75.109.163:60594.service - OpenSSH per-connection server daemon (147.75.109.163:60594). Aug 5 21:54:56.963248 sshd[2334]: Accepted publickey for core from 147.75.109.163 port 60594 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:54:56.965868 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:54:56.994421 systemd-logind[1995]: New session 7 of user core. Aug 5 21:54:57.002610 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 21:54:57.125403 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 21:54:57.127531 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:54:57.392945 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 21:54:57.413237 (dockerd)[2346]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 21:54:57.916624 dockerd[2346]: time="2024-08-05T21:54:57.916489917Z" level=info msg="Starting up" Aug 5 21:54:58.426721 systemd-resolved[1932]: Clock change detected. Flushing caches. Aug 5 21:54:58.863810 dockerd[2346]: time="2024-08-05T21:54:58.863720766Z" level=info msg="Loading containers: start." Aug 5 21:54:59.082258 kernel: Initializing XFRM netlink socket Aug 5 21:54:59.185385 (udev-worker)[2360]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:54:59.311230 systemd-networkd[1930]: docker0: Link UP Aug 5 21:54:59.344745 dockerd[2346]: time="2024-08-05T21:54:59.344676364Z" level=info msg="Loading containers: done." 
Aug 5 21:54:59.528731 dockerd[2346]: time="2024-08-05T21:54:59.528648053Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 21:54:59.530627 dockerd[2346]: time="2024-08-05T21:54:59.529573529Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 21:54:59.530627 dockerd[2346]: time="2024-08-05T21:54:59.530059901Z" level=info msg="Daemon has completed initialization" Aug 5 21:54:59.619385 dockerd[2346]: time="2024-08-05T21:54:59.619246301Z" level=info msg="API listen on /run/docker.sock" Aug 5 21:54:59.630744 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 21:55:01.268842 containerd[2015]: time="2024-08-05T21:55:01.268769778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 5 21:55:02.079433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471390115.mount: Deactivated successfully. 
Aug 5 21:55:04.984787 containerd[2015]: time="2024-08-05T21:55:04.984608832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:55:04.987567 containerd[2015]: time="2024-08-05T21:55:04.987422748Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=31601516" Aug 5 21:55:04.989899 containerd[2015]: time="2024-08-05T21:55:04.989765352Z" level=info msg="ImageCreate event name:\"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:55:04.996723 containerd[2015]: time="2024-08-05T21:55:04.996574140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:55:05.001553 containerd[2015]: time="2024-08-05T21:55:05.000561284Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"31598316\" in 3.731710482s" Aug 5 21:55:05.001553 containerd[2015]: time="2024-08-05T21:55:05.000648536Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\"" Aug 5 21:55:05.061573 containerd[2015]: time="2024-08-05T21:55:05.061189208Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 5 21:55:06.195934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 5 21:55:06.208101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:55:08.240301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:55:08.260151 (kubelet)[2552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:55:08.424359 kubelet[2552]: E0805 21:55:08.424275 2552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:55:08.433390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:55:08.433819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:55:09.123313 containerd[2015]: time="2024-08-05T21:55:09.123158737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:55:09.127850 containerd[2015]: time="2024-08-05T21:55:09.127684549Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=29018270" Aug 5 21:55:09.130173 containerd[2015]: time="2024-08-05T21:55:09.129881557Z" level=info msg="ImageCreate event name:\"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:55:09.137892 containerd[2015]: time="2024-08-05T21:55:09.137652853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:55:09.141438 containerd[2015]: time="2024-08-05T21:55:09.141024457Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"30505537\" in 4.079708325s" Aug 5 21:55:09.141438 containerd[2015]: time="2024-08-05T21:55:09.141154477Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\"" Aug 5 21:55:09.215589 containerd[2015]: time="2024-08-05T21:55:09.215403901Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 5 21:55:11.583561 containerd[2015]: time="2024-08-05T21:55:11.583279481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:55:11.585816 containerd[2015]: time="2024-08-05T21:55:11.585737333Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=15534520" Aug 5 21:55:11.587242 containerd[2015]: time="2024-08-05T21:55:11.587145905Z" level=info msg="ImageCreate event name:\"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:55:11.593276 containerd[2015]: time="2024-08-05T21:55:11.593138909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:55:11.596865 containerd[2015]: time="2024-08-05T21:55:11.596089157Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id 
\"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"17021805\" in 2.380170648s"
Aug 5 21:55:11.596865 containerd[2015]: time="2024-08-05T21:55:11.596172605Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\""
Aug 5 21:55:11.640029 containerd[2015]: time="2024-08-05T21:55:11.639965345Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\""
Aug 5 21:55:13.414010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426629025.mount: Deactivated successfully.
Aug 5 21:55:14.484115 containerd[2015]: time="2024-08-05T21:55:14.483974719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:14.487401 containerd[2015]: time="2024-08-05T21:55:14.486746059Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=24977919"
Aug 5 21:55:14.489846 containerd[2015]: time="2024-08-05T21:55:14.489647995Z" level=info msg="ImageCreate event name:\"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:14.496604 containerd[2015]: time="2024-08-05T21:55:14.496405327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:14.499507 containerd[2015]: time="2024-08-05T21:55:14.498724627Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"24976938\" in 2.858684246s"
Aug 5 21:55:14.499507 containerd[2015]: time="2024-08-05T21:55:14.498836551Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\""
Aug 5 21:55:14.555819 containerd[2015]: time="2024-08-05T21:55:14.555705668Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 21:55:15.133144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503969823.mount: Deactivated successfully.
Aug 5 21:55:15.145603 containerd[2015]: time="2024-08-05T21:55:15.145513867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:15.147233 containerd[2015]: time="2024-08-05T21:55:15.147183715Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Aug 5 21:55:15.148870 containerd[2015]: time="2024-08-05T21:55:15.148759855Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:15.155631 containerd[2015]: time="2024-08-05T21:55:15.155560675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:15.157837 containerd[2015]: time="2024-08-05T21:55:15.157583707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 601.756983ms"
Aug 5 21:55:15.157837 containerd[2015]: time="2024-08-05T21:55:15.157655611Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Aug 5 21:55:15.200946 containerd[2015]: time="2024-08-05T21:55:15.200800927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 21:55:15.889000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376833078.mount: Deactivated successfully.
Aug 5 21:55:18.445891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 21:55:18.452928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:55:20.107979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:55:20.132230 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 21:55:20.377608 kubelet[2646]: E0805 21:55:20.377227 2646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 21:55:20.390241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 21:55:20.391226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 21:55:21.281596 containerd[2015]: time="2024-08-05T21:55:21.280934305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:21.284215 containerd[2015]: time="2024-08-05T21:55:21.284076733Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Aug 5 21:55:21.286186 containerd[2015]: time="2024-08-05T21:55:21.286070581Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:21.298605 containerd[2015]: time="2024-08-05T21:55:21.298252225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:21.302173 containerd[2015]: time="2024-08-05T21:55:21.302081557Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 6.101216286s"
Aug 5 21:55:21.302619 containerd[2015]: time="2024-08-05T21:55:21.302386069Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Aug 5 21:55:21.372593 containerd[2015]: time="2024-08-05T21:55:21.372540469Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Aug 5 21:55:22.025061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021388851.mount: Deactivated successfully.
Aug 5 21:55:22.607638 containerd[2015]: time="2024-08-05T21:55:22.606264388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:22.610704 containerd[2015]: time="2024-08-05T21:55:22.610432288Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462"
Aug 5 21:55:22.613197 containerd[2015]: time="2024-08-05T21:55:22.612962764Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:22.620144 containerd[2015]: time="2024-08-05T21:55:22.619982236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:55:22.623433 containerd[2015]: time="2024-08-05T21:55:22.622687168Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.249627039s"
Aug 5 21:55:22.623433 containerd[2015]: time="2024-08-05T21:55:22.622789912Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Aug 5 21:55:22.954619 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 5 21:55:30.445407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 5 21:55:30.457408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:55:31.205211 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 21:55:31.205830 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 21:55:31.207050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:55:31.224722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:55:31.272223 systemd[1]: Reloading requested from client PID 2735 ('systemctl') (unit session-7.scope)...
Aug 5 21:55:31.272261 systemd[1]: Reloading...
Aug 5 21:55:31.580591 zram_generator::config[2785]: No configuration found.
Aug 5 21:55:31.861867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:55:32.066793 systemd[1]: Reloading finished in 793 ms.
Aug 5 21:55:32.196786 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 21:55:32.197139 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 21:55:32.199083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:55:32.220707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:55:33.379857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:55:33.395531 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 21:55:33.529674 kubelet[2835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:55:33.529674 kubelet[2835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 21:55:33.529674 kubelet[2835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:55:33.530718 kubelet[2835]: I0805 21:55:33.529839 2835 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 21:55:34.455609 kubelet[2835]: I0805 21:55:34.454283 2835 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 5 21:55:34.455609 kubelet[2835]: I0805 21:55:34.454391 2835 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 21:55:34.455609 kubelet[2835]: I0805 21:55:34.455080 2835 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 5 21:55:34.498848 kubelet[2835]: I0805 21:55:34.498736 2835 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 21:55:34.501520 kubelet[2835]: E0805 21:55:34.501206 2835 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:34.533543 kubelet[2835]: W0805 21:55:34.533334 2835 machine.go:65] Cannot read vendor id correctly, set empty.
Aug 5 21:55:34.534989 kubelet[2835]: I0805 21:55:34.534907 2835 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 21:55:34.535745 kubelet[2835]: I0805 21:55:34.535688 2835 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 21:55:34.536099 kubelet[2835]: I0805 21:55:34.536046 2835 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 21:55:34.536290 kubelet[2835]: I0805 21:55:34.536119 2835 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 21:55:34.536290 kubelet[2835]: I0805 21:55:34.536142 2835 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 21:55:34.536405 kubelet[2835]: I0805 21:55:34.536377 2835 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:55:34.539415 kubelet[2835]: I0805 21:55:34.539019 2835 kubelet.go:393] "Attempting to sync node with API server"
Aug 5 21:55:34.539415 kubelet[2835]: I0805 21:55:34.539109 2835 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 21:55:34.539415 kubelet[2835]: I0805 21:55:34.539209 2835 kubelet.go:309] "Adding apiserver pod source"
Aug 5 21:55:34.539415 kubelet[2835]: I0805 21:55:34.539241 2835 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 21:55:34.547493 kubelet[2835]: W0805 21:55:34.546558 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.26.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:34.547493 kubelet[2835]: E0805 21:55:34.546768 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:34.547493 kubelet[2835]: I0805 21:55:34.547018 2835 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 21:55:34.551391 kubelet[2835]: W0805 21:55:34.551345 2835 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 21:55:34.555007 kubelet[2835]: I0805 21:55:34.554936 2835 server.go:1232] "Started kubelet"
Aug 5 21:55:34.558848 kubelet[2835]: W0805 21:55:34.558772 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.26.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-69&limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:34.559492 kubelet[2835]: E0805 21:55:34.559147 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-69&limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:34.560938 kubelet[2835]: I0805 21:55:34.560910 2835 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 21:55:34.564505 kubelet[2835]: I0805 21:55:34.563957 2835 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 21:55:34.566127 kubelet[2835]: I0805 21:55:34.566081 2835 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 5 21:55:34.567992 kubelet[2835]: I0805 21:55:34.567946 2835 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 21:55:34.571293 kubelet[2835]: E0805 21:55:34.570107 2835 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-26-69.17e8f3d08f75a2bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-26-69", UID:"ip-172-31-26-69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-26-69"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 55, 34, 554813115, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 55, 34, 554813115, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-26-69"}': 'Post "https://172.31.26.69:6443/api/v1/namespaces/default/events": dial tcp 172.31.26.69:6443: connect: connection refused'(may retry after sleeping)
Aug 5 21:55:34.573855 kubelet[2835]: I0805 21:55:34.573493 2835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 21:55:34.575295 kubelet[2835]: E0805 21:55:34.574743 2835 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 21:55:34.575295 kubelet[2835]: E0805 21:55:34.574800 2835 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 21:55:34.581543 kubelet[2835]: I0805 21:55:34.581129 2835 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 21:55:34.582206 kubelet[2835]: I0805 21:55:34.582056 2835 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 21:55:34.582355 kubelet[2835]: I0805 21:55:34.582227 2835 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 21:55:34.583259 kubelet[2835]: W0805 21:55:34.583081 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.26.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:34.583259 kubelet[2835]: E0805 21:55:34.583197 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:34.585784 kubelet[2835]: E0805 21:55:34.583335 2835 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-69?timeout=10s\": dial tcp 172.31.26.69:6443: connect: connection refused" interval="200ms"
Aug 5 21:55:34.632513 kubelet[2835]: I0805 21:55:34.632413 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 21:55:34.645768 kubelet[2835]: I0805 21:55:34.645703 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 21:55:34.645768 kubelet[2835]: I0805 21:55:34.645762 2835 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 21:55:34.646208 kubelet[2835]: I0805 21:55:34.645808 2835 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 21:55:34.646208 kubelet[2835]: E0805 21:55:34.645970 2835 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 21:55:34.668428 kubelet[2835]: W0805 21:55:34.668349 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.26.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:34.668428 kubelet[2835]: E0805 21:55:34.668437 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:34.688763 kubelet[2835]: I0805 21:55:34.688716 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-69"
Aug 5 21:55:34.690027 kubelet[2835]: E0805 21:55:34.689978 2835 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.26.69:6443/api/v1/nodes\": dial tcp 172.31.26.69:6443: connect: connection refused" node="ip-172-31-26-69"
Aug 5 21:55:34.692922 kubelet[2835]: I0805 21:55:34.692886 2835 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 21:55:34.693373 kubelet[2835]: I0805 21:55:34.693337 2835 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 21:55:34.693755 kubelet[2835]: I0805 21:55:34.693712 2835 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:55:34.697590 kubelet[2835]: I0805 21:55:34.697529 2835 policy_none.go:49] "None policy: Start"
Aug 5 21:55:34.699858 kubelet[2835]: I0805 21:55:34.699803 2835 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 21:55:34.700514 kubelet[2835]: I0805 21:55:34.700425 2835 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 21:55:34.718262 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 21:55:34.735946 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 21:55:34.747023 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 21:55:34.748212 kubelet[2835]: E0805 21:55:34.748098 2835 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 21:55:34.759610 kubelet[2835]: I0805 21:55:34.759547 2835 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 21:55:34.761937 kubelet[2835]: I0805 21:55:34.761349 2835 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 21:55:34.763296 kubelet[2835]: E0805 21:55:34.763256 2835 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-69\" not found"
Aug 5 21:55:34.784864 kubelet[2835]: E0805 21:55:34.784815 2835 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-69?timeout=10s\": dial tcp 172.31.26.69:6443: connect: connection refused" interval="400ms"
Aug 5 21:55:34.893956 kubelet[2835]: I0805 21:55:34.893898 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-69"
Aug 5 21:55:34.895120 kubelet[2835]: E0805 21:55:34.895045 2835 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.26.69:6443/api/v1/nodes\": dial tcp 172.31.26.69:6443: connect: connection refused" node="ip-172-31-26-69"
Aug 5 21:55:34.949584 kubelet[2835]: I0805 21:55:34.949324 2835 topology_manager.go:215] "Topology Admit Handler" podUID="b16ac151f88963e4e48a6bd70c5e60b3" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-69"
Aug 5 21:55:34.952947 kubelet[2835]: I0805 21:55:34.952728 2835 topology_manager.go:215] "Topology Admit Handler" podUID="29ad52e9963b24bafb3fb103487234a5" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:34.956844 kubelet[2835]: I0805 21:55:34.956144 2835 topology_manager.go:215] "Topology Admit Handler" podUID="796f0894346e4dcf9be2b48df6be7346" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-69"
Aug 5 21:55:34.976155 systemd[1]: Created slice kubepods-burstable-podb16ac151f88963e4e48a6bd70c5e60b3.slice - libcontainer container kubepods-burstable-podb16ac151f88963e4e48a6bd70c5e60b3.slice.
Aug 5 21:55:34.985113 kubelet[2835]: I0805 21:55:34.984978 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:34.985377 kubelet[2835]: I0805 21:55:34.985199 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:34.985596 kubelet[2835]: I0805 21:55:34.985372 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:34.985596 kubelet[2835]: I0805 21:55:34.985572 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b16ac151f88963e4e48a6bd70c5e60b3-ca-certs\") pod \"kube-apiserver-ip-172-31-26-69\" (UID: \"b16ac151f88963e4e48a6bd70c5e60b3\") " pod="kube-system/kube-apiserver-ip-172-31-26-69"
Aug 5 21:55:34.985723 kubelet[2835]: I0805 21:55:34.985674 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b16ac151f88963e4e48a6bd70c5e60b3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-69\" (UID: \"b16ac151f88963e4e48a6bd70c5e60b3\") " pod="kube-system/kube-apiserver-ip-172-31-26-69"
Aug 5 21:55:34.985799 kubelet[2835]: I0805 21:55:34.985729 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:34.985860 kubelet[2835]: I0805 21:55:34.985806 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b16ac151f88963e4e48a6bd70c5e60b3-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-69\" (UID: \"b16ac151f88963e4e48a6bd70c5e60b3\") " pod="kube-system/kube-apiserver-ip-172-31-26-69"
Aug 5 21:55:34.985860 kubelet[2835]: I0805 21:55:34.985854 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:34.985974 kubelet[2835]: I0805 21:55:34.985899 2835 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/796f0894346e4dcf9be2b48df6be7346-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-69\" (UID: \"796f0894346e4dcf9be2b48df6be7346\") " pod="kube-system/kube-scheduler-ip-172-31-26-69"
Aug 5 21:55:34.998894 systemd[1]: Created slice kubepods-burstable-pod29ad52e9963b24bafb3fb103487234a5.slice - libcontainer container kubepods-burstable-pod29ad52e9963b24bafb3fb103487234a5.slice.
Aug 5 21:55:35.018598 systemd[1]: Created slice kubepods-burstable-pod796f0894346e4dcf9be2b48df6be7346.slice - libcontainer container kubepods-burstable-pod796f0894346e4dcf9be2b48df6be7346.slice.
Aug 5 21:55:35.187311 kubelet[2835]: E0805 21:55:35.187239 2835 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-69?timeout=10s\": dial tcp 172.31.26.69:6443: connect: connection refused" interval="800ms"
Aug 5 21:55:35.293333 containerd[2015]: time="2024-08-05T21:55:35.292498899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-69,Uid:b16ac151f88963e4e48a6bd70c5e60b3,Namespace:kube-system,Attempt:0,}"
Aug 5 21:55:35.306870 kubelet[2835]: I0805 21:55:35.305921 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-69"
Aug 5 21:55:35.307113 kubelet[2835]: E0805 21:55:35.307051 2835 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.26.69:6443/api/v1/nodes\": dial tcp 172.31.26.69:6443: connect: connection refused" node="ip-172-31-26-69"
Aug 5 21:55:35.313123 containerd[2015]: time="2024-08-05T21:55:35.312807543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-69,Uid:29ad52e9963b24bafb3fb103487234a5,Namespace:kube-system,Attempt:0,}"
Aug 5 21:55:35.325661 containerd[2015]: time="2024-08-05T21:55:35.325554159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-69,Uid:796f0894346e4dcf9be2b48df6be7346,Namespace:kube-system,Attempt:0,}"
Aug 5 21:55:35.649026 kubelet[2835]: W0805 21:55:35.647995 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.26.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:35.649026 kubelet[2835]: E0805 21:55:35.648145 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:35.882988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2787703079.mount: Deactivated successfully.
Aug 5 21:55:35.889336 kubelet[2835]: W0805 21:55:35.889205 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.26.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:35.891387 kubelet[2835]: E0805 21:55:35.889352 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:35.903097 containerd[2015]: time="2024-08-05T21:55:35.902785374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:55:35.906309 containerd[2015]: time="2024-08-05T21:55:35.906191310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:55:35.908833 containerd[2015]: time="2024-08-05T21:55:35.908678262Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 21:55:35.910595 containerd[2015]: time="2024-08-05T21:55:35.910399362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Aug 5 21:55:35.913615 containerd[2015]: time="2024-08-05T21:55:35.913205526Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:55:35.916017 containerd[2015]: time="2024-08-05T21:55:35.915884742Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 21:55:35.917815 containerd[2015]: time="2024-08-05T21:55:35.917736486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:55:35.928252 containerd[2015]: time="2024-08-05T21:55:35.928053126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 21:55:35.934157 containerd[2015]: time="2024-08-05T21:55:35.933969198Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 620.904219ms"
Aug 5 21:55:35.940579 containerd[2015]: time="2024-08-05T21:55:35.940345278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 614.633547ms"
Aug 5 21:55:35.952421 containerd[2015]: time="2024-08-05T21:55:35.952260462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 659.481963ms"
Aug 5 21:55:35.989174 kubelet[2835]: E0805 21:55:35.989099 2835 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-69?timeout=10s\": dial tcp 172.31.26.69:6443: connect: connection refused" interval="1.6s"
Aug 5 21:55:35.997808 kubelet[2835]: W0805 21:55:35.995522 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.26.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-69&limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:35.997808 kubelet[2835]: E0805 21:55:35.995680 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-69&limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:36.112859 kubelet[2835]: I0805 21:55:36.112748 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-69"
Aug 5 21:55:36.114956 kubelet[2835]: E0805 21:55:36.114882 2835 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.26.69:6443/api/v1/nodes\": dial tcp 172.31.26.69:6443: connect: connection refused" node="ip-172-31-26-69"
Aug 5 21:55:36.224215 kubelet[2835]: W0805 21:55:36.224102 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.26.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:36.224800 kubelet[2835]: E0805 21:55:36.224770 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:36.257639 containerd[2015]: time="2024-08-05T21:55:36.257004603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:55:36.257639 containerd[2015]: time="2024-08-05T21:55:36.257317059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:55:36.257639 containerd[2015]: time="2024-08-05T21:55:36.257370771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:55:36.257639 containerd[2015]: time="2024-08-05T21:55:36.257408031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:55:36.265130 containerd[2015]: time="2024-08-05T21:55:36.264218871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:55:36.265130 containerd[2015]: time="2024-08-05T21:55:36.264400779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:55:36.266220 containerd[2015]: time="2024-08-05T21:55:36.264499131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:55:36.266220 containerd[2015]: time="2024-08-05T21:55:36.265791639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:55:36.268607 containerd[2015]: time="2024-08-05T21:55:36.268118079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:55:36.268607 containerd[2015]: time="2024-08-05T21:55:36.268314711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:55:36.268607 containerd[2015]: time="2024-08-05T21:55:36.268362519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:55:36.268607 containerd[2015]: time="2024-08-05T21:55:36.268397979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:55:36.325051 systemd[1]: Started cri-containerd-f6123fb6911594d1f1e936819dea1fde02bb93681620d28e9ef1bbfd7a891402.scope - libcontainer container f6123fb6911594d1f1e936819dea1fde02bb93681620d28e9ef1bbfd7a891402. Aug 5 21:55:36.376002 systemd[1]: Started cri-containerd-af18780cd5b999c5cdb19f69841560b8d0cfef27613848735f2900cbe72e009a.scope - libcontainer container af18780cd5b999c5cdb19f69841560b8d0cfef27613848735f2900cbe72e009a. Aug 5 21:55:36.380373 systemd[1]: Started cri-containerd-fd0415ffcb8c4b73bd034936db06ced09a57ba55112e38f37f7f9f62ee96f510.scope - libcontainer container fd0415ffcb8c4b73bd034936db06ced09a57ba55112e38f37f7f9f62ee96f510. 
Aug 5 21:55:36.521162 containerd[2015]: time="2024-08-05T21:55:36.519555857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-69,Uid:29ad52e9963b24bafb3fb103487234a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6123fb6911594d1f1e936819dea1fde02bb93681620d28e9ef1bbfd7a891402\""
Aug 5 21:55:36.536170 containerd[2015]: time="2024-08-05T21:55:36.536038541Z" level=info msg="CreateContainer within sandbox \"f6123fb6911594d1f1e936819dea1fde02bb93681620d28e9ef1bbfd7a891402\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 5 21:55:36.557630 containerd[2015]: time="2024-08-05T21:55:36.556864601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-69,Uid:796f0894346e4dcf9be2b48df6be7346,Namespace:kube-system,Attempt:0,} returns sandbox id \"af18780cd5b999c5cdb19f69841560b8d0cfef27613848735f2900cbe72e009a\""
Aug 5 21:55:36.562298 containerd[2015]: time="2024-08-05T21:55:36.557135765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-69,Uid:b16ac151f88963e4e48a6bd70c5e60b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd0415ffcb8c4b73bd034936db06ced09a57ba55112e38f37f7f9f62ee96f510\""
Aug 5 21:55:36.606969 containerd[2015]: time="2024-08-05T21:55:36.606897413Z" level=info msg="CreateContainer within sandbox \"af18780cd5b999c5cdb19f69841560b8d0cfef27613848735f2900cbe72e009a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 5 21:55:36.613186 containerd[2015]: time="2024-08-05T21:55:36.613090397Z" level=info msg="CreateContainer within sandbox \"fd0415ffcb8c4b73bd034936db06ced09a57ba55112e38f37f7f9f62ee96f510\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 5 21:55:36.625805 kubelet[2835]: E0805 21:55:36.625723 2835 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:36.635722 containerd[2015]: time="2024-08-05T21:55:36.635620373Z" level=info msg="CreateContainer within sandbox \"f6123fb6911594d1f1e936819dea1fde02bb93681620d28e9ef1bbfd7a891402\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2\""
Aug 5 21:55:36.638895 containerd[2015]: time="2024-08-05T21:55:36.638057633Z" level=info msg="StartContainer for \"4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2\""
Aug 5 21:55:36.659736 update_engine[1996]: I0805 21:55:36.659308 1996 update_attempter.cc:509] Updating boot flags...
Aug 5 21:55:36.670771 containerd[2015]: time="2024-08-05T21:55:36.669300677Z" level=info msg="CreateContainer within sandbox \"fd0415ffcb8c4b73bd034936db06ced09a57ba55112e38f37f7f9f62ee96f510\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c1dd3c8d3e6f79f89a6cd5a648fda2ff2d49c2745b89e4f5230693af5c004f63\""
Aug 5 21:55:36.671699 containerd[2015]: time="2024-08-05T21:55:36.670790801Z" level=info msg="StartContainer for \"c1dd3c8d3e6f79f89a6cd5a648fda2ff2d49c2745b89e4f5230693af5c004f63\""
Aug 5 21:55:36.734734 containerd[2015]: time="2024-08-05T21:55:36.734020362Z" level=info msg="CreateContainer within sandbox \"af18780cd5b999c5cdb19f69841560b8d0cfef27613848735f2900cbe72e009a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc\""
Aug 5 21:55:36.742237 containerd[2015]: time="2024-08-05T21:55:36.741101886Z" level=info msg="StartContainer for \"92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc\""
Aug 5 21:55:36.777876 systemd[1]: Started cri-containerd-4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2.scope - libcontainer container 4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2.
Aug 5 21:55:36.929121 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3061)
Aug 5 21:55:36.959996 systemd[1]: Started cri-containerd-c1dd3c8d3e6f79f89a6cd5a648fda2ff2d49c2745b89e4f5230693af5c004f63.scope - libcontainer container c1dd3c8d3e6f79f89a6cd5a648fda2ff2d49c2745b89e4f5230693af5c004f63.
Aug 5 21:55:37.018966 systemd[1]: run-containerd-runc-k8s.io-92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc-runc.Nupsuf.mount: Deactivated successfully.
Aug 5 21:55:37.056096 systemd[1]: Started cri-containerd-92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc.scope - libcontainer container 92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc.
Aug 5 21:55:37.122733 containerd[2015]: time="2024-08-05T21:55:37.122337340Z" level=info msg="StartContainer for \"4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2\" returns successfully"
Aug 5 21:55:37.334798 containerd[2015]: time="2024-08-05T21:55:37.333533597Z" level=info msg="StartContainer for \"c1dd3c8d3e6f79f89a6cd5a648fda2ff2d49c2745b89e4f5230693af5c004f63\" returns successfully"
Aug 5 21:55:37.472679 containerd[2015]: time="2024-08-05T21:55:37.471871889Z" level=info msg="StartContainer for \"92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc\" returns successfully"
Aug 5 21:55:37.595547 kubelet[2835]: E0805 21:55:37.591365 2835 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-69?timeout=10s\": dial tcp 172.31.26.69:6443: connect: connection refused" interval="3.2s"
Aug 5 21:55:37.723235 kubelet[2835]: I0805 21:55:37.722820 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-69"
Aug 5 21:55:37.730213 kubelet[2835]: E0805 21:55:37.728312 2835 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.26.69:6443/api/v1/nodes\": dial tcp 172.31.26.69:6443: connect: connection refused" node="ip-172-31-26-69"
Aug 5 21:55:37.783703 kubelet[2835]: W0805 21:55:37.782796 2835 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.26.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-69&limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:37.783703 kubelet[2835]: E0805 21:55:37.783045 2835 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-69&limit=500&resourceVersion=0": dial tcp 172.31.26.69:6443: connect: connection refused
Aug 5 21:55:40.939011 kubelet[2835]: I0805 21:55:40.936033 2835 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-69"
Aug 5 21:55:42.549019 kubelet[2835]: I0805 21:55:42.548443 2835 apiserver.go:52] "Watching apiserver"
Aug 5 21:55:42.582850 kubelet[2835]: I0805 21:55:42.582707 2835 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 21:55:42.620924 kubelet[2835]: E0805 21:55:42.620725 2835 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-69\" not found" node="ip-172-31-26-69"
Aug 5 21:55:42.694686 kubelet[2835]: I0805 21:55:42.694273 2835 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-26-69"
Aug 5 21:55:42.818581 kubelet[2835]: E0805 21:55:42.817110 2835 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-26-69.17e8f3d08f75a2bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-26-69", UID:"ip-172-31-26-69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-26-69"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 55, 34, 554813115, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 55, 34, 554813115, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-26-69"}': 'namespaces "default" not found' (will not retry!)
Aug 5 21:55:42.930544 kubelet[2835]: E0805 21:55:42.930316 2835 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-26-69.17e8f3d090a64043", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-26-69", UID:"ip-172-31-26-69", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-26-69"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 55, 34, 574776387, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 55, 34, 574776387, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-26-69"}': 'namespaces "default" not found' (will not retry!)
Aug 5 21:55:46.132949 systemd[1]: Reloading requested from client PID 3217 ('systemctl') (unit session-7.scope)...
Aug 5 21:55:46.133030 systemd[1]: Reloading...
Aug 5 21:55:46.363526 zram_generator::config[3258]: No configuration found.
Aug 5 21:55:46.393134 kubelet[2835]: I0805 21:55:46.392976 2835 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-69" podStartSLOduration=2.392877938 podCreationTimestamp="2024-08-05 21:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:55:44.695359441 +0000 UTC m=+11.288826729" watchObservedRunningTime="2024-08-05 21:55:46.392877938 +0000 UTC m=+12.986345226"
Aug 5 21:55:46.764292 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 21:55:47.039009 systemd[1]: Reloading finished in 904 ms.
Aug 5 21:55:47.131981 kubelet[2835]: I0805 21:55:47.131577 2835 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 21:55:47.134429 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:55:47.155406 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 21:55:47.156204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:55:47.156361 systemd[1]: kubelet.service: Consumed 2.416s CPU time, 114.7M memory peak, 0B memory swap peak.
Aug 5 21:55:47.171685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 21:55:47.825260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 21:55:47.851531 (kubelet)[3315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 21:55:48.044430 kubelet[3315]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:55:48.044430 kubelet[3315]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 21:55:48.044430 kubelet[3315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 21:55:48.045493 kubelet[3315]: I0805 21:55:48.044605 3315 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 21:55:48.061220 kubelet[3315]: I0805 21:55:48.061134 3315 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 5 21:55:48.061220 kubelet[3315]: I0805 21:55:48.061201 3315 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 21:55:48.062586 kubelet[3315]: I0805 21:55:48.061746 3315 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 5 21:55:48.067429 kubelet[3315]: I0805 21:55:48.067323 3315 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 5 21:55:48.073402 kubelet[3315]: I0805 21:55:48.071681 3315 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 21:55:48.104656 kubelet[3315]: W0805 21:55:48.104194 3315 machine.go:65] Cannot read vendor id correctly, set empty.
Aug 5 21:55:48.115234 kubelet[3315]: I0805 21:55:48.114742 3315 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 21:55:48.117950 kubelet[3315]: I0805 21:55:48.116887 3315 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 21:55:48.117950 kubelet[3315]: I0805 21:55:48.117779 3315 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 21:55:48.119688 kubelet[3315]: I0805 21:55:48.118656 3315 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 21:55:48.119688 kubelet[3315]: I0805 21:55:48.118732 3315 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 21:55:48.119688 kubelet[3315]: I0805 21:55:48.118866 3315 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:55:48.119688 kubelet[3315]: I0805 21:55:48.119125 3315 kubelet.go:393] "Attempting to sync node with API server"
Aug 5 21:55:48.119688 kubelet[3315]: I0805 21:55:48.119178 3315 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 21:55:48.119688 kubelet[3315]: I0805 21:55:48.119250 3315 kubelet.go:309] "Adding apiserver pod source"
Aug 5 21:55:48.119688 kubelet[3315]: I0805 21:55:48.119278 3315 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 21:55:48.124351 kubelet[3315]: I0805 21:55:48.124309 3315 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 21:55:48.126586 kubelet[3315]: I0805 21:55:48.126514 3315 server.go:1232] "Started kubelet"
Aug 5 21:55:48.132619 kubelet[3315]: I0805 21:55:48.132549 3315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 21:55:48.148977 kubelet[3315]: I0805 21:55:48.148133 3315 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 21:55:48.156774 kubelet[3315]: I0805 21:55:48.156731 3315 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 21:55:48.163581 kubelet[3315]: I0805 21:55:48.163404 3315 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 5 21:55:48.165525 kubelet[3315]: I0805 21:55:48.165116 3315 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 21:55:48.174981 kubelet[3315]: I0805 21:55:48.174920 3315 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 21:55:48.176709 kubelet[3315]: I0805 21:55:48.176261 3315 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 21:55:48.176709 kubelet[3315]: I0805 21:55:48.176576 3315 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 21:55:48.188133 kubelet[3315]: E0805 21:55:48.188040 3315 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 21:55:48.188133 kubelet[3315]: E0805 21:55:48.188142 3315 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 21:55:48.219616 kubelet[3315]: I0805 21:55:48.215380 3315 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 21:55:48.228424 kubelet[3315]: I0805 21:55:48.227275 3315 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 21:55:48.228424 kubelet[3315]: I0805 21:55:48.227430 3315 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 21:55:48.228424 kubelet[3315]: I0805 21:55:48.227559 3315 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 21:55:48.228424 kubelet[3315]: E0805 21:55:48.227678 3315 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 21:55:48.304419 kubelet[3315]: E0805 21:55:48.304161 3315 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
Aug 5 21:55:48.328093 kubelet[3315]: E0805 21:55:48.327790 3315 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 21:55:48.328895 kubelet[3315]: I0805 21:55:48.328858 3315 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-69"
Aug 5 21:55:48.372480 kubelet[3315]: I0805 21:55:48.367431 3315 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-26-69"
Aug 5 21:55:48.372480 kubelet[3315]: I0805 21:55:48.369403 3315 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-26-69"
Aug 5 21:55:48.529832 kubelet[3315]: E0805 21:55:48.529686 3315 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 21:55:48.549642 kubelet[3315]: I0805 21:55:48.548563 3315 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 21:55:48.549642 kubelet[3315]: I0805 21:55:48.548608 3315 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 21:55:48.549642 kubelet[3315]: I0805 21:55:48.548646 3315 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 21:55:48.549642 kubelet[3315]: I0805 21:55:48.549043 3315 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 5 21:55:48.549642 kubelet[3315]: I0805 21:55:48.549130 3315 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 5 21:55:48.549642 kubelet[3315]: I0805 21:55:48.549155 3315 policy_none.go:49] "None policy: Start"
Aug 5 21:55:48.552101 kubelet[3315]: I0805 21:55:48.552029 3315 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 21:55:48.552101 kubelet[3315]: I0805 21:55:48.552099 3315 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 21:55:48.552499 kubelet[3315]: I0805 21:55:48.552418 3315 state_mem.go:75] "Updated machine memory state"
Aug 5 21:55:48.570122 kubelet[3315]: I0805 21:55:48.569952 3315 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 21:55:48.576369 kubelet[3315]: I0805 21:55:48.575360 3315 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 21:55:48.930969 kubelet[3315]: I0805 21:55:48.930783 3315 topology_manager.go:215] "Topology Admit Handler" podUID="b16ac151f88963e4e48a6bd70c5e60b3" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-69"
Aug 5 21:55:48.933882 kubelet[3315]: I0805 21:55:48.931099 3315 topology_manager.go:215] "Topology Admit Handler" podUID="29ad52e9963b24bafb3fb103487234a5" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:48.933882 kubelet[3315]: I0805 21:55:48.931238 3315 topology_manager.go:215] "Topology Admit Handler" podUID="796f0894346e4dcf9be2b48df6be7346" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-69"
Aug 5 21:55:48.969522 kubelet[3315]: E0805 21:55:48.968204 3315 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-26-69\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-69"
Aug 5 21:55:48.973921 kubelet[3315]: E0805 21:55:48.973679 3315 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-26-69\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-69"
Aug 5 21:55:48.989164 kubelet[3315]: I0805 21:55:48.988320 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:48.989164 kubelet[3315]: I0805 21:55:48.988451 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:48.989164 kubelet[3315]: I0805 21:55:48.988603 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:48.989164 kubelet[3315]: I0805 21:55:48.988664 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/796f0894346e4dcf9be2b48df6be7346-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-69\" (UID: \"796f0894346e4dcf9be2b48df6be7346\") " pod="kube-system/kube-scheduler-ip-172-31-26-69"
Aug 5 21:55:48.989164 kubelet[3315]: I0805 21:55:48.988716 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b16ac151f88963e4e48a6bd70c5e60b3-ca-certs\") pod \"kube-apiserver-ip-172-31-26-69\" (UID: \"b16ac151f88963e4e48a6bd70c5e60b3\") " pod="kube-system/kube-apiserver-ip-172-31-26-69"
Aug 5 21:55:48.989613 kubelet[3315]: I0805 21:55:48.988760 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b16ac151f88963e4e48a6bd70c5e60b3-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-69\" (UID: \"b16ac151f88963e4e48a6bd70c5e60b3\") " pod="kube-system/kube-apiserver-ip-172-31-26-69"
Aug 5 21:55:48.989613 kubelet[3315]: I0805 21:55:48.988807 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b16ac151f88963e4e48a6bd70c5e60b3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-69\" (UID: \"b16ac151f88963e4e48a6bd70c5e60b3\") " pod="kube-system/kube-apiserver-ip-172-31-26-69"
Aug 5 21:55:48.989613 kubelet[3315]: I0805 21:55:48.988871 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:48.989613 kubelet[3315]: I0805 21:55:48.988937 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29ad52e9963b24bafb3fb103487234a5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-69\" (UID: \"29ad52e9963b24bafb3fb103487234a5\") " pod="kube-system/kube-controller-manager-ip-172-31-26-69"
Aug 5 21:55:49.122444 kubelet[3315]: I0805 21:55:49.122365 3315 apiserver.go:52] "Watching apiserver"
Aug 5 21:55:49.179597 kubelet[3315]: I0805 21:55:49.177570 3315 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 21:55:49.359558 kubelet[3315]: I0805 21:55:49.359344 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-69" podStartSLOduration=3.3591782439999998 podCreationTimestamp="2024-08-05 21:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:55:49.3231928 +0000 UTC m=+1.456750484" watchObservedRunningTime="2024-08-05 21:55:49.359178244 +0000 UTC m=+1.492735892"
Aug 5 21:55:49.385029 kubelet[3315]: I0805 21:55:49.384869 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-69" podStartSLOduration=1.384737609 podCreationTimestamp="2024-08-05 21:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:55:49.3596863 +0000 UTC m=+1.493243948" watchObservedRunningTime="2024-08-05 21:55:49.384737609 +0000 UTC m=+1.518295257"
Aug 5 21:55:57.489023 sudo[2337]: pam_unix(sudo:session): session closed for user root
Aug 5 21:55:57.515739 sshd[2334]: pam_unix(sshd:session): session closed for user core
Aug 5 21:55:57.525954 systemd[1]: sshd@6-172.31.26.69:22-147.75.109.163:60594.service: Deactivated successfully.
Aug 5 21:55:57.533778 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 21:55:57.535187 systemd[1]: session-7.scope: Consumed 13.218s CPU time, 133.5M memory peak, 0B memory swap peak.
Aug 5 21:55:57.541939 systemd-logind[1995]: Session 7 logged out. Waiting for processes to exit.
Aug 5 21:55:57.546438 systemd-logind[1995]: Removed session 7.
Aug 5 21:56:00.845648 kubelet[3315]: I0805 21:56:00.845388 3315 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 5 21:56:00.848623 kubelet[3315]: I0805 21:56:00.847832 3315 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 21:56:00.848832 containerd[2015]: time="2024-08-05T21:56:00.847027050Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 5 21:56:00.908176 kubelet[3315]: I0805 21:56:00.907134 3315 topology_manager.go:215] "Topology Admit Handler" podUID="342e450e-b3bf-4f26-9875-43456017b8be" podNamespace="kube-system" podName="kube-proxy-59vtd"
Aug 5 21:56:00.936107 systemd[1]: Created slice kubepods-besteffort-pod342e450e_b3bf_4f26_9875_43456017b8be.slice - libcontainer container kubepods-besteffort-pod342e450e_b3bf_4f26_9875_43456017b8be.slice.
Aug 5 21:56:01.078288 kubelet[3315]: I0805 21:56:01.078162 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/342e450e-b3bf-4f26-9875-43456017b8be-lib-modules\") pod \"kube-proxy-59vtd\" (UID: \"342e450e-b3bf-4f26-9875-43456017b8be\") " pod="kube-system/kube-proxy-59vtd"
Aug 5 21:56:01.078288 kubelet[3315]: I0805 21:56:01.078305 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/342e450e-b3bf-4f26-9875-43456017b8be-kube-proxy\") pod \"kube-proxy-59vtd\" (UID: \"342e450e-b3bf-4f26-9875-43456017b8be\") " pod="kube-system/kube-proxy-59vtd"
Aug 5 21:56:01.078938 kubelet[3315]: I0805 21:56:01.078361 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/342e450e-b3bf-4f26-9875-43456017b8be-xtables-lock\") pod \"kube-proxy-59vtd\" (UID: \"342e450e-b3bf-4f26-9875-43456017b8be\") " pod="kube-system/kube-proxy-59vtd"
Aug 5 21:56:01.078938 kubelet[3315]: I0805 21:56:01.078421 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6bb\" (UniqueName: \"kubernetes.io/projected/342e450e-b3bf-4f26-9875-43456017b8be-kube-api-access-zl6bb\") pod \"kube-proxy-59vtd\" (UID: \"342e450e-b3bf-4f26-9875-43456017b8be\") " pod="kube-system/kube-proxy-59vtd"
Aug 5 21:56:01.143915 kubelet[3315]: I0805 21:56:01.143679 3315 topology_manager.go:215] "Topology Admit Handler" podUID="d85fbb10-f9ed-4df2-8e3c-061202c550a2" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-qqw4c"
Aug 5 21:56:01.174373 systemd[1]: Created slice kubepods-besteffort-podd85fbb10_f9ed_4df2_8e3c_061202c550a2.slice - libcontainer container kubepods-besteffort-podd85fbb10_f9ed_4df2_8e3c_061202c550a2.slice.
Aug 5 21:56:01.281349 kubelet[3315]: I0805 21:56:01.281154 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp7t2\" (UniqueName: \"kubernetes.io/projected/d85fbb10-f9ed-4df2-8e3c-061202c550a2-kube-api-access-rp7t2\") pod \"tigera-operator-76c4974c85-qqw4c\" (UID: \"d85fbb10-f9ed-4df2-8e3c-061202c550a2\") " pod="tigera-operator/tigera-operator-76c4974c85-qqw4c"
Aug 5 21:56:01.281349 kubelet[3315]: I0805 21:56:01.281239 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d85fbb10-f9ed-4df2-8e3c-061202c550a2-var-lib-calico\") pod \"tigera-operator-76c4974c85-qqw4c\" (UID: \"d85fbb10-f9ed-4df2-8e3c-061202c550a2\") " pod="tigera-operator/tigera-operator-76c4974c85-qqw4c"
Aug 5 21:56:01.515187 containerd[2015]: time="2024-08-05T21:56:01.515111717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-qqw4c,Uid:d85fbb10-f9ed-4df2-8e3c-061202c550a2,Namespace:tigera-operator,Attempt:0,}"
Aug 5 21:56:01.563149 containerd[2015]: time="2024-08-05T21:56:01.562065773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59vtd,Uid:342e450e-b3bf-4f26-9875-43456017b8be,Namespace:kube-system,Attempt:0,}"
Aug 5 21:56:01.610250 containerd[2015]: time="2024-08-05T21:56:01.609657401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:56:01.610250 containerd[2015]: time="2024-08-05T21:56:01.609767741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:56:01.610250 containerd[2015]: time="2024-08-05T21:56:01.609808769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:56:01.610250 containerd[2015]: time="2024-08-05T21:56:01.609841985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:56:01.644663 containerd[2015]: time="2024-08-05T21:56:01.644389710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:56:01.644663 containerd[2015]: time="2024-08-05T21:56:01.644583126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:56:01.645919 containerd[2015]: time="2024-08-05T21:56:01.645082590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:56:01.646668 containerd[2015]: time="2024-08-05T21:56:01.646256298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:56:01.678360 systemd[1]: Started cri-containerd-a729e1cf29a1a60fac8db0dc23f87a39d51220f5db64cb507a7323d6825781ca.scope - libcontainer container a729e1cf29a1a60fac8db0dc23f87a39d51220f5db64cb507a7323d6825781ca.
Aug 5 21:56:01.713738 systemd[1]: Started cri-containerd-95ad5eb2598dbfdbeb4484e1cb66c5c9c1f22651726f0de45190da4c8099d595.scope - libcontainer container 95ad5eb2598dbfdbeb4484e1cb66c5c9c1f22651726f0de45190da4c8099d595.
Aug 5 21:56:01.814457 containerd[2015]: time="2024-08-05T21:56:01.814202610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-qqw4c,Uid:d85fbb10-f9ed-4df2-8e3c-061202c550a2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a729e1cf29a1a60fac8db0dc23f87a39d51220f5db64cb507a7323d6825781ca\""
Aug 5 21:56:01.824277 containerd[2015]: time="2024-08-05T21:56:01.824001414Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 21:56:01.826139 containerd[2015]: time="2024-08-05T21:56:01.825736602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59vtd,Uid:342e450e-b3bf-4f26-9875-43456017b8be,Namespace:kube-system,Attempt:0,} returns sandbox id \"95ad5eb2598dbfdbeb4484e1cb66c5c9c1f22651726f0de45190da4c8099d595\""
Aug 5 21:56:01.839587 containerd[2015]: time="2024-08-05T21:56:01.839117934Z" level=info msg="CreateContainer within sandbox \"95ad5eb2598dbfdbeb4484e1cb66c5c9c1f22651726f0de45190da4c8099d595\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 21:56:01.874445 containerd[2015]: time="2024-08-05T21:56:01.874269343Z" level=info msg="CreateContainer within sandbox \"95ad5eb2598dbfdbeb4484e1cb66c5c9c1f22651726f0de45190da4c8099d595\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"164dfd269dc650cc4852938f6b3d64e8ea365230b2daea2e9698430b07b095d1\""
Aug 5 21:56:01.877542 containerd[2015]: time="2024-08-05T21:56:01.876785779Z" level=info msg="StartContainer for \"164dfd269dc650cc4852938f6b3d64e8ea365230b2daea2e9698430b07b095d1\""
Aug 5 21:56:01.953134 systemd[1]: Started cri-containerd-164dfd269dc650cc4852938f6b3d64e8ea365230b2daea2e9698430b07b095d1.scope - libcontainer container 164dfd269dc650cc4852938f6b3d64e8ea365230b2daea2e9698430b07b095d1.
Aug 5 21:56:02.043131 containerd[2015]: time="2024-08-05T21:56:02.042998367Z" level=info msg="StartContainer for \"164dfd269dc650cc4852938f6b3d64e8ea365230b2daea2e9698430b07b095d1\" returns successfully"
Aug 5 21:56:03.353628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919591479.mount: Deactivated successfully.
Aug 5 21:56:04.101427 containerd[2015]: time="2024-08-05T21:56:04.100681926Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:56:04.103979 containerd[2015]: time="2024-08-05T21:56:04.103797870Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473606"
Aug 5 21:56:04.105028 containerd[2015]: time="2024-08-05T21:56:04.104853834Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:56:04.110311 containerd[2015]: time="2024-08-05T21:56:04.110229750Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:56:04.112764 containerd[2015]: time="2024-08-05T21:56:04.112523850Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.288404656s"
Aug 5 21:56:04.112764 containerd[2015]: time="2024-08-05T21:56:04.112590150Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\""
Aug 5 21:56:04.119824 containerd[2015]: time="2024-08-05T21:56:04.119749458Z" level=info msg="CreateContainer within sandbox \"a729e1cf29a1a60fac8db0dc23f87a39d51220f5db64cb507a7323d6825781ca\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 21:56:04.151388 containerd[2015]: time="2024-08-05T21:56:04.151037058Z" level=info msg="CreateContainer within sandbox \"a729e1cf29a1a60fac8db0dc23f87a39d51220f5db64cb507a7323d6825781ca\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a\""
Aug 5 21:56:04.155750 containerd[2015]: time="2024-08-05T21:56:04.155261106Z" level=info msg="StartContainer for \"a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a\""
Aug 5 21:56:04.242997 systemd[1]: Started cri-containerd-a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a.scope - libcontainer container a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a.
Aug 5 21:56:04.320759 containerd[2015]: time="2024-08-05T21:56:04.319067167Z" level=info msg="StartContainer for \"a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a\" returns successfully"
Aug 5 21:56:04.484669 kubelet[3315]: I0805 21:56:04.484608 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-59vtd" podStartSLOduration=4.483996368 podCreationTimestamp="2024-08-05 21:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:56:02.480810042 +0000 UTC m=+14.614367714" watchObservedRunningTime="2024-08-05 21:56:04.483996368 +0000 UTC m=+16.617554028"
Aug 5 21:56:04.487511 kubelet[3315]: I0805 21:56:04.486692 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-qqw4c" podStartSLOduration=1.191543932 podCreationTimestamp="2024-08-05 21:56:01 +0000 UTC" firstStartedPulling="2024-08-05 21:56:01.818745138 +0000 UTC m=+13.952302786" lastFinishedPulling="2024-08-05 21:56:04.113203878 +0000 UTC m=+16.246761538" observedRunningTime="2024-08-05 21:56:04.483965348 +0000 UTC m=+16.617523032" watchObservedRunningTime="2024-08-05 21:56:04.486002684 +0000 UTC m=+16.619560332"
Aug 5 21:56:09.824156 kubelet[3315]: I0805 21:56:09.824049 3315 topology_manager.go:215] "Topology Admit Handler" podUID="54efaf5e-b06a-43dd-a0fb-36c447e8c8a5" podNamespace="calico-system" podName="calico-typha-5c87d4bb7-2bvbm"
Aug 5 21:56:09.861545 systemd[1]: Created slice kubepods-besteffort-pod54efaf5e_b06a_43dd_a0fb_36c447e8c8a5.slice - libcontainer container kubepods-besteffort-pod54efaf5e_b06a_43dd_a0fb_36c447e8c8a5.slice.
Aug 5 21:56:09.946522 kubelet[3315]: I0805 21:56:09.946381 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54efaf5e-b06a-43dd-a0fb-36c447e8c8a5-tigera-ca-bundle\") pod \"calico-typha-5c87d4bb7-2bvbm\" (UID: \"54efaf5e-b06a-43dd-a0fb-36c447e8c8a5\") " pod="calico-system/calico-typha-5c87d4bb7-2bvbm"
Aug 5 21:56:09.946768 kubelet[3315]: I0805 21:56:09.946576 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/54efaf5e-b06a-43dd-a0fb-36c447e8c8a5-typha-certs\") pod \"calico-typha-5c87d4bb7-2bvbm\" (UID: \"54efaf5e-b06a-43dd-a0fb-36c447e8c8a5\") " pod="calico-system/calico-typha-5c87d4bb7-2bvbm"
Aug 5 21:56:09.946768 kubelet[3315]: I0805 21:56:09.946652 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b9dl\" (UniqueName: \"kubernetes.io/projected/54efaf5e-b06a-43dd-a0fb-36c447e8c8a5-kube-api-access-9b9dl\") pod \"calico-typha-5c87d4bb7-2bvbm\" (UID: \"54efaf5e-b06a-43dd-a0fb-36c447e8c8a5\") " pod="calico-system/calico-typha-5c87d4bb7-2bvbm"
Aug 5 21:56:10.162559 kubelet[3315]: I0805 21:56:10.162353 3315 topology_manager.go:215] "Topology Admit Handler" podUID="22f22430-5ae1-4c03-8137-d7e8a0477a78" podNamespace="calico-system" podName="calico-node-c5lld"
Aug 5 21:56:10.186198 containerd[2015]: time="2024-08-05T21:56:10.184074192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c87d4bb7-2bvbm,Uid:54efaf5e-b06a-43dd-a0fb-36c447e8c8a5,Namespace:calico-system,Attempt:0,}"
Aug 5 21:56:10.192799 systemd[1]: Created slice kubepods-besteffort-pod22f22430_5ae1_4c03_8137_d7e8a0477a78.slice - libcontainer container kubepods-besteffort-pod22f22430_5ae1_4c03_8137_d7e8a0477a78.slice.
Aug 5 21:56:10.326109 containerd[2015]: time="2024-08-05T21:56:10.322039549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 21:56:10.326109 containerd[2015]: time="2024-08-05T21:56:10.324819229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:56:10.326109 containerd[2015]: time="2024-08-05T21:56:10.324882889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 21:56:10.326109 containerd[2015]: time="2024-08-05T21:56:10.324921529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 21:56:10.352791 kubelet[3315]: I0805 21:56:10.352709 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/22f22430-5ae1-4c03-8137-d7e8a0477a78-policysync\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.352944 kubelet[3315]: I0805 21:56:10.352850 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22f22430-5ae1-4c03-8137-d7e8a0477a78-tigera-ca-bundle\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.352944 kubelet[3315]: I0805 21:56:10.352911 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22f22430-5ae1-4c03-8137-d7e8a0477a78-xtables-lock\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.353162 kubelet[3315]: I0805 21:56:10.352981 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/22f22430-5ae1-4c03-8137-d7e8a0477a78-cni-log-dir\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.353162 kubelet[3315]: I0805 21:56:10.353038 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jr2x\" (UniqueName: \"kubernetes.io/projected/22f22430-5ae1-4c03-8137-d7e8a0477a78-kube-api-access-4jr2x\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.353162 kubelet[3315]: I0805 21:56:10.353096 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/22f22430-5ae1-4c03-8137-d7e8a0477a78-cni-bin-dir\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.353162 kubelet[3315]: I0805 21:56:10.353142 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/22f22430-5ae1-4c03-8137-d7e8a0477a78-node-certs\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.353381 kubelet[3315]: I0805 21:56:10.353188 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/22f22430-5ae1-4c03-8137-d7e8a0477a78-flexvol-driver-host\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.353381 kubelet[3315]: I0805 21:56:10.353243 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22f22430-5ae1-4c03-8137-d7e8a0477a78-lib-modules\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.353381 kubelet[3315]: I0805 21:56:10.353290 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/22f22430-5ae1-4c03-8137-d7e8a0477a78-var-run-calico\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.353381 kubelet[3315]: I0805 21:56:10.353336 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22f22430-5ae1-4c03-8137-d7e8a0477a78-var-lib-calico\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.353659 kubelet[3315]: I0805 21:56:10.353390 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/22f22430-5ae1-4c03-8137-d7e8a0477a78-cni-net-dir\") pod \"calico-node-c5lld\" (UID: \"22f22430-5ae1-4c03-8137-d7e8a0477a78\") " pod="calico-system/calico-node-c5lld"
Aug 5 21:56:10.427627 kubelet[3315]: I0805 21:56:10.426712 3315 topology_manager.go:215] "Topology Admit Handler" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0" podNamespace="calico-system" podName="csi-node-driver-k4s94"
Aug 5 21:56:10.429196 kubelet[3315]: E0805 21:56:10.427856 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4s94" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0"
Aug 5 21:56:10.431829 systemd[1]: Started cri-containerd-b0c097e9f482e481cf53ab9181962d9bdda39fb4fee0f70976620996fd7e2924.scope - libcontainer container b0c097e9f482e481cf53ab9181962d9bdda39fb4fee0f70976620996fd7e2924.
Aug 5 21:56:10.464817 kubelet[3315]: E0805 21:56:10.464659 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.466543 kubelet[3315]: W0805 21:56:10.465790 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.466543 kubelet[3315]: E0805 21:56:10.465911 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.468701 kubelet[3315]: E0805 21:56:10.468655 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.470508 kubelet[3315]: W0805 21:56:10.468993 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.471298 kubelet[3315]: E0805 21:56:10.471255 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.472553 kubelet[3315]: E0805 21:56:10.472377 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.472553 kubelet[3315]: W0805 21:56:10.472436 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.474413 kubelet[3315]: E0805 21:56:10.474100 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.476612 kubelet[3315]: E0805 21:56:10.475748 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.476612 kubelet[3315]: W0805 21:56:10.475807 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.477484 kubelet[3315]: E0805 21:56:10.477262 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.480899 kubelet[3315]: E0805 21:56:10.480610 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.480899 kubelet[3315]: W0805 21:56:10.480665 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.480899 kubelet[3315]: E0805 21:56:10.480759 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.482104 kubelet[3315]: E0805 21:56:10.482036 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.483562 kubelet[3315]: W0805 21:56:10.482341 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.483562 kubelet[3315]: E0805 21:56:10.482549 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.486189 kubelet[3315]: E0805 21:56:10.485224 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.486189 kubelet[3315]: W0805 21:56:10.485274 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.486945 kubelet[3315]: E0805 21:56:10.486861 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.487362 kubelet[3315]: E0805 21:56:10.487148 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.487362 kubelet[3315]: W0805 21:56:10.487177 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.488990 kubelet[3315]: E0805 21:56:10.488679 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.489779 kubelet[3315]: E0805 21:56:10.489630 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.489779 kubelet[3315]: W0805 21:56:10.489662 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.490312 kubelet[3315]: E0805 21:56:10.490103 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.491687 kubelet[3315]: E0805 21:56:10.491371 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.491687 kubelet[3315]: W0805 21:56:10.491426 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.492879 kubelet[3315]: E0805 21:56:10.492778 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.493789 kubelet[3315]: E0805 21:56:10.493582 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.493789 kubelet[3315]: W0805 21:56:10.493636 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.495048 kubelet[3315]: E0805 21:56:10.494972 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.496768 kubelet[3315]: E0805 21:56:10.496269 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.497187 kubelet[3315]: W0805 21:56:10.496783 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.498270 kubelet[3315]: E0805 21:56:10.497879 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.501054 kubelet[3315]: E0805 21:56:10.500995 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.501593 kubelet[3315]: W0805 21:56:10.501269 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.504072 kubelet[3315]: E0805 21:56:10.503435 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.504072 kubelet[3315]: W0805 21:56:10.503594 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.504571 kubelet[3315]: E0805 21:56:10.503620 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.505559 kubelet[3315]: E0805 21:56:10.504854 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.505559 kubelet[3315]: E0805 21:56:10.505041 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.505559 kubelet[3315]: W0805 21:56:10.505083 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.505559 kubelet[3315]: E0805 21:56:10.505146 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.506581 kubelet[3315]: E0805 21:56:10.506499 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.506816 kubelet[3315]: W0805 21:56:10.506773 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.507063 kubelet[3315]: E0805 21:56:10.507035 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.507941 kubelet[3315]: E0805 21:56:10.507892 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.507941 kubelet[3315]: W0805 21:56:10.507936 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.508224 kubelet[3315]: E0805 21:56:10.507997 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.509760 kubelet[3315]: E0805 21:56:10.509363 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.509760 kubelet[3315]: W0805 21:56:10.509442 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.509760 kubelet[3315]: E0805 21:56:10.509654 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.557202 kubelet[3315]: E0805 21:56:10.556997 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.557202 kubelet[3315]: W0805 21:56:10.557053 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.557202 kubelet[3315]: E0805 21:56:10.557098 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:10.562388 kubelet[3315]: E0805 21:56:10.561687 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:10.562388 kubelet[3315]: W0805 21:56:10.561795 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:10.562388 kubelet[3315]: E0805 21:56:10.561884 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 5 21:56:10.562388 kubelet[3315]: I0805 21:56:10.561979 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d27212b5-d6e4-4e2e-9f51-5d83c596a8d0-varrun\") pod \"csi-node-driver-k4s94\" (UID: \"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0\") " pod="calico-system/csi-node-driver-k4s94" Aug 5 21:56:10.566408 kubelet[3315]: E0805 21:56:10.566351 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.567906 kubelet[3315]: W0805 21:56:10.567805 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.568564 kubelet[3315]: E0805 21:56:10.568193 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.568564 kubelet[3315]: I0805 21:56:10.568286 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d27212b5-d6e4-4e2e-9f51-5d83c596a8d0-socket-dir\") pod \"csi-node-driver-k4s94\" (UID: \"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0\") " pod="calico-system/csi-node-driver-k4s94" Aug 5 21:56:10.570634 kubelet[3315]: E0805 21:56:10.570002 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.570943 kubelet[3315]: W0805 21:56:10.570877 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.573559 kubelet[3315]: E0805 21:56:10.571197 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.573559 kubelet[3315]: I0805 21:56:10.571327 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d27212b5-d6e4-4e2e-9f51-5d83c596a8d0-registration-dir\") pod \"csi-node-driver-k4s94\" (UID: \"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0\") " pod="calico-system/csi-node-driver-k4s94" Aug 5 21:56:10.575513 kubelet[3315]: E0805 21:56:10.574922 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.575513 kubelet[3315]: W0805 21:56:10.574992 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.575513 kubelet[3315]: E0805 21:56:10.575125 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.576129 kubelet[3315]: E0805 21:56:10.576094 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.576271 kubelet[3315]: W0805 21:56:10.576242 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.576692 kubelet[3315]: E0805 21:56:10.576571 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.580332 kubelet[3315]: E0805 21:56:10.579861 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.580332 kubelet[3315]: W0805 21:56:10.579936 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.580332 kubelet[3315]: E0805 21:56:10.580084 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.580332 kubelet[3315]: I0805 21:56:10.580168 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d27212b5-d6e4-4e2e-9f51-5d83c596a8d0-kubelet-dir\") pod \"csi-node-driver-k4s94\" (UID: \"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0\") " pod="calico-system/csi-node-driver-k4s94" Aug 5 21:56:10.582903 kubelet[3315]: E0805 21:56:10.581859 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.582903 kubelet[3315]: W0805 21:56:10.581992 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.582903 kubelet[3315]: E0805 21:56:10.582188 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.584666 kubelet[3315]: E0805 21:56:10.583813 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.585647 kubelet[3315]: W0805 21:56:10.585072 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.585647 kubelet[3315]: E0805 21:56:10.585183 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.587803 kubelet[3315]: E0805 21:56:10.587735 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.588827 kubelet[3315]: W0805 21:56:10.588730 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.589354 kubelet[3315]: E0805 21:56:10.589258 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.590681 kubelet[3315]: E0805 21:56:10.590607 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.590681 kubelet[3315]: W0805 21:56:10.590668 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.591143 kubelet[3315]: E0805 21:56:10.590725 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.593618 kubelet[3315]: E0805 21:56:10.593406 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.593618 kubelet[3315]: W0805 21:56:10.593576 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.594309 kubelet[3315]: E0805 21:56:10.593751 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.595527 kubelet[3315]: E0805 21:56:10.595434 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.595527 kubelet[3315]: W0805 21:56:10.595511 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.595772 kubelet[3315]: E0805 21:56:10.595558 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.598985 kubelet[3315]: E0805 21:56:10.598843 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.598985 kubelet[3315]: W0805 21:56:10.598944 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.599391 kubelet[3315]: E0805 21:56:10.599028 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.599391 kubelet[3315]: I0805 21:56:10.599161 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmdcr\" (UniqueName: \"kubernetes.io/projected/d27212b5-d6e4-4e2e-9f51-5d83c596a8d0-kube-api-access-zmdcr\") pod \"csi-node-driver-k4s94\" (UID: \"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0\") " pod="calico-system/csi-node-driver-k4s94" Aug 5 21:56:10.601665 kubelet[3315]: E0805 21:56:10.601549 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.601665 kubelet[3315]: W0805 21:56:10.601659 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.601665 kubelet[3315]: E0805 21:56:10.601738 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.603232 kubelet[3315]: E0805 21:56:10.603152 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.603232 kubelet[3315]: W0805 21:56:10.603210 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.603232 kubelet[3315]: E0805 21:56:10.603293 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.700846 kubelet[3315]: E0805 21:56:10.700738 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.700846 kubelet[3315]: W0805 21:56:10.700849 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.701313 kubelet[3315]: E0805 21:56:10.701180 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.705123 kubelet[3315]: E0805 21:56:10.705002 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.705497 kubelet[3315]: W0805 21:56:10.705129 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.705497 kubelet[3315]: E0805 21:56:10.705238 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.706749 kubelet[3315]: E0805 21:56:10.706315 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.706749 kubelet[3315]: W0805 21:56:10.706362 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.707804 kubelet[3315]: E0805 21:56:10.707045 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.707804 kubelet[3315]: W0805 21:56:10.707091 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.707804 kubelet[3315]: E0805 21:56:10.707154 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.707804 kubelet[3315]: E0805 21:56:10.707337 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.709429 kubelet[3315]: E0805 21:56:10.709103 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.709429 kubelet[3315]: W0805 21:56:10.709139 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.709429 kubelet[3315]: E0805 21:56:10.709217 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.711182 kubelet[3315]: E0805 21:56:10.709949 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.711182 kubelet[3315]: W0805 21:56:10.710010 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.711182 kubelet[3315]: E0805 21:56:10.710095 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.711182 kubelet[3315]: E0805 21:56:10.711022 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.711182 kubelet[3315]: W0805 21:56:10.711088 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.712763 kubelet[3315]: E0805 21:56:10.712624 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.714416 kubelet[3315]: E0805 21:56:10.713936 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.714416 kubelet[3315]: W0805 21:56:10.714000 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.717191 kubelet[3315]: E0805 21:56:10.714789 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.717191 kubelet[3315]: W0805 21:56:10.714822 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.717191 kubelet[3315]: E0805 21:56:10.715280 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.717191 kubelet[3315]: E0805 21:56:10.715540 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.718754 kubelet[3315]: E0805 21:56:10.718194 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.718754 kubelet[3315]: W0805 21:56:10.718304 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.718754 kubelet[3315]: E0805 21:56:10.718539 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.719570 kubelet[3315]: E0805 21:56:10.719354 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.719570 kubelet[3315]: W0805 21:56:10.719385 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.719707 kubelet[3315]: E0805 21:56:10.719638 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.720761 kubelet[3315]: E0805 21:56:10.720350 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.721383 kubelet[3315]: W0805 21:56:10.720777 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.721797 kubelet[3315]: E0805 21:56:10.721572 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.722668 kubelet[3315]: E0805 21:56:10.722368 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.722668 kubelet[3315]: W0805 21:56:10.722421 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.722668 kubelet[3315]: E0805 21:56:10.722574 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.724149 kubelet[3315]: E0805 21:56:10.723991 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.724149 kubelet[3315]: W0805 21:56:10.724021 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.724419 kubelet[3315]: E0805 21:56:10.724363 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.724706 kubelet[3315]: E0805 21:56:10.724561 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.724706 kubelet[3315]: W0805 21:56:10.724585 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.726004 kubelet[3315]: E0805 21:56:10.725292 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.727537 kubelet[3315]: E0805 21:56:10.727405 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.727537 kubelet[3315]: W0805 21:56:10.727447 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.728779 kubelet[3315]: E0805 21:56:10.728600 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.729963 kubelet[3315]: E0805 21:56:10.729854 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.729963 kubelet[3315]: W0805 21:56:10.729902 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.730863 kubelet[3315]: E0805 21:56:10.730801 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.731903 kubelet[3315]: E0805 21:56:10.731762 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.731903 kubelet[3315]: W0805 21:56:10.731824 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.732856 kubelet[3315]: E0805 21:56:10.732049 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.733728 kubelet[3315]: E0805 21:56:10.733671 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.733728 kubelet[3315]: W0805 21:56:10.733712 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.733929 kubelet[3315]: E0805 21:56:10.733850 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.735429 kubelet[3315]: E0805 21:56:10.735193 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.735429 kubelet[3315]: W0805 21:56:10.735257 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.737022 kubelet[3315]: E0805 21:56:10.736808 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.737381 kubelet[3315]: E0805 21:56:10.737191 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.737381 kubelet[3315]: W0805 21:56:10.737239 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.738285 kubelet[3315]: E0805 21:56:10.737951 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.740160 kubelet[3315]: E0805 21:56:10.739853 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.740160 kubelet[3315]: W0805 21:56:10.739976 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.741881 kubelet[3315]: E0805 21:56:10.741659 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.742829 kubelet[3315]: E0805 21:56:10.742767 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.742829 kubelet[3315]: W0805 21:56:10.742819 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.743022 kubelet[3315]: E0805 21:56:10.742979 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.743581 kubelet[3315]: E0805 21:56:10.743540 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.743581 kubelet[3315]: W0805 21:56:10.743574 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.743744 kubelet[3315]: E0805 21:56:10.743618 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.745251 kubelet[3315]: E0805 21:56:10.744759 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.745251 kubelet[3315]: W0805 21:56:10.744830 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.745251 kubelet[3315]: E0805 21:56:10.744890 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:10.800181 kubelet[3315]: E0805 21:56:10.799779 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:10.800181 kubelet[3315]: W0805 21:56:10.799855 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:10.800181 kubelet[3315]: E0805 21:56:10.799932 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:10.813425 containerd[2015]: time="2024-08-05T21:56:10.812880879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c5lld,Uid:22f22430-5ae1-4c03-8137-d7e8a0477a78,Namespace:calico-system,Attempt:0,}" Aug 5 21:56:10.884942 containerd[2015]: time="2024-08-05T21:56:10.883863615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:56:10.885418 containerd[2015]: time="2024-08-05T21:56:10.884861607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:10.886658 containerd[2015]: time="2024-08-05T21:56:10.885385563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:56:10.888112 containerd[2015]: time="2024-08-05T21:56:10.887271723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:10.952322 systemd[1]: Started cri-containerd-ade04d6d4f8e9dbdbaf8429c6c2e331fca8a31c9c8245dff1cfd687cdbc24fb7.scope - libcontainer container ade04d6d4f8e9dbdbaf8429c6c2e331fca8a31c9c8245dff1cfd687cdbc24fb7. 
Aug 5 21:56:10.983985 containerd[2015]: time="2024-08-05T21:56:10.983887840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c87d4bb7-2bvbm,Uid:54efaf5e-b06a-43dd-a0fb-36c447e8c8a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0c097e9f482e481cf53ab9181962d9bdda39fb4fee0f70976620996fd7e2924\"" Aug 5 21:56:10.993649 containerd[2015]: time="2024-08-05T21:56:10.992881468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 21:56:11.118144 containerd[2015]: time="2024-08-05T21:56:11.112564945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c5lld,Uid:22f22430-5ae1-4c03-8137-d7e8a0477a78,Namespace:calico-system,Attempt:0,} returns sandbox id \"ade04d6d4f8e9dbdbaf8429c6c2e331fca8a31c9c8245dff1cfd687cdbc24fb7\"" Aug 5 21:56:12.229294 kubelet[3315]: E0805 21:56:12.228725 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4s94" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0" Aug 5 21:56:13.629165 containerd[2015]: time="2024-08-05T21:56:13.629073545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:13.633657 containerd[2015]: time="2024-08-05T21:56:13.632519081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Aug 5 21:56:13.638322 containerd[2015]: time="2024-08-05T21:56:13.638105501Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:13.652961 containerd[2015]: time="2024-08-05T21:56:13.652876445Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:13.658804 containerd[2015]: time="2024-08-05T21:56:13.658089377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.665124041s" Aug 5 21:56:13.658804 containerd[2015]: time="2024-08-05T21:56:13.658378133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Aug 5 21:56:13.664605 containerd[2015]: time="2024-08-05T21:56:13.663660245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 21:56:13.725363 containerd[2015]: time="2024-08-05T21:56:13.725275770Z" level=info msg="CreateContainer within sandbox \"b0c097e9f482e481cf53ab9181962d9bdda39fb4fee0f70976620996fd7e2924\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 21:56:13.790307 containerd[2015]: time="2024-08-05T21:56:13.790200282Z" level=info msg="CreateContainer within sandbox \"b0c097e9f482e481cf53ab9181962d9bdda39fb4fee0f70976620996fd7e2924\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b30e87230e4b8249a33ceb3f20e259c5a784b86de288be7726493c54ef7d0a7f\"" Aug 5 21:56:13.791976 containerd[2015]: time="2024-08-05T21:56:13.791873046Z" level=info msg="StartContainer for \"b30e87230e4b8249a33ceb3f20e259c5a784b86de288be7726493c54ef7d0a7f\"" Aug 5 21:56:13.900836 systemd[1]: Started cri-containerd-b30e87230e4b8249a33ceb3f20e259c5a784b86de288be7726493c54ef7d0a7f.scope - libcontainer container 
b30e87230e4b8249a33ceb3f20e259c5a784b86de288be7726493c54ef7d0a7f. Aug 5 21:56:14.057311 containerd[2015]: time="2024-08-05T21:56:14.057244179Z" level=info msg="StartContainer for \"b30e87230e4b8249a33ceb3f20e259c5a784b86de288be7726493c54ef7d0a7f\" returns successfully" Aug 5 21:56:14.229249 kubelet[3315]: E0805 21:56:14.228283 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4s94" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0" Aug 5 21:56:14.614990 kubelet[3315]: E0805 21:56:14.614457 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.614990 kubelet[3315]: W0805 21:56:14.614682 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.614990 kubelet[3315]: E0805 21:56:14.614728 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.618562 kubelet[3315]: E0805 21:56:14.617076 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.618562 kubelet[3315]: W0805 21:56:14.617116 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.618562 kubelet[3315]: E0805 21:56:14.617158 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.618562 kubelet[3315]: E0805 21:56:14.617664 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.618562 kubelet[3315]: W0805 21:56:14.617699 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.618562 kubelet[3315]: E0805 21:56:14.617737 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.619323 kubelet[3315]: E0805 21:56:14.618633 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.619323 kubelet[3315]: W0805 21:56:14.618680 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.619323 kubelet[3315]: E0805 21:56:14.618730 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.619323 kubelet[3315]: E0805 21:56:14.619358 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.619323 kubelet[3315]: W0805 21:56:14.619382 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.619323 kubelet[3315]: E0805 21:56:14.619415 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.619323 kubelet[3315]: E0805 21:56:14.619809 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.619323 kubelet[3315]: W0805 21:56:14.619831 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.619323 kubelet[3315]: E0805 21:56:14.619862 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.619323 kubelet[3315]: E0805 21:56:14.621259 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.622781 kubelet[3315]: W0805 21:56:14.621322 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.622781 kubelet[3315]: E0805 21:56:14.621373 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.622781 kubelet[3315]: E0805 21:56:14.621923 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.622781 kubelet[3315]: W0805 21:56:14.621943 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.622781 kubelet[3315]: E0805 21:56:14.621971 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.622781 kubelet[3315]: E0805 21:56:14.622355 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.622781 kubelet[3315]: W0805 21:56:14.622379 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.622781 kubelet[3315]: E0805 21:56:14.622415 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.623319 kubelet[3315]: E0805 21:56:14.622994 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.623319 kubelet[3315]: W0805 21:56:14.623072 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.623319 kubelet[3315]: E0805 21:56:14.623131 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.625778 kubelet[3315]: E0805 21:56:14.623763 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.625778 kubelet[3315]: W0805 21:56:14.623848 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.625778 kubelet[3315]: E0805 21:56:14.623889 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.625778 kubelet[3315]: E0805 21:56:14.624582 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.625778 kubelet[3315]: W0805 21:56:14.624616 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.625778 kubelet[3315]: E0805 21:56:14.624657 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.625778 kubelet[3315]: E0805 21:56:14.625296 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.625778 kubelet[3315]: W0805 21:56:14.625333 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.625778 kubelet[3315]: E0805 21:56:14.625376 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.630294 kubelet[3315]: E0805 21:56:14.626823 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.630294 kubelet[3315]: W0805 21:56:14.626852 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.630294 kubelet[3315]: E0805 21:56:14.626888 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.630294 kubelet[3315]: E0805 21:56:14.629940 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.630294 kubelet[3315]: W0805 21:56:14.629974 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.630294 kubelet[3315]: E0805 21:56:14.630018 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.661165 kubelet[3315]: E0805 21:56:14.660903 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.661835 kubelet[3315]: W0805 21:56:14.661753 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.662012 kubelet[3315]: E0805 21:56:14.661868 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.663106 kubelet[3315]: E0805 21:56:14.662945 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.663106 kubelet[3315]: W0805 21:56:14.663021 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.663106 kubelet[3315]: E0805 21:56:14.663109 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.664725 kubelet[3315]: E0805 21:56:14.663842 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.664725 kubelet[3315]: W0805 21:56:14.663887 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.664725 kubelet[3315]: E0805 21:56:14.664042 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.665752 kubelet[3315]: E0805 21:56:14.664835 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.665752 kubelet[3315]: W0805 21:56:14.664876 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.665752 kubelet[3315]: E0805 21:56:14.664996 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.666344 kubelet[3315]: E0805 21:56:14.665955 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.666344 kubelet[3315]: W0805 21:56:14.665994 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.666344 kubelet[3315]: E0805 21:56:14.666064 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.668262 kubelet[3315]: E0805 21:56:14.666835 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.668262 kubelet[3315]: W0805 21:56:14.666866 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.668262 kubelet[3315]: E0805 21:56:14.666913 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.668547 kubelet[3315]: E0805 21:56:14.668359 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.668547 kubelet[3315]: W0805 21:56:14.668387 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.668547 kubelet[3315]: E0805 21:56:14.668433 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.669674 kubelet[3315]: E0805 21:56:14.669090 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.669674 kubelet[3315]: W0805 21:56:14.669142 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.669674 kubelet[3315]: E0805 21:56:14.669303 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.671530 kubelet[3315]: E0805 21:56:14.670403 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.671530 kubelet[3315]: W0805 21:56:14.670593 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.671530 kubelet[3315]: E0805 21:56:14.670770 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.671530 kubelet[3315]: E0805 21:56:14.671241 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.671530 kubelet[3315]: W0805 21:56:14.671265 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.671530 kubelet[3315]: E0805 21:56:14.671379 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.672983 kubelet[3315]: E0805 21:56:14.671892 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.672983 kubelet[3315]: W0805 21:56:14.671944 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.672983 kubelet[3315]: E0805 21:56:14.672325 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.672983 kubelet[3315]: E0805 21:56:14.673085 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.672983 kubelet[3315]: W0805 21:56:14.673126 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.672983 kubelet[3315]: E0805 21:56:14.673280 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.675392 kubelet[3315]: E0805 21:56:14.674017 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.675392 kubelet[3315]: W0805 21:56:14.674060 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.675392 kubelet[3315]: E0805 21:56:14.674111 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.675392 kubelet[3315]: E0805 21:56:14.675088 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.675392 kubelet[3315]: W0805 21:56:14.675135 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.675392 kubelet[3315]: E0805 21:56:14.675217 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.682503 kubelet[3315]: E0805 21:56:14.681809 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.682503 kubelet[3315]: W0805 21:56:14.681874 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.682503 kubelet[3315]: E0805 21:56:14.681938 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.682503 kubelet[3315]: E0805 21:56:14.682451 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.682930 kubelet[3315]: W0805 21:56:14.682561 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.682930 kubelet[3315]: E0805 21:56:14.682598 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:14.685190 kubelet[3315]: E0805 21:56:14.685110 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.685190 kubelet[3315]: W0805 21:56:14.685165 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.686803 kubelet[3315]: E0805 21:56:14.685330 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:56:14.686803 kubelet[3315]: E0805 21:56:14.686151 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:56:14.686803 kubelet[3315]: W0805 21:56:14.686199 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:56:14.686803 kubelet[3315]: E0805 21:56:14.686266 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:56:15.245518 containerd[2015]: time="2024-08-05T21:56:15.243023933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:15.251272 containerd[2015]: time="2024-08-05T21:56:15.251074841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Aug 5 21:56:15.253957 containerd[2015]: time="2024-08-05T21:56:15.253829765Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:15.271510 containerd[2015]: time="2024-08-05T21:56:15.269442653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:15.277920 containerd[2015]: time="2024-08-05T21:56:15.273327053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.609420928s" Aug 5 21:56:15.278144 containerd[2015]: time="2024-08-05T21:56:15.278101277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Aug 5 21:56:15.295745 containerd[2015]: time="2024-08-05T21:56:15.294023969Z" level=info msg="CreateContainer within sandbox \"ade04d6d4f8e9dbdbaf8429c6c2e331fca8a31c9c8245dff1cfd687cdbc24fb7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 21:56:15.366754 containerd[2015]: time="2024-08-05T21:56:15.366621246Z" level=info msg="CreateContainer within sandbox \"ade04d6d4f8e9dbdbaf8429c6c2e331fca8a31c9c8245dff1cfd687cdbc24fb7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d\"" Aug 5 21:56:15.369444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460992984.mount: Deactivated successfully. Aug 5 21:56:15.372649 containerd[2015]: time="2024-08-05T21:56:15.372495834Z" level=info msg="StartContainer for \"c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d\"" Aug 5 21:56:15.480394 systemd[1]: run-containerd-runc-k8s.io-c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d-runc.yFLUyC.mount: Deactivated successfully. Aug 5 21:56:15.502699 systemd[1]: Started cri-containerd-c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d.scope - libcontainer container c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d. 
Aug 5 21:56:15.563315 kubelet[3315]: I0805 21:56:15.561899 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 5 21:56:15.642691 kubelet[3315]: E0805 21:56:15.641560 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:15.642691 kubelet[3315]: W0805 21:56:15.641602 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:15.642691 kubelet[3315]: E0805 21:56:15.641643 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:15.646004 containerd[2015]: time="2024-08-05T21:56:15.642435991Z" level=info msg="StartContainer for \"c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d\" returns successfully"
Aug 5 21:56:15.700697 kubelet[3315]: E0805 21:56:15.700635 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 21:56:15.700697 kubelet[3315]: W0805 21:56:15.700676 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 21:56:15.700866 kubelet[3315]: E0805 21:56:15.700713 3315 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 21:56:15.715256 systemd[1]: cri-containerd-c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d.scope: Deactivated successfully.
Aug 5 21:56:16.207153 containerd[2015]: time="2024-08-05T21:56:16.206710290Z" level=info msg="shim disconnected" id=c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d namespace=k8s.io
Aug 5 21:56:16.207153 containerd[2015]: time="2024-08-05T21:56:16.206826810Z" level=warning msg="cleaning up after shim disconnected" id=c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d namespace=k8s.io
Aug 5 21:56:16.207153 containerd[2015]: time="2024-08-05T21:56:16.206858490Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:56:16.232619 kubelet[3315]: E0805 21:56:16.231769 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4s94" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0"
Aug 5 21:56:16.354755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4ff8c919ad2aa3b10e079dff9ea5a6cbfbbaf87102c3fc3db342f068920fc0d-rootfs.mount: Deactivated successfully.
Aug 5 21:56:16.577295 containerd[2015]: time="2024-08-05T21:56:16.576697592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Aug 5 21:56:16.625414 kubelet[3315]: I0805 21:56:16.625329 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5c87d4bb7-2bvbm" podStartSLOduration=4.954456895 podCreationTimestamp="2024-08-05 21:56:09 +0000 UTC" firstStartedPulling="2024-08-05 21:56:10.990052768 +0000 UTC m=+23.123610416" lastFinishedPulling="2024-08-05 21:56:13.660854285 +0000 UTC m=+25.794411945" observedRunningTime="2024-08-05 21:56:14.589881354 +0000 UTC m=+26.723439014" watchObservedRunningTime="2024-08-05 21:56:16.625258424 +0000 UTC m=+28.758816180"
Aug 5 21:56:18.238869 kubelet[3315]: E0805 21:56:18.234179 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4s94" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0"
Aug 5 21:56:20.230589 kubelet[3315]: E0805 21:56:20.229402 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4s94" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0"
Aug 5 21:56:21.682941 containerd[2015]: time="2024-08-05T21:56:21.682843741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:56:21.685757 containerd[2015]: time="2024-08-05T21:56:21.685430041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715"
Aug 5 21:56:21.687861 containerd[2015]: time="2024-08-05T21:56:21.687518677Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:56:21.696448 containerd[2015]: time="2024-08-05T21:56:21.696352561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 21:56:21.698858 containerd[2015]: time="2024-08-05T21:56:21.698778445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 5.121989893s"
Aug 5 21:56:21.701935 containerd[2015]: time="2024-08-05T21:56:21.699047989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\""
Aug 5 21:56:21.706664 containerd[2015]: time="2024-08-05T21:56:21.706435009Z" level=info msg="CreateContainer within sandbox \"ade04d6d4f8e9dbdbaf8429c6c2e331fca8a31c9c8245dff1cfd687cdbc24fb7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 5 21:56:21.742047 containerd[2015]: time="2024-08-05T21:56:21.741532789Z" level=info msg="CreateContainer within sandbox \"ade04d6d4f8e9dbdbaf8429c6c2e331fca8a31c9c8245dff1cfd687cdbc24fb7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"961019136544693c91c5ab5ebe8ce39d73d82e334b3d039e0f07cefaa4282884\""
Aug 5 21:56:21.748224 containerd[2015]: time="2024-08-05T21:56:21.743776741Z" level=info msg="StartContainer for \"961019136544693c91c5ab5ebe8ce39d73d82e334b3d039e0f07cefaa4282884\""
Aug 5 21:56:21.751391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176327543.mount: Deactivated successfully.
Aug 5 21:56:21.836145 systemd[1]: Started cri-containerd-961019136544693c91c5ab5ebe8ce39d73d82e334b3d039e0f07cefaa4282884.scope - libcontainer container 961019136544693c91c5ab5ebe8ce39d73d82e334b3d039e0f07cefaa4282884.
Aug 5 21:56:21.928666 containerd[2015]: time="2024-08-05T21:56:21.928531154Z" level=info msg="StartContainer for \"961019136544693c91c5ab5ebe8ce39d73d82e334b3d039e0f07cefaa4282884\" returns successfully"
Aug 5 21:56:22.229363 kubelet[3315]: E0805 21:56:22.228631 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4s94" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0"
Aug 5 21:56:22.971897 containerd[2015]: time="2024-08-05T21:56:22.971780631Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 21:56:22.978800 systemd[1]: cri-containerd-961019136544693c91c5ab5ebe8ce39d73d82e334b3d039e0f07cefaa4282884.scope: Deactivated successfully.
Aug 5 21:56:23.001869 kubelet[3315]: I0805 21:56:23.001556 3315 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Aug 5 21:56:23.109675 kubelet[3315]: I0805 21:56:23.107126 3315 topology_manager.go:215] "Topology Admit Handler" podUID="d7c936aa-fa2e-4a71-ab28-36e483f65d67" podNamespace="kube-system" podName="coredns-5dd5756b68-t762x"
Aug 5 21:56:23.114978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-961019136544693c91c5ab5ebe8ce39d73d82e334b3d039e0f07cefaa4282884-rootfs.mount: Deactivated successfully.
Aug 5 21:56:23.146995 systemd[1]: Created slice kubepods-burstable-podd7c936aa_fa2e_4a71_ab28_36e483f65d67.slice - libcontainer container kubepods-burstable-podd7c936aa_fa2e_4a71_ab28_36e483f65d67.slice.
Aug 5 21:56:23.157692 kubelet[3315]: I0805 21:56:23.155949 3315 topology_manager.go:215] "Topology Admit Handler" podUID="42fbd30e-8f61-40db-b477-f13005157c9f" podNamespace="kube-system" podName="coredns-5dd5756b68-grq8t"
Aug 5 21:56:23.172986 kubelet[3315]: I0805 21:56:23.171317 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7c936aa-fa2e-4a71-ab28-36e483f65d67-config-volume\") pod \"coredns-5dd5756b68-t762x\" (UID: \"d7c936aa-fa2e-4a71-ab28-36e483f65d67\") " pod="kube-system/coredns-5dd5756b68-t762x"
Aug 5 21:56:23.172986 kubelet[3315]: I0805 21:56:23.171612 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnbw5\" (UniqueName: \"kubernetes.io/projected/d7c936aa-fa2e-4a71-ab28-36e483f65d67-kube-api-access-vnbw5\") pod \"coredns-5dd5756b68-t762x\" (UID: \"d7c936aa-fa2e-4a71-ab28-36e483f65d67\") " pod="kube-system/coredns-5dd5756b68-t762x"
Aug 5 21:56:23.186782 kubelet[3315]: I0805 21:56:23.186686 3315 topology_manager.go:215] "Topology Admit Handler" podUID="8dc15cde-1f64-4a5a-924e-a115890fac64" podNamespace="calico-system" podName="calico-kube-controllers-8876484d6-njtsj"
Aug 5 21:56:23.208437 systemd[1]: Created slice kubepods-burstable-pod42fbd30e_8f61_40db_b477_f13005157c9f.slice - libcontainer container kubepods-burstable-pod42fbd30e_8f61_40db_b477_f13005157c9f.slice.
Aug 5 21:56:23.240280 systemd[1]: Created slice kubepods-besteffort-pod8dc15cde_1f64_4a5a_924e_a115890fac64.slice - libcontainer container kubepods-besteffort-pod8dc15cde_1f64_4a5a_924e_a115890fac64.slice.
Aug 5 21:56:23.274150 kubelet[3315]: I0805 21:56:23.274069 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m68z\" (UniqueName: \"kubernetes.io/projected/8dc15cde-1f64-4a5a-924e-a115890fac64-kube-api-access-9m68z\") pod \"calico-kube-controllers-8876484d6-njtsj\" (UID: \"8dc15cde-1f64-4a5a-924e-a115890fac64\") " pod="calico-system/calico-kube-controllers-8876484d6-njtsj"
Aug 5 21:56:23.277032 kubelet[3315]: I0805 21:56:23.274161 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dc15cde-1f64-4a5a-924e-a115890fac64-tigera-ca-bundle\") pod \"calico-kube-controllers-8876484d6-njtsj\" (UID: \"8dc15cde-1f64-4a5a-924e-a115890fac64\") " pod="calico-system/calico-kube-controllers-8876484d6-njtsj"
Aug 5 21:56:23.277032 kubelet[3315]: I0805 21:56:23.274219 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42fbd30e-8f61-40db-b477-f13005157c9f-config-volume\") pod \"coredns-5dd5756b68-grq8t\" (UID: \"42fbd30e-8f61-40db-b477-f13005157c9f\") " pod="kube-system/coredns-5dd5756b68-grq8t"
Aug 5 21:56:23.277032 kubelet[3315]: I0805 21:56:23.274347 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8q4x\" (UniqueName: \"kubernetes.io/projected/42fbd30e-8f61-40db-b477-f13005157c9f-kube-api-access-w8q4x\") pod \"coredns-5dd5756b68-grq8t\" (UID: \"42fbd30e-8f61-40db-b477-f13005157c9f\") " pod="kube-system/coredns-5dd5756b68-grq8t"
Aug 5 21:56:23.374287 kubelet[3315]: I0805 21:56:23.373687 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 5 21:56:23.470247 containerd[2015]: time="2024-08-05T21:56:23.470120258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-t762x,Uid:d7c936aa-fa2e-4a71-ab28-36e483f65d67,Namespace:kube-system,Attempt:0,}"
Aug 5 21:56:23.531444 containerd[2015]: time="2024-08-05T21:56:23.531213758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-grq8t,Uid:42fbd30e-8f61-40db-b477-f13005157c9f,Namespace:kube-system,Attempt:0,}"
Aug 5 21:56:23.562217 containerd[2015]: time="2024-08-05T21:56:23.562043870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8876484d6-njtsj,Uid:8dc15cde-1f64-4a5a-924e-a115890fac64,Namespace:calico-system,Attempt:0,}"
Aug 5 21:56:23.914917 containerd[2015]: time="2024-08-05T21:56:23.914259124Z" level=info msg="shim disconnected" id=961019136544693c91c5ab5ebe8ce39d73d82e334b3d039e0f07cefaa4282884 namespace=k8s.io
Aug 5 21:56:23.917196 containerd[2015]: time="2024-08-05T21:56:23.916633444Z" level=warning msg="cleaning up after shim disconnected" id=961019136544693c91c5ab5ebe8ce39d73d82e334b3d039e0f07cefaa4282884 namespace=k8s.io
Aug 5 21:56:23.917887 containerd[2015]: time="2024-08-05T21:56:23.916982140Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:56:24.068794 containerd[2015]: time="2024-08-05T21:56:24.068432737Z" level=error msg="Failed to destroy network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 21:56:24.072015 containerd[2015]: time="2024-08-05T21:56:24.071240869Z" level=error msg="encountered an error cleaning up failed sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 21:56:24.072015 containerd[2015]: time="2024-08-05T21:56:24.071407441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8876484d6-njtsj,Uid:8dc15cde-1f64-4a5a-924e-a115890fac64,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 21:56:24.073795 kubelet[3315]: E0805 21:56:24.073740 3315 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 21:56:24.076229 kubelet[3315]: E0805 21:56:24.074435 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8876484d6-njtsj"
Aug 5 21:56:24.076229 kubelet[3315]: E0805 21:56:24.074594 3315 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8876484d6-njtsj"
Aug 5 21:56:24.076229 kubelet[3315]: E0805 21:56:24.074791 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8876484d6-njtsj_calico-system(8dc15cde-1f64-4a5a-924e-a115890fac64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8876484d6-njtsj_calico-system(8dc15cde-1f64-4a5a-924e-a115890fac64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8876484d6-njtsj" podUID="8dc15cde-1f64-4a5a-924e-a115890fac64"
Aug 5 21:56:24.095856 containerd[2015]: time="2024-08-05T21:56:24.095634889Z" level=error msg="Failed to destroy network for sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 21:56:24.096721 containerd[2015]: time="2024-08-05T21:56:24.096425401Z" level=error msg="encountered an error cleaning up failed sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 21:56:24.098589 containerd[2015]: time="2024-08-05T21:56:24.096722437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-grq8t,Uid:42fbd30e-8f61-40db-b477-f13005157c9f,Namespace:kube-system,Attempt:0,} failed, error"
error="failed to setup network for sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.098766 kubelet[3315]: E0805 21:56:24.097284 3315 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.098766 kubelet[3315]: E0805 21:56:24.097499 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-grq8t" Aug 5 21:56:24.098766 kubelet[3315]: E0805 21:56:24.097560 3315 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-grq8t" Aug 5 21:56:24.098964 kubelet[3315]: E0805 21:56:24.097661 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-grq8t_kube-system(42fbd30e-8f61-40db-b477-f13005157c9f)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-5dd5756b68-grq8t_kube-system(42fbd30e-8f61-40db-b477-f13005157c9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-grq8t" podUID="42fbd30e-8f61-40db-b477-f13005157c9f" Aug 5 21:56:24.110602 containerd[2015]: time="2024-08-05T21:56:24.110234953Z" level=error msg="Failed to destroy network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.113665 containerd[2015]: time="2024-08-05T21:56:24.112561573Z" level=error msg="encountered an error cleaning up failed sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.113665 containerd[2015]: time="2024-08-05T21:56:24.112721017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-t762x,Uid:d7c936aa-fa2e-4a71-ab28-36e483f65d67,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.117017 kubelet[3315]: E0805 21:56:24.114271 3315 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.117017 kubelet[3315]: E0805 21:56:24.114363 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-t762x" Aug 5 21:56:24.117017 kubelet[3315]: E0805 21:56:24.114409 3315 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-t762x" Aug 5 21:56:24.117271 kubelet[3315]: E0805 21:56:24.114523 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-t762x_kube-system(d7c936aa-fa2e-4a71-ab28-36e483f65d67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-t762x_kube-system(d7c936aa-fa2e-4a71-ab28-36e483f65d67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-t762x" podUID="d7c936aa-fa2e-4a71-ab28-36e483f65d67" Aug 5 21:56:24.123051 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112-shm.mount: Deactivated successfully. Aug 5 21:56:24.244304 systemd[1]: Created slice kubepods-besteffort-podd27212b5_d6e4_4e2e_9f51_5d83c596a8d0.slice - libcontainer container kubepods-besteffort-podd27212b5_d6e4_4e2e_9f51_5d83c596a8d0.slice. Aug 5 21:56:24.250622 containerd[2015]: time="2024-08-05T21:56:24.250535354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4s94,Uid:d27212b5-d6e4-4e2e-9f51-5d83c596a8d0,Namespace:calico-system,Attempt:0,}" Aug 5 21:56:24.435684 containerd[2015]: time="2024-08-05T21:56:24.435506295Z" level=error msg="Failed to destroy network for sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.440593 containerd[2015]: time="2024-08-05T21:56:24.440410191Z" level=error msg="encountered an error cleaning up failed sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.441597 containerd[2015]: time="2024-08-05T21:56:24.441057051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4s94,Uid:d27212b5-d6e4-4e2e-9f51-5d83c596a8d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.444998 kubelet[3315]: E0805 21:56:24.444846 3315 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.447261 kubelet[3315]: E0805 21:56:24.445092 3315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k4s94" Aug 5 21:56:24.447261 kubelet[3315]: E0805 21:56:24.445182 3315 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k4s94" Aug 5 21:56:24.447261 kubelet[3315]: E0805 21:56:24.445376 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k4s94_calico-system(d27212b5-d6e4-4e2e-9f51-5d83c596a8d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k4s94_calico-system(d27212b5-d6e4-4e2e-9f51-5d83c596a8d0)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k4s94" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0" Aug 5 21:56:24.449395 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a-shm.mount: Deactivated successfully. Aug 5 21:56:24.631702 kubelet[3315]: I0805 21:56:24.631435 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:24.633917 containerd[2015]: time="2024-08-05T21:56:24.633636640Z" level=info msg="StopPodSandbox for \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\"" Aug 5 21:56:24.635406 containerd[2015]: time="2024-08-05T21:56:24.635219284Z" level=info msg="Ensure that sandbox 3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a in task-service has been cleanup successfully" Aug 5 21:56:24.638301 kubelet[3315]: I0805 21:56:24.637814 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:24.639948 containerd[2015]: time="2024-08-05T21:56:24.639728728Z" level=info msg="StopPodSandbox for \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\"" Aug 5 21:56:24.641013 containerd[2015]: time="2024-08-05T21:56:24.640739728Z" level=info msg="Ensure that sandbox c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2 in task-service has been cleanup successfully" Aug 5 21:56:24.644968 kubelet[3315]: I0805 21:56:24.643105 3315 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:24.646663 containerd[2015]: time="2024-08-05T21:56:24.646587916Z" level=info msg="StopPodSandbox for \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\"" Aug 5 21:56:24.649095 containerd[2015]: time="2024-08-05T21:56:24.648864292Z" level=info msg="Ensure that sandbox 123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014 in task-service has been cleanup successfully" Aug 5 21:56:24.654737 kubelet[3315]: I0805 21:56:24.654509 3315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:24.660030 containerd[2015]: time="2024-08-05T21:56:24.659121424Z" level=info msg="StopPodSandbox for \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\"" Aug 5 21:56:24.660030 containerd[2015]: time="2024-08-05T21:56:24.659502604Z" level=info msg="Ensure that sandbox ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112 in task-service has been cleanup successfully" Aug 5 21:56:24.683956 containerd[2015]: time="2024-08-05T21:56:24.683841604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 21:56:24.826059 containerd[2015]: time="2024-08-05T21:56:24.825961169Z" level=error msg="StopPodSandbox for \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\" failed" error="failed to destroy network for sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.829238 kubelet[3315]: E0805 21:56:24.828606 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:24.829238 kubelet[3315]: E0805 21:56:24.828923 3315 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a"} Aug 5 21:56:24.829238 kubelet[3315]: E0805 21:56:24.829069 3315 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:56:24.829238 kubelet[3315]: E0805 21:56:24.829161 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k4s94" podUID="d27212b5-d6e4-4e2e-9f51-5d83c596a8d0" Aug 5 21:56:24.855104 containerd[2015]: time="2024-08-05T21:56:24.855008465Z" level=error msg="StopPodSandbox for \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\" failed" error="failed to destroy network for sandbox 
\"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.855851 kubelet[3315]: E0805 21:56:24.855628 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:24.855851 kubelet[3315]: E0805 21:56:24.855710 3315 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014"} Aug 5 21:56:24.855851 kubelet[3315]: E0805 21:56:24.855779 3315 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42fbd30e-8f61-40db-b477-f13005157c9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:56:24.855851 kubelet[3315]: E0805 21:56:24.855849 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42fbd30e-8f61-40db-b477-f13005157c9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-grq8t" podUID="42fbd30e-8f61-40db-b477-f13005157c9f" Aug 5 21:56:24.872031 containerd[2015]: time="2024-08-05T21:56:24.871920833Z" level=error msg="StopPodSandbox for \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\" failed" error="failed to destroy network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.873444 kubelet[3315]: E0805 21:56:24.873057 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:24.873444 kubelet[3315]: E0805 21:56:24.873270 3315 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2"} Aug 5 21:56:24.873444 kubelet[3315]: E0805 21:56:24.873362 3315 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8dc15cde-1f64-4a5a-924e-a115890fac64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Aug 5 21:56:24.874040 kubelet[3315]: E0805 21:56:24.873918 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8dc15cde-1f64-4a5a-924e-a115890fac64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8876484d6-njtsj" podUID="8dc15cde-1f64-4a5a-924e-a115890fac64" Aug 5 21:56:24.881567 containerd[2015]: time="2024-08-05T21:56:24.881425949Z" level=error msg="StopPodSandbox for \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\" failed" error="failed to destroy network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:56:24.883013 kubelet[3315]: E0805 21:56:24.882267 3315 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:24.883013 kubelet[3315]: E0805 21:56:24.882421 3315 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112"} Aug 5 21:56:24.883013 kubelet[3315]: E0805 
21:56:24.882613 3315 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7c936aa-fa2e-4a71-ab28-36e483f65d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:56:24.883013 kubelet[3315]: E0805 21:56:24.882703 3315 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7c936aa-fa2e-4a71-ab28-36e483f65d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-t762x" podUID="d7c936aa-fa2e-4a71-ab28-36e483f65d67" Aug 5 21:56:31.834283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2270797847.mount: Deactivated successfully. 
Aug 5 21:56:31.949376 containerd[2015]: time="2024-08-05T21:56:31.949074144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:31.952793 containerd[2015]: time="2024-08-05T21:56:31.952429896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Aug 5 21:56:31.954615 containerd[2015]: time="2024-08-05T21:56:31.954273780Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:31.964267 containerd[2015]: time="2024-08-05T21:56:31.963947496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:31.967074 containerd[2015]: time="2024-08-05T21:56:31.966671088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 7.282729776s" Aug 5 21:56:31.967074 containerd[2015]: time="2024-08-05T21:56:31.966804576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Aug 5 21:56:32.007872 containerd[2015]: time="2024-08-05T21:56:32.007311008Z" level=info msg="CreateContainer within sandbox \"ade04d6d4f8e9dbdbaf8429c6c2e331fca8a31c9c8245dff1cfd687cdbc24fb7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 21:56:32.082309 containerd[2015]: time="2024-08-05T21:56:32.082115745Z" level=info msg="CreateContainer 
within sandbox \"ade04d6d4f8e9dbdbaf8429c6c2e331fca8a31c9c8245dff1cfd687cdbc24fb7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f3c59f399c582685d71a928801702c66b111d16e6e3d27329a12404f4fe1600f\"" Aug 5 21:56:32.084201 containerd[2015]: time="2024-08-05T21:56:32.083883453Z" level=info msg="StartContainer for \"f3c59f399c582685d71a928801702c66b111d16e6e3d27329a12404f4fe1600f\"" Aug 5 21:56:32.161845 systemd[1]: Started cri-containerd-f3c59f399c582685d71a928801702c66b111d16e6e3d27329a12404f4fe1600f.scope - libcontainer container f3c59f399c582685d71a928801702c66b111d16e6e3d27329a12404f4fe1600f. Aug 5 21:56:32.265721 containerd[2015]: time="2024-08-05T21:56:32.265610302Z" level=info msg="StartContainer for \"f3c59f399c582685d71a928801702c66b111d16e6e3d27329a12404f4fe1600f\" returns successfully" Aug 5 21:56:32.473327 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 21:56:32.473812 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 5 21:56:32.758618 kubelet[3315]: I0805 21:56:32.758330 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-c5lld" podStartSLOduration=1.913800029 podCreationTimestamp="2024-08-05 21:56:10 +0000 UTC" firstStartedPulling="2024-08-05 21:56:11.123351409 +0000 UTC m=+23.256909045" lastFinishedPulling="2024-08-05 21:56:31.967753812 +0000 UTC m=+44.101311460" observedRunningTime="2024-08-05 21:56:32.754268436 +0000 UTC m=+44.887826084" watchObservedRunningTime="2024-08-05 21:56:32.758202444 +0000 UTC m=+44.891760104" Aug 5 21:56:33.915575 kubelet[3315]: I0805 21:56:33.915433 3315 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:56:35.647990 systemd-networkd[1930]: vxlan.calico: Link UP Aug 5 21:56:35.648015 systemd-networkd[1930]: vxlan.calico: Gained carrier Aug 5 21:56:35.665140 (udev-worker)[4551]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:56:35.720899 (udev-worker)[4559]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:56:36.233887 containerd[2015]: time="2024-08-05T21:56:36.233503441Z" level=info msg="StopPodSandbox for \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\"" Aug 5 21:56:36.234735 containerd[2015]: time="2024-08-05T21:56:36.234579613Z" level=info msg="StopPodSandbox for \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\"" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.481 [INFO][4624] k8s.go 608: Cleaning up netns ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.483 [INFO][4624] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" iface="eth0" netns="/var/run/netns/cni-03ba7483-34ab-70e8-b78c-b72967063579" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.485 [INFO][4624] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" iface="eth0" netns="/var/run/netns/cni-03ba7483-34ab-70e8-b78c-b72967063579" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.487 [INFO][4624] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" iface="eth0" netns="/var/run/netns/cni-03ba7483-34ab-70e8-b78c-b72967063579" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.491 [INFO][4624] k8s.go 615: Releasing IP address(es) ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.491 [INFO][4624] utils.go 188: Calico CNI releasing IP address ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.611 [INFO][4637] ipam_plugin.go 411: Releasing address using handleID ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" HandleID="k8s-pod-network.3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.611 [INFO][4637] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.612 [INFO][4637] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.639 [WARNING][4637] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" HandleID="k8s-pod-network.3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.639 [INFO][4637] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" HandleID="k8s-pod-network.3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.663 [INFO][4637] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:36.676456 containerd[2015]: 2024-08-05 21:56:36.672 [INFO][4624] k8s.go 621: Teardown processing complete. ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:36.687636 containerd[2015]: time="2024-08-05T21:56:36.681686056Z" level=info msg="TearDown network for sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\" successfully" Aug 5 21:56:36.687636 containerd[2015]: time="2024-08-05T21:56:36.683613232Z" level=info msg="StopPodSandbox for \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\" returns successfully" Aug 5 21:56:36.690985 containerd[2015]: time="2024-08-05T21:56:36.688979560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4s94,Uid:d27212b5-d6e4-4e2e-9f51-5d83c596a8d0,Namespace:calico-system,Attempt:1,}" Aug 5 21:56:36.693876 systemd[1]: run-netns-cni\x2d03ba7483\x2d34ab\x2d70e8\x2db78c\x2db72967063579.mount: Deactivated successfully. 
Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.495 [INFO][4623] k8s.go 608: Cleaning up netns ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.496 [INFO][4623] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" iface="eth0" netns="/var/run/netns/cni-e233cad1-5b4b-06cb-4e2f-9f97967b4123" Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.497 [INFO][4623] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" iface="eth0" netns="/var/run/netns/cni-e233cad1-5b4b-06cb-4e2f-9f97967b4123" Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.499 [INFO][4623] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" iface="eth0" netns="/var/run/netns/cni-e233cad1-5b4b-06cb-4e2f-9f97967b4123" Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.500 [INFO][4623] k8s.go 615: Releasing IP address(es) ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.500 [INFO][4623] utils.go 188: Calico CNI releasing IP address ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.657 [INFO][4642] ipam_plugin.go 411: Releasing address using handleID ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" HandleID="k8s-pod-network.c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.658 [INFO][4642] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.666 [INFO][4642] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.701 [WARNING][4642] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" HandleID="k8s-pod-network.c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.702 [INFO][4642] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" HandleID="k8s-pod-network.c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.708 [INFO][4642] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:36.721412 containerd[2015]: 2024-08-05 21:56:36.716 [INFO][4623] k8s.go 621: Teardown processing complete. 
ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:36.725136 containerd[2015]: time="2024-08-05T21:56:36.724981672Z" level=info msg="TearDown network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\" successfully" Aug 5 21:56:36.725136 containerd[2015]: time="2024-08-05T21:56:36.725093992Z" level=info msg="StopPodSandbox for \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\" returns successfully" Aug 5 21:56:36.728600 containerd[2015]: time="2024-08-05T21:56:36.728418484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8876484d6-njtsj,Uid:8dc15cde-1f64-4a5a-924e-a115890fac64,Namespace:calico-system,Attempt:1,}" Aug 5 21:56:36.733253 systemd[1]: run-netns-cni\x2de233cad1\x2d5b4b\x2d06cb\x2d4e2f\x2d9f97967b4123.mount: Deactivated successfully. Aug 5 21:56:37.171744 systemd-networkd[1930]: vxlan.calico: Gained IPv6LL Aug 5 21:56:37.232533 containerd[2015]: time="2024-08-05T21:56:37.231280814Z" level=info msg="StopPodSandbox for \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\"" Aug 5 21:56:37.455235 (udev-worker)[4566]: Network interface NamePolicy= disabled on kernel command line. 
Aug 5 21:56:37.471137 systemd-networkd[1930]: calib2664148230: Link UP Aug 5 21:56:37.479750 systemd-networkd[1930]: calib2664148230: Gained carrier Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.022 [INFO][4662] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0 calico-kube-controllers-8876484d6- calico-system 8dc15cde-1f64-4a5a-924e-a115890fac64 728 0 2024-08-05 21:56:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8876484d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-69 calico-kube-controllers-8876484d6-njtsj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib2664148230 [] []}} ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Namespace="calico-system" Pod="calico-kube-controllers-8876484d6-njtsj" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.023 [INFO][4662] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Namespace="calico-system" Pod="calico-kube-controllers-8876484d6-njtsj" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.164 [INFO][4675] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" HandleID="k8s-pod-network.b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:37.597753 
containerd[2015]: 2024-08-05 21:56:37.221 [INFO][4675] ipam_plugin.go 264: Auto assigning IP ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" HandleID="k8s-pod-network.b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c4f80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-69", "pod":"calico-kube-controllers-8876484d6-njtsj", "timestamp":"2024-08-05 21:56:37.16444661 +0000 UTC"}, Hostname:"ip-172-31-26-69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.222 [INFO][4675] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.223 [INFO][4675] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.223 [INFO][4675] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-69' Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.235 [INFO][4675] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" host="ip-172-31-26-69" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.265 [INFO][4675] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-69" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.311 [INFO][4675] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.329 [INFO][4675] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.343 [INFO][4675] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.345 [INFO][4675] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" host="ip-172-31-26-69" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.374 [INFO][4675] ipam.go 1685: Creating new handle: k8s-pod-network.b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687 Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.399 [INFO][4675] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" host="ip-172-31-26-69" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.423 [INFO][4675] ipam.go 1216: Successfully claimed IPs: [192.168.34.65/26] block=192.168.34.64/26 
handle="k8s-pod-network.b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" host="ip-172-31-26-69" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.425 [INFO][4675] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.65/26] handle="k8s-pod-network.b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" host="ip-172-31-26-69" Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.426 [INFO][4675] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:37.597753 containerd[2015]: 2024-08-05 21:56:37.427 [INFO][4675] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.65/26] IPv6=[] ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" HandleID="k8s-pod-network.b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:37.599441 containerd[2015]: 2024-08-05 21:56:37.438 [INFO][4662] k8s.go 386: Populated endpoint ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Namespace="calico-system" Pod="calico-kube-controllers-8876484d6-njtsj" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0", GenerateName:"calico-kube-controllers-8876484d6-", Namespace:"calico-system", SelfLink:"", UID:"8dc15cde-1f64-4a5a-924e-a115890fac64", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8876484d6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"", Pod:"calico-kube-controllers-8876484d6-njtsj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2664148230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:37.599441 containerd[2015]: 2024-08-05 21:56:37.438 [INFO][4662] k8s.go 387: Calico CNI using IPs: [192.168.34.65/32] ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Namespace="calico-system" Pod="calico-kube-controllers-8876484d6-njtsj" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:37.599441 containerd[2015]: 2024-08-05 21:56:37.439 [INFO][4662] dataplane_linux.go 68: Setting the host side veth name to calib2664148230 ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Namespace="calico-system" Pod="calico-kube-controllers-8876484d6-njtsj" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:37.599441 containerd[2015]: 2024-08-05 21:56:37.472 [INFO][4662] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Namespace="calico-system" Pod="calico-kube-controllers-8876484d6-njtsj" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:37.599441 containerd[2015]: 2024-08-05 21:56:37.481 [INFO][4662] k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Namespace="calico-system" Pod="calico-kube-controllers-8876484d6-njtsj" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0", GenerateName:"calico-kube-controllers-8876484d6-", Namespace:"calico-system", SelfLink:"", UID:"8dc15cde-1f64-4a5a-924e-a115890fac64", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8876484d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687", Pod:"calico-kube-controllers-8876484d6-njtsj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2664148230", MAC:"f6:55:5b:23:2c:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:37.599441 containerd[2015]: 2024-08-05 21:56:37.575 [INFO][4662] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687" Namespace="calico-system" Pod="calico-kube-controllers-8876484d6-njtsj" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:37.791169 containerd[2015]: time="2024-08-05T21:56:37.789292253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:56:37.791169 containerd[2015]: time="2024-08-05T21:56:37.789436577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:37.791169 containerd[2015]: time="2024-08-05T21:56:37.789508025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:56:37.791169 containerd[2015]: time="2024-08-05T21:56:37.789548693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:37.885544 systemd-networkd[1930]: cali83328eacdf4: Link UP Aug 5 21:56:37.895613 systemd-networkd[1930]: cali83328eacdf4: Gained carrier Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.482 [INFO][4704] k8s.go 608: Cleaning up netns ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.482 [INFO][4704] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" iface="eth0" netns="/var/run/netns/cni-760c871f-1ea9-eb11-4b23-6eeb0f873e33" Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.493 [INFO][4704] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" iface="eth0" netns="/var/run/netns/cni-760c871f-1ea9-eb11-4b23-6eeb0f873e33" Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.499 [INFO][4704] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" iface="eth0" netns="/var/run/netns/cni-760c871f-1ea9-eb11-4b23-6eeb0f873e33" Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.499 [INFO][4704] k8s.go 615: Releasing IP address(es) ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.500 [INFO][4704] utils.go 188: Calico CNI releasing IP address ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.627 [INFO][4713] ipam_plugin.go 411: Releasing address using handleID ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" HandleID="k8s-pod-network.ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.629 [INFO][4713] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.818 [INFO][4713] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.868 [WARNING][4713] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" HandleID="k8s-pod-network.ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.868 [INFO][4713] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" HandleID="k8s-pod-network.ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.881 [INFO][4713] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:37.946626 containerd[2015]: 2024-08-05 21:56:37.928 [INFO][4704] k8s.go 621: Teardown processing complete. ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:37.958883 containerd[2015]: time="2024-08-05T21:56:37.956061822Z" level=info msg="TearDown network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\" successfully" Aug 5 21:56:37.958883 containerd[2015]: time="2024-08-05T21:56:37.956178510Z" level=info msg="StopPodSandbox for \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\" returns successfully" Aug 5 21:56:37.961407 systemd[1]: Started cri-containerd-b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687.scope - libcontainer container b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687. Aug 5 21:56:37.973269 containerd[2015]: time="2024-08-05T21:56:37.965901594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-t762x,Uid:d7c936aa-fa2e-4a71-ab28-36e483f65d67,Namespace:kube-system,Attempt:1,}" Aug 5 21:56:37.986391 systemd[1]: run-netns-cni\x2d760c871f\x2d1ea9\x2deb11\x2d4b23\x2d6eeb0f873e33.mount: Deactivated successfully. 
Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.008 [INFO][4652] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0 csi-node-driver- calico-system d27212b5-d6e4-4e2e-9f51-5d83c596a8d0 727 0 2024-08-05 21:56:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-26-69 csi-node-driver-k4s94 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali83328eacdf4 [] []}} ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Namespace="calico-system" Pod="csi-node-driver-k4s94" WorkloadEndpoint="ip--172--31--26--69-k8s-csi--node--driver--k4s94-" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.009 [INFO][4652] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Namespace="calico-system" Pod="csi-node-driver-k4s94" WorkloadEndpoint="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.179 [INFO][4679] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" HandleID="k8s-pod-network.8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.235 [INFO][4679] ipam_plugin.go 264: Auto assigning IP ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" HandleID="k8s-pod-network.8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" 
Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316540), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-69", "pod":"csi-node-driver-k4s94", "timestamp":"2024-08-05 21:56:37.179910134 +0000 UTC"}, Hostname:"ip-172-31-26-69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.235 [INFO][4679] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.426 [INFO][4679] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.427 [INFO][4679] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-69' Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.441 [INFO][4679] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" host="ip-172-31-26-69" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.494 [INFO][4679] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-69" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.582 [INFO][4679] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.613 [INFO][4679] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.635 [INFO][4679] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.635 [INFO][4679] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 
handle="k8s-pod-network.8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" host="ip-172-31-26-69" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.677 [INFO][4679] ipam.go 1685: Creating new handle: k8s-pod-network.8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.729 [INFO][4679] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" host="ip-172-31-26-69" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.815 [INFO][4679] ipam.go 1216: Successfully claimed IPs: [192.168.34.66/26] block=192.168.34.64/26 handle="k8s-pod-network.8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" host="ip-172-31-26-69" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.817 [INFO][4679] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.66/26] handle="k8s-pod-network.8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" host="ip-172-31-26-69" Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.818 [INFO][4679] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:56:38.042625 containerd[2015]: 2024-08-05 21:56:37.818 [INFO][4679] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.66/26] IPv6=[] ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" HandleID="k8s-pod-network.8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:38.043850 containerd[2015]: 2024-08-05 21:56:37.840 [INFO][4652] k8s.go 386: Populated endpoint ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Namespace="calico-system" Pod="csi-node-driver-k4s94" WorkloadEndpoint="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"", Pod:"csi-node-driver-k4s94", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali83328eacdf4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:38.043850 containerd[2015]: 2024-08-05 21:56:37.842 [INFO][4652] k8s.go 387: Calico CNI using IPs: [192.168.34.66/32] ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Namespace="calico-system" Pod="csi-node-driver-k4s94" WorkloadEndpoint="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:38.043850 containerd[2015]: 2024-08-05 21:56:37.842 [INFO][4652] dataplane_linux.go 68: Setting the host side veth name to cali83328eacdf4 ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Namespace="calico-system" Pod="csi-node-driver-k4s94" WorkloadEndpoint="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:38.043850 containerd[2015]: 2024-08-05 21:56:37.898 [INFO][4652] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Namespace="calico-system" Pod="csi-node-driver-k4s94" WorkloadEndpoint="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:38.043850 containerd[2015]: 2024-08-05 21:56:37.917 [INFO][4652] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Namespace="calico-system" Pod="csi-node-driver-k4s94" WorkloadEndpoint="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 10, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d", Pod:"csi-node-driver-k4s94", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83328eacdf4", MAC:"02:99:dc:90:7c:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:38.043850 containerd[2015]: 2024-08-05 21:56:38.007 [INFO][4652] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d" Namespace="calico-system" Pod="csi-node-driver-k4s94" WorkloadEndpoint="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:38.263081 containerd[2015]: time="2024-08-05T21:56:38.262274031Z" level=info msg="StopPodSandbox for \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\"" Aug 5 21:56:38.364121 containerd[2015]: time="2024-08-05T21:56:38.361926892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:56:38.364121 containerd[2015]: time="2024-08-05T21:56:38.362045044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:38.364121 containerd[2015]: time="2024-08-05T21:56:38.362083828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:56:38.364121 containerd[2015]: time="2024-08-05T21:56:38.362114428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:38.421438 containerd[2015]: time="2024-08-05T21:56:38.421300180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8876484d6-njtsj,Uid:8dc15cde-1f64-4a5a-924e-a115890fac64,Namespace:calico-system,Attempt:1,} returns sandbox id \"b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687\"" Aug 5 21:56:38.436754 containerd[2015]: time="2024-08-05T21:56:38.436655128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 21:56:38.479102 systemd[1]: Started cri-containerd-8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d.scope - libcontainer container 8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d. Aug 5 21:56:38.642766 containerd[2015]: time="2024-08-05T21:56:38.641323889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4s94,Uid:d27212b5-d6e4-4e2e-9f51-5d83c596a8d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d\"" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.597 [INFO][4817] k8s.go 608: Cleaning up netns ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.599 [INFO][4817] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" iface="eth0" netns="/var/run/netns/cni-a038177a-8a3c-91fa-fdb4-c8b8d3fa6d76" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.599 [INFO][4817] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" iface="eth0" netns="/var/run/netns/cni-a038177a-8a3c-91fa-fdb4-c8b8d3fa6d76" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.600 [INFO][4817] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" iface="eth0" netns="/var/run/netns/cni-a038177a-8a3c-91fa-fdb4-c8b8d3fa6d76" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.600 [INFO][4817] k8s.go 615: Releasing IP address(es) ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.600 [INFO][4817] utils.go 188: Calico CNI releasing IP address ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.743 [INFO][4859] ipam_plugin.go 411: Releasing address using handleID ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" HandleID="k8s-pod-network.123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.744 [INFO][4859] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.783 [INFO][4859] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.820 [WARNING][4859] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" HandleID="k8s-pod-network.123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.820 [INFO][4859] ipam_plugin.go 439: Releasing address using workloadID ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" HandleID="k8s-pod-network.123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.828 [INFO][4859] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.836 [INFO][4817] k8s.go 621: Teardown processing complete. ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:38.853249 containerd[2015]: time="2024-08-05T21:56:38.853119318Z" level=info msg="TearDown network for sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\" successfully" Aug 5 21:56:38.853651 containerd[2015]: time="2024-08-05T21:56:38.853392330Z" level=info msg="StopPodSandbox for \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\" returns successfully" Aug 5 21:56:38.856994 systemd-networkd[1930]: cali1fec380de87: Link UP Aug 5 21:56:38.860835 systemd[1]: run-netns-cni\x2da038177a\x2d8a3c\x2d91fa\x2dfdb4\x2dc8b8d3fa6d76.mount: Deactivated successfully. 
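The Calico records above carry their structured data as `key="value"` pairs (ContainerID, HandleID, Workload) plus a `[LEVEL][goroutine]` tag. A minimal Python sketch for pulling those fields out of one such line; the regexes are assumptions inferred from the format shown here, not an official parser:

```python
import re

# One record trimmed from the log above (Calico CNI teardown completion).
line = ('Aug 5 21:56:38.845079 containerd[2015]: 2024-08-05 21:56:38.836 '
        '[INFO][4817] k8s.go 621: Teardown processing complete. '
        'ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014"')

# key="value" pairs as they appear in the Calico log lines.
fields = dict(re.findall(r'(\w+)="([^"]*)"', line))

# Severity and numeric tag, e.g. [INFO][4817].
level = re.search(r'\[(\w+)\]\[(\d+)\]', line)

print(fields["ContainerID"][:12])  # short ID: 123234ca1f6b
print(level.group(1))              # INFO
```

The same two patterns apply to every `[INFO]`/`[WARNING]` Calico line in this stretch of the log.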
Aug 5 21:56:38.862373 containerd[2015]: time="2024-08-05T21:56:38.860844750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-grq8t,Uid:42fbd30e-8f61-40db-b477-f13005157c9f,Namespace:kube-system,Attempt:1,}" Aug 5 21:56:38.869283 systemd-networkd[1930]: cali1fec380de87: Gained carrier Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.274 [INFO][4766] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0 coredns-5dd5756b68- kube-system d7c936aa-fa2e-4a71-ab28-36e483f65d67 735 0 2024-08-05 21:56:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-69 coredns-5dd5756b68-t762x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1fec380de87 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Namespace="kube-system" Pod="coredns-5dd5756b68-t762x" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.274 [INFO][4766] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Namespace="kube-system" Pod="coredns-5dd5756b68-t762x" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.520 [INFO][4816] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" HandleID="k8s-pod-network.d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.572 
[INFO][4816] ipam_plugin.go 264: Auto assigning IP ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" HandleID="k8s-pod-network.d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000115a00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-69", "pod":"coredns-5dd5756b68-t762x", "timestamp":"2024-08-05 21:56:38.520791473 +0000 UTC"}, Hostname:"ip-172-31-26-69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.573 [INFO][4816] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.573 [INFO][4816] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.573 [INFO][4816] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-69' Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.589 [INFO][4816] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" host="ip-172-31-26-69" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.623 [INFO][4816] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-69" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.640 [INFO][4816] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.654 [INFO][4816] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.664 [INFO][4816] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.664 [INFO][4816] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" host="ip-172-31-26-69" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.679 [INFO][4816] ipam.go 1685: Creating new handle: k8s-pod-network.d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.754 [INFO][4816] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" host="ip-172-31-26-69" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.782 [INFO][4816] ipam.go 1216: Successfully claimed IPs: [192.168.34.67/26] block=192.168.34.64/26 
handle="k8s-pod-network.d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" host="ip-172-31-26-69" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.782 [INFO][4816] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.67/26] handle="k8s-pod-network.d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" host="ip-172-31-26-69" Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.782 [INFO][4816] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:38.951106 containerd[2015]: 2024-08-05 21:56:38.782 [INFO][4816] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.67/26] IPv6=[] ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" HandleID="k8s-pod-network.d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:38.956910 containerd[2015]: 2024-08-05 21:56:38.801 [INFO][4766] k8s.go 386: Populated endpoint ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Namespace="kube-system" Pod="coredns-5dd5756b68-t762x" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d7c936aa-fa2e-4a71-ab28-36e483f65d67", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"", Pod:"coredns-5dd5756b68-t762x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fec380de87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:38.956910 containerd[2015]: 2024-08-05 21:56:38.807 [INFO][4766] k8s.go 387: Calico CNI using IPs: [192.168.34.67/32] ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Namespace="kube-system" Pod="coredns-5dd5756b68-t762x" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:38.956910 containerd[2015]: 2024-08-05 21:56:38.807 [INFO][4766] dataplane_linux.go 68: Setting the host side veth name to cali1fec380de87 ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Namespace="kube-system" Pod="coredns-5dd5756b68-t762x" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:38.956910 containerd[2015]: 2024-08-05 21:56:38.874 [INFO][4766] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Namespace="kube-system" Pod="coredns-5dd5756b68-t762x" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 
21:56:38.956910 containerd[2015]: 2024-08-05 21:56:38.877 [INFO][4766] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Namespace="kube-system" Pod="coredns-5dd5756b68-t762x" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d7c936aa-fa2e-4a71-ab28-36e483f65d67", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c", Pod:"coredns-5dd5756b68-t762x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fec380de87", MAC:"ae:8e:89:c3:4d:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:38.956910 containerd[2015]: 2024-08-05 21:56:38.914 [INFO][4766] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c" Namespace="kube-system" Pod="coredns-5dd5756b68-t762x" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:39.026490 systemd-networkd[1930]: cali83328eacdf4: Gained IPv6LL Aug 5 21:56:39.195087 containerd[2015]: time="2024-08-05T21:56:39.194748808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:56:39.195087 containerd[2015]: time="2024-08-05T21:56:39.194903032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:39.195087 containerd[2015]: time="2024-08-05T21:56:39.194946148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:56:39.195087 containerd[2015]: time="2024-08-05T21:56:39.194971948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:39.255917 systemd[1]: Started cri-containerd-d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c.scope - libcontainer container d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c. 
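The IPAM records above show Calico handing out sequential addresses from the host-affine block 192.168.34.64/26 (.66 to csi-node-driver, .67 and .68 to the coredns pods). A simplified stand-in for that "lowest free address in the block" behavior; this is not Calico's actual allocator (the real one persists allocations in the datastore under the host-wide lock seen in the log), and the assumption that .65 was already taken is illustrative:

```python
import ipaddress

def next_free(block: str, allocated: set) -> str:
    """Return the lowest unallocated host address in a CIDR block.
    Simplified sketch of block-based IPAM; real Calico records each
    claim in its datastore before handing the address to the pod."""
    net = ipaddress.ip_network(block)
    for addr in net.hosts():
        if str(addr) not in allocated:
            return str(addr)
    raise RuntimeError("block %s exhausted" % block)

# Mirrors the sequence in the log: .66, .67 already claimed, .68 next.
used = {"192.168.34.65", "192.168.34.66", "192.168.34.67"}
print(next_free("192.168.34.64/26", used))  # 192.168.34.68
```

A /26 block yields 62 host addresses (.65 through .126), which matches the per-node block affinity the log confirms before each assignment.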
Aug 5 21:56:39.391582 containerd[2015]: time="2024-08-05T21:56:39.391311785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-t762x,Uid:d7c936aa-fa2e-4a71-ab28-36e483f65d67,Namespace:kube-system,Attempt:1,} returns sandbox id \"d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c\"" Aug 5 21:56:39.410965 containerd[2015]: time="2024-08-05T21:56:39.410878433Z" level=info msg="CreateContainer within sandbox \"d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:56:39.451812 systemd-networkd[1930]: califa06c92863b: Link UP Aug 5 21:56:39.456861 systemd-networkd[1930]: califa06c92863b: Gained carrier Aug 5 21:56:39.473920 systemd-networkd[1930]: calib2664148230: Gained IPv6LL Aug 5 21:56:39.498230 containerd[2015]: time="2024-08-05T21:56:39.497641074Z" level=info msg="CreateContainer within sandbox \"d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7591e2725a2f807f5e217bcb74859d0c85cf95f9204fff2e29158f1237e1f976\"" Aug 5 21:56:39.504688 containerd[2015]: time="2024-08-05T21:56:39.504185934Z" level=info msg="StartContainer for \"7591e2725a2f807f5e217bcb74859d0c85cf95f9204fff2e29158f1237e1f976\"" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.159 [INFO][4873] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0 coredns-5dd5756b68- kube-system 42fbd30e-8f61-40db-b477-f13005157c9f 742 0 2024-08-05 21:56:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-69 coredns-5dd5756b68-grq8t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa06c92863b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] []}} ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Namespace="kube-system" Pod="coredns-5dd5756b68-grq8t" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.159 [INFO][4873] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Namespace="kube-system" Pod="coredns-5dd5756b68-grq8t" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.282 [INFO][4913] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" HandleID="k8s-pod-network.a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.320 [INFO][4913] ipam_plugin.go 264: Auto assigning IP ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" HandleID="k8s-pod-network.a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001012d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-69", "pod":"coredns-5dd5756b68-grq8t", "timestamp":"2024-08-05 21:56:39.282601276 +0000 UTC"}, Hostname:"ip-172-31-26-69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.321 [INFO][4913] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.323 [INFO][4913] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.323 [INFO][4913] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-69' Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.328 [INFO][4913] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" host="ip-172-31-26-69" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.337 [INFO][4913] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-69" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.361 [INFO][4913] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.369 [INFO][4913] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.378 [INFO][4913] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.378 [INFO][4913] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" host="ip-172-31-26-69" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.383 [INFO][4913] ipam.go 1685: Creating new handle: k8s-pod-network.a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6 Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.402 [INFO][4913] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" host="ip-172-31-26-69" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.424 [INFO][4913] ipam.go 1216: Successfully 
claimed IPs: [192.168.34.68/26] block=192.168.34.64/26 handle="k8s-pod-network.a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" host="ip-172-31-26-69" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.425 [INFO][4913] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.68/26] handle="k8s-pod-network.a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" host="ip-172-31-26-69" Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.425 [INFO][4913] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:39.517268 containerd[2015]: 2024-08-05 21:56:39.426 [INFO][4913] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.68/26] IPv6=[] ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" HandleID="k8s-pod-network.a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:39.522765 containerd[2015]: 2024-08-05 21:56:39.435 [INFO][4873] k8s.go 386: Populated endpoint ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Namespace="kube-system" Pod="coredns-5dd5756b68-grq8t" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"42fbd30e-8f61-40db-b477-f13005157c9f", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"", Pod:"coredns-5dd5756b68-grq8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa06c92863b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:39.522765 containerd[2015]: 2024-08-05 21:56:39.438 [INFO][4873] k8s.go 387: Calico CNI using IPs: [192.168.34.68/32] ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Namespace="kube-system" Pod="coredns-5dd5756b68-grq8t" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:39.522765 containerd[2015]: 2024-08-05 21:56:39.438 [INFO][4873] dataplane_linux.go 68: Setting the host side veth name to califa06c92863b ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Namespace="kube-system" Pod="coredns-5dd5756b68-grq8t" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:39.522765 containerd[2015]: 2024-08-05 21:56:39.455 [INFO][4873] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Namespace="kube-system" Pod="coredns-5dd5756b68-grq8t" 
WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:39.522765 containerd[2015]: 2024-08-05 21:56:39.461 [INFO][4873] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Namespace="kube-system" Pod="coredns-5dd5756b68-grq8t" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"42fbd30e-8f61-40db-b477-f13005157c9f", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6", Pod:"coredns-5dd5756b68-grq8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa06c92863b", MAC:"62:07:da:d0:6a:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:39.522765 containerd[2015]: 2024-08-05 21:56:39.507 [INFO][4873] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6" Namespace="kube-system" Pod="coredns-5dd5756b68-grq8t" WorkloadEndpoint="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:39.618097 systemd[1]: Started cri-containerd-7591e2725a2f807f5e217bcb74859d0c85cf95f9204fff2e29158f1237e1f976.scope - libcontainer container 7591e2725a2f807f5e217bcb74859d0c85cf95f9204fff2e29158f1237e1f976. Aug 5 21:56:39.705852 containerd[2015]: time="2024-08-05T21:56:39.705354091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:56:39.705852 containerd[2015]: time="2024-08-05T21:56:39.705669643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:39.705852 containerd[2015]: time="2024-08-05T21:56:39.705727867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:56:39.705852 containerd[2015]: time="2024-08-05T21:56:39.705783511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:56:39.783355 systemd[1]: Started cri-containerd-a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6.scope - libcontainer container a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6. 
Aug 5 21:56:39.849084 containerd[2015]: time="2024-08-05T21:56:39.847683355Z" level=info msg="StartContainer for \"7591e2725a2f807f5e217bcb74859d0c85cf95f9204fff2e29158f1237e1f976\" returns successfully" Aug 5 21:56:39.941051 containerd[2015]: time="2024-08-05T21:56:39.940926320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-grq8t,Uid:42fbd30e-8f61-40db-b477-f13005157c9f,Namespace:kube-system,Attempt:1,} returns sandbox id \"a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6\"" Aug 5 21:56:39.956992 containerd[2015]: time="2024-08-05T21:56:39.956760548Z" level=info msg="CreateContainer within sandbox \"a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:56:40.016775 containerd[2015]: time="2024-08-05T21:56:40.016661632Z" level=info msg="CreateContainer within sandbox \"a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d2dee0ff43169c9ae80e551a6b86e472c0464da4ac8348554306f04f44910d2\"" Aug 5 21:56:40.022503 containerd[2015]: time="2024-08-05T21:56:40.019332196Z" level=info msg="StartContainer for \"4d2dee0ff43169c9ae80e551a6b86e472c0464da4ac8348554306f04f44910d2\"" Aug 5 21:56:40.142301 systemd[1]: Started cri-containerd-4d2dee0ff43169c9ae80e551a6b86e472c0464da4ac8348554306f04f44910d2.scope - libcontainer container 4d2dee0ff43169c9ae80e551a6b86e472c0464da4ac8348554306f04f44910d2. 
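The kubelet pod_startup_latency_tracker records that follow report podStartSLOduration as the observed-running time minus the pod creation time. A small sketch reproducing that arithmetic with timestamps copied from the log (pulling time is excluded here because firstStartedPulling/lastFinishedPulling are zero in these records):

```python
from datetime import datetime, timezone

# podCreationTimestamp="2024-08-05 21:56:01 +0000 UTC"
created = datetime(2024, 8, 5, 21, 56, 1, tzinfo=timezone.utc)

# observedRunningTime="2024-08-05 21:56:41.013655405 +0000 UTC",
# trimmed to microseconds since strptime's %f takes at most 6 digits.
observed = datetime.strptime(
    "2024-08-05 21:56:41.013655405"[:26],
    "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

slo = (observed - created).total_seconds()
print("%.6f" % slo)  # 40.013655, matching podStartSLOduration=40.013655405
```

The ~40 s figure for both coredns pods reflects the CNI being unavailable until the Calico sandboxes above finished setting up, not slow image pulls.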
Aug 5 21:56:40.344719 containerd[2015]: time="2024-08-05T21:56:40.344523858Z" level=info msg="StartContainer for \"4d2dee0ff43169c9ae80e551a6b86e472c0464da4ac8348554306f04f44910d2\" returns successfully" Aug 5 21:56:40.626133 systemd-networkd[1930]: cali1fec380de87: Gained IPv6LL Aug 5 21:56:41.014170 kubelet[3315]: I0805 21:56:41.013882 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-grq8t" podStartSLOduration=40.013655405 podCreationTimestamp="2024-08-05 21:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:56:40.934426653 +0000 UTC m=+53.067984313" watchObservedRunningTime="2024-08-05 21:56:41.013655405 +0000 UTC m=+53.147213173" Aug 5 21:56:41.129762 kubelet[3315]: I0805 21:56:41.128430 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-t762x" podStartSLOduration=40.12796281 podCreationTimestamp="2024-08-05 21:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:56:41.009863753 +0000 UTC m=+53.143421845" watchObservedRunningTime="2024-08-05 21:56:41.12796281 +0000 UTC m=+53.261520482" Aug 5 21:56:41.330739 systemd-networkd[1930]: califa06c92863b: Gained IPv6LL Aug 5 21:56:42.583401 containerd[2015]: time="2024-08-05T21:56:42.581234097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:42.585519 containerd[2015]: time="2024-08-05T21:56:42.584710329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Aug 5 21:56:42.596550 containerd[2015]: time="2024-08-05T21:56:42.596291781Z" level=info msg="ImageCreate event 
name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:42.620707 containerd[2015]: time="2024-08-05T21:56:42.620324145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:42.627206 containerd[2015]: time="2024-08-05T21:56:42.626931753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 4.190097957s" Aug 5 21:56:42.627878 containerd[2015]: time="2024-08-05T21:56:42.627142653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Aug 5 21:56:42.634500 containerd[2015]: time="2024-08-05T21:56:42.633283941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 21:56:42.707866 containerd[2015]: time="2024-08-05T21:56:42.707789385Z" level=info msg="CreateContainer within sandbox \"b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 21:56:42.764427 containerd[2015]: time="2024-08-05T21:56:42.764317198Z" level=info msg="CreateContainer within sandbox \"b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3f46418094ce3631babad84e5403c7c9c184fe369235729fc0175ae14af53a49\"" Aug 5 21:56:42.768769 containerd[2015]: 
time="2024-08-05T21:56:42.768551518Z" level=info msg="StartContainer for \"3f46418094ce3631babad84e5403c7c9c184fe369235729fc0175ae14af53a49\"" Aug 5 21:56:42.929696 systemd[1]: Started cri-containerd-3f46418094ce3631babad84e5403c7c9c184fe369235729fc0175ae14af53a49.scope - libcontainer container 3f46418094ce3631babad84e5403c7c9c184fe369235729fc0175ae14af53a49. Aug 5 21:56:43.267550 systemd[1]: Started sshd@7-172.31.26.69:22-147.75.109.163:47216.service - OpenSSH per-connection server daemon (147.75.109.163:47216). Aug 5 21:56:43.425949 ntpd[1990]: Listen normally on 8 vxlan.calico 192.168.34.64:123 Aug 5 21:56:43.427618 ntpd[1990]: 5 Aug 21:56:43 ntpd[1990]: Listen normally on 8 vxlan.calico 192.168.34.64:123 Aug 5 21:56:43.427618 ntpd[1990]: 5 Aug 21:56:43 ntpd[1990]: Listen normally on 9 vxlan.calico [fe80::64b1:4aff:fee6:a1dc%4]:123 Aug 5 21:56:43.427618 ntpd[1990]: 5 Aug 21:56:43 ntpd[1990]: Listen normally on 10 calib2664148230 [fe80::ecee:eeff:feee:eeee%7]:123 Aug 5 21:56:43.427618 ntpd[1990]: 5 Aug 21:56:43 ntpd[1990]: Listen normally on 11 cali83328eacdf4 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 21:56:43.427618 ntpd[1990]: 5 Aug 21:56:43 ntpd[1990]: Listen normally on 12 cali1fec380de87 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 21:56:43.427618 ntpd[1990]: 5 Aug 21:56:43 ntpd[1990]: Listen normally on 13 califa06c92863b [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 21:56:43.426187 ntpd[1990]: Listen normally on 9 vxlan.calico [fe80::64b1:4aff:fee6:a1dc%4]:123 Aug 5 21:56:43.426297 ntpd[1990]: Listen normally on 10 calib2664148230 [fe80::ecee:eeff:feee:eeee%7]:123 Aug 5 21:56:43.426383 ntpd[1990]: Listen normally on 11 cali83328eacdf4 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 21:56:43.426452 ntpd[1990]: Listen normally on 12 cali1fec380de87 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 21:56:43.426576 ntpd[1990]: Listen normally on 13 califa06c92863b [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 21:56:43.527161 sshd[5116]: Accepted publickey for core from 147.75.109.163 port 47216 
ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:56:43.535214 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:56:43.551137 containerd[2015]: time="2024-08-05T21:56:43.550988194Z" level=info msg="StartContainer for \"3f46418094ce3631babad84e5403c7c9c184fe369235729fc0175ae14af53a49\" returns successfully" Aug 5 21:56:43.557427 systemd-logind[1995]: New session 8 of user core. Aug 5 21:56:43.567444 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 21:56:43.990271 sshd[5116]: pam_unix(sshd:session): session closed for user core Aug 5 21:56:44.019177 systemd-logind[1995]: Session 8 logged out. Waiting for processes to exit. Aug 5 21:56:44.021986 systemd[1]: sshd@7-172.31.26.69:22-147.75.109.163:47216.service: Deactivated successfully. Aug 5 21:56:44.036778 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 21:56:44.043281 systemd-logind[1995]: Removed session 8. Aug 5 21:56:45.355579 containerd[2015]: time="2024-08-05T21:56:45.354660083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:45.360503 containerd[2015]: time="2024-08-05T21:56:45.359184083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Aug 5 21:56:45.363037 containerd[2015]: time="2024-08-05T21:56:45.362955863Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:45.375405 containerd[2015]: time="2024-08-05T21:56:45.375285395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:45.380144 containerd[2015]: time="2024-08-05T21:56:45.379762403Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 2.742964862s" Aug 5 21:56:45.380144 containerd[2015]: time="2024-08-05T21:56:45.379843007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Aug 5 21:56:45.396304 containerd[2015]: time="2024-08-05T21:56:45.395080691Z" level=info msg="CreateContainer within sandbox \"8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 21:56:45.419407 kubelet[3315]: I0805 21:56:45.419341 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8876484d6-njtsj" podStartSLOduration=31.215810814 podCreationTimestamp="2024-08-05 21:56:10 +0000 UTC" firstStartedPulling="2024-08-05 21:56:38.427603096 +0000 UTC m=+50.561160744" lastFinishedPulling="2024-08-05 21:56:42.631057809 +0000 UTC m=+54.764615469" observedRunningTime="2024-08-05 21:56:44.05041988 +0000 UTC m=+56.183977552" watchObservedRunningTime="2024-08-05 21:56:45.419265539 +0000 UTC m=+57.552823235" Aug 5 21:56:45.449624 containerd[2015]: time="2024-08-05T21:56:45.447145091Z" level=info msg="CreateContainer within sandbox \"8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"08c721299591a231abcedf8c9c807edbebc73dd179bbc66f6daa6946ee8af4b3\"" Aug 5 21:56:45.449624 containerd[2015]: time="2024-08-05T21:56:45.448450151Z" level=info msg="StartContainer for \"08c721299591a231abcedf8c9c807edbebc73dd179bbc66f6daa6946ee8af4b3\"" Aug 5 21:56:45.579198 systemd[1]: 
Started cri-containerd-08c721299591a231abcedf8c9c807edbebc73dd179bbc66f6daa6946ee8af4b3.scope - libcontainer container 08c721299591a231abcedf8c9c807edbebc73dd179bbc66f6daa6946ee8af4b3. Aug 5 21:56:45.792530 containerd[2015]: time="2024-08-05T21:56:45.791849173Z" level=info msg="StartContainer for \"08c721299591a231abcedf8c9c807edbebc73dd179bbc66f6daa6946ee8af4b3\" returns successfully" Aug 5 21:56:45.799456 containerd[2015]: time="2024-08-05T21:56:45.798932137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 21:56:47.510638 containerd[2015]: time="2024-08-05T21:56:47.508790977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:47.512184 containerd[2015]: time="2024-08-05T21:56:47.512084353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Aug 5 21:56:47.512996 containerd[2015]: time="2024-08-05T21:56:47.512912365Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:47.521807 containerd[2015]: time="2024-08-05T21:56:47.521658901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:56:47.526980 containerd[2015]: time="2024-08-05T21:56:47.526865521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", 
size \"10915087\" in 1.727829692s" Aug 5 21:56:47.527503 containerd[2015]: time="2024-08-05T21:56:47.527226745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Aug 5 21:56:47.534171 containerd[2015]: time="2024-08-05T21:56:47.534113701Z" level=info msg="CreateContainer within sandbox \"8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 21:56:47.577776 containerd[2015]: time="2024-08-05T21:56:47.577132370Z" level=info msg="CreateContainer within sandbox \"8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"659c950a84ba1148b979b1cf390d1f773d6424b8276779fabaaceb626b87d6e2\"" Aug 5 21:56:47.587661 containerd[2015]: time="2024-08-05T21:56:47.585878174Z" level=info msg="StartContainer for \"659c950a84ba1148b979b1cf390d1f773d6424b8276779fabaaceb626b87d6e2\"" Aug 5 21:56:47.590153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3260989326.mount: Deactivated successfully. Aug 5 21:56:47.743027 systemd[1]: run-containerd-runc-k8s.io-659c950a84ba1148b979b1cf390d1f773d6424b8276779fabaaceb626b87d6e2-runc.KaYHEf.mount: Deactivated successfully. Aug 5 21:56:47.767163 systemd[1]: Started cri-containerd-659c950a84ba1148b979b1cf390d1f773d6424b8276779fabaaceb626b87d6e2.scope - libcontainer container 659c950a84ba1148b979b1cf390d1f773d6424b8276779fabaaceb626b87d6e2. 
Aug 5 21:56:47.896415 containerd[2015]: time="2024-08-05T21:56:47.896307531Z" level=info msg="StartContainer for \"659c950a84ba1148b979b1cf390d1f773d6424b8276779fabaaceb626b87d6e2\" returns successfully" Aug 5 21:56:48.302564 containerd[2015]: time="2024-08-05T21:56:48.300398953Z" level=info msg="StopPodSandbox for \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\"" Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.478 [WARNING][5274] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0", GenerateName:"calico-kube-controllers-8876484d6-", Namespace:"calico-system", SelfLink:"", UID:"8dc15cde-1f64-4a5a-924e-a115890fac64", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8876484d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687", Pod:"calico-kube-controllers-8876484d6-njtsj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2664148230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.479 [INFO][5274] k8s.go 608: Cleaning up netns ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.480 [INFO][5274] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" iface="eth0" netns="" Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.482 [INFO][5274] k8s.go 615: Releasing IP address(es) ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.482 [INFO][5274] utils.go 188: Calico CNI releasing IP address ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.559 [INFO][5280] ipam_plugin.go 411: Releasing address using handleID ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" HandleID="k8s-pod-network.c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.560 [INFO][5280] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.560 [INFO][5280] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.580 [WARNING][5280] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" HandleID="k8s-pod-network.c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.580 [INFO][5280] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" HandleID="k8s-pod-network.c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.587 [INFO][5280] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:48.596574 containerd[2015]: 2024-08-05 21:56:48.591 [INFO][5274] k8s.go 621: Teardown processing complete. ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:48.600813 containerd[2015]: time="2024-08-05T21:56:48.599666067Z" level=info msg="TearDown network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\" successfully" Aug 5 21:56:48.600813 containerd[2015]: time="2024-08-05T21:56:48.599734479Z" level=info msg="StopPodSandbox for \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\" returns successfully" Aug 5 21:56:48.601492 containerd[2015]: time="2024-08-05T21:56:48.601371639Z" level=info msg="RemovePodSandbox for \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\"" Aug 5 21:56:48.601643 containerd[2015]: time="2024-08-05T21:56:48.601494543Z" level=info msg="Forcibly stopping sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\"" Aug 5 21:56:48.660549 kubelet[3315]: I0805 21:56:48.660412 3315 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 
Aug 5 21:56:48.664582 kubelet[3315]: I0805 21:56:48.663695 3315 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.796 [WARNING][5298] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0", GenerateName:"calico-kube-controllers-8876484d6-", Namespace:"calico-system", SelfLink:"", UID:"8dc15cde-1f64-4a5a-924e-a115890fac64", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8876484d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"b4465f90aff4da23db4b9ff9987adb0308c9d7e10bdd4563545b0461e85e1687", Pod:"calico-kube-controllers-8876484d6-njtsj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2664148230", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.799 [INFO][5298] k8s.go 608: Cleaning up netns ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.799 [INFO][5298] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" iface="eth0" netns="" Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.799 [INFO][5298] k8s.go 615: Releasing IP address(es) ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.799 [INFO][5298] utils.go 188: Calico CNI releasing IP address ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.872 [INFO][5305] ipam_plugin.go 411: Releasing address using handleID ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" HandleID="k8s-pod-network.c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.873 [INFO][5305] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.873 [INFO][5305] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.899 [WARNING][5305] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" HandleID="k8s-pod-network.c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.900 [INFO][5305] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" HandleID="k8s-pod-network.c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Workload="ip--172--31--26--69-k8s-calico--kube--controllers--8876484d6--njtsj-eth0" Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.909 [INFO][5305] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:48.924656 containerd[2015]: 2024-08-05 21:56:48.916 [INFO][5298] k8s.go 621: Teardown processing complete. ContainerID="c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2" Aug 5 21:56:48.924656 containerd[2015]: time="2024-08-05T21:56:48.921751408Z" level=info msg="TearDown network for sandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\" successfully" Aug 5 21:56:48.930345 containerd[2015]: time="2024-08-05T21:56:48.930208732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 21:56:48.930646 containerd[2015]: time="2024-08-05T21:56:48.930418912Z" level=info msg="RemovePodSandbox \"c7e0f8cefa5ee89d841a346ab1582d956de48a31fb9e4b9db09ea4ec0df7b1f2\" returns successfully" Aug 5 21:56:48.935983 containerd[2015]: time="2024-08-05T21:56:48.935771536Z" level=info msg="StopPodSandbox for \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\"" Aug 5 21:56:49.052129 systemd[1]: Started sshd@8-172.31.26.69:22-147.75.109.163:57972.service - OpenSSH per-connection server daemon (147.75.109.163:57972). Aug 5 21:56:49.306067 sshd[5329]: Accepted publickey for core from 147.75.109.163 port 57972 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:56:49.315894 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.103 [WARNING][5323] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d", Pod:"csi-node-driver-k4s94", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83328eacdf4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.106 [INFO][5323] k8s.go 608: Cleaning up netns ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.107 [INFO][5323] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" iface="eth0" netns="" Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.107 [INFO][5323] k8s.go 615: Releasing IP address(es) ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.107 [INFO][5323] utils.go 188: Calico CNI releasing IP address ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.229 [INFO][5331] ipam_plugin.go 411: Releasing address using handleID ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" HandleID="k8s-pod-network.3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.230 [INFO][5331] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.230 [INFO][5331] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.281 [WARNING][5331] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" HandleID="k8s-pod-network.3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.281 [INFO][5331] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" HandleID="k8s-pod-network.3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.298 [INFO][5331] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:49.325917 containerd[2015]: 2024-08-05 21:56:49.318 [INFO][5323] k8s.go 621: Teardown processing complete. 
ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:49.330259 containerd[2015]: time="2024-08-05T21:56:49.325963322Z" level=info msg="TearDown network for sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\" successfully" Aug 5 21:56:49.330259 containerd[2015]: time="2024-08-05T21:56:49.326004974Z" level=info msg="StopPodSandbox for \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\" returns successfully" Aug 5 21:56:49.330259 containerd[2015]: time="2024-08-05T21:56:49.330042638Z" level=info msg="RemovePodSandbox for \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\"" Aug 5 21:56:49.335921 containerd[2015]: time="2024-08-05T21:56:49.330108134Z" level=info msg="Forcibly stopping sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\"" Aug 5 21:56:49.345659 systemd-logind[1995]: New session 9 of user core. Aug 5 21:56:49.356202 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.595 [WARNING][5354] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d27212b5-d6e4-4e2e-9f51-5d83c596a8d0", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"8ca2b955a322d7ba4e46810dc08b6553418072bbcaa49d9c3047e2a825773d9d", Pod:"csi-node-driver-k4s94", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83328eacdf4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.597 [INFO][5354] k8s.go 608: Cleaning up netns ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.598 [INFO][5354] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" iface="eth0" netns="" Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.598 [INFO][5354] k8s.go 615: Releasing IP address(es) ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.598 [INFO][5354] utils.go 188: Calico CNI releasing IP address ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.676 [INFO][5368] ipam_plugin.go 411: Releasing address using handleID ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" HandleID="k8s-pod-network.3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.677 [INFO][5368] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.677 [INFO][5368] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.727 [WARNING][5368] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" HandleID="k8s-pod-network.3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.727 [INFO][5368] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" HandleID="k8s-pod-network.3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Workload="ip--172--31--26--69-k8s-csi--node--driver--k4s94-eth0" Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.752 [INFO][5368] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:49.765303 containerd[2015]: 2024-08-05 21:56:49.758 [INFO][5354] k8s.go 621: Teardown processing complete. ContainerID="3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a" Aug 5 21:56:49.765303 containerd[2015]: time="2024-08-05T21:56:49.764816729Z" level=info msg="TearDown network for sandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\" successfully" Aug 5 21:56:49.780024 containerd[2015]: time="2024-08-05T21:56:49.779617397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 21:56:49.780024 containerd[2015]: time="2024-08-05T21:56:49.779775149Z" level=info msg="RemovePodSandbox \"3804255ddc4a56c5ab24b76dcf92d28daed098b39ab3e1b25b5eb4e9f73d742a\" returns successfully" Aug 5 21:56:49.781646 containerd[2015]: time="2024-08-05T21:56:49.781544513Z" level=info msg="StopPodSandbox for \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\"" Aug 5 21:56:49.861985 sshd[5329]: pam_unix(sshd:session): session closed for user core Aug 5 21:56:49.880913 systemd[1]: sshd@8-172.31.26.69:22-147.75.109.163:57972.service: Deactivated successfully. Aug 5 21:56:49.898134 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 21:56:49.906733 systemd-logind[1995]: Session 9 logged out. Waiting for processes to exit. Aug 5 21:56:49.918161 systemd-logind[1995]: Removed session 9. Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:49.960 [WARNING][5386] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d7c936aa-fa2e-4a71-ab28-36e483f65d67", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ip-172-31-26-69", ContainerID:"d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c", Pod:"coredns-5dd5756b68-t762x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fec380de87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:49.961 [INFO][5386] k8s.go 608: Cleaning up netns ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:49.962 [INFO][5386] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" iface="eth0" netns="" Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:49.962 [INFO][5386] k8s.go 615: Releasing IP address(es) ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:49.963 [INFO][5386] utils.go 188: Calico CNI releasing IP address ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:50.044 [INFO][5396] ipam_plugin.go 411: Releasing address using handleID ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" HandleID="k8s-pod-network.ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:50.046 [INFO][5396] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:50.046 [INFO][5396] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:50.072 [WARNING][5396] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" HandleID="k8s-pod-network.ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:50.072 [INFO][5396] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" HandleID="k8s-pod-network.ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:50.081 [INFO][5396] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:50.094068 containerd[2015]: 2024-08-05 21:56:50.089 [INFO][5386] k8s.go 621: Teardown processing complete. ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:50.095598 containerd[2015]: time="2024-08-05T21:56:50.095383850Z" level=info msg="TearDown network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\" successfully" Aug 5 21:56:50.095598 containerd[2015]: time="2024-08-05T21:56:50.095583074Z" level=info msg="StopPodSandbox for \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\" returns successfully" Aug 5 21:56:50.100022 containerd[2015]: time="2024-08-05T21:56:50.099253034Z" level=info msg="RemovePodSandbox for \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\"" Aug 5 21:56:50.100022 containerd[2015]: time="2024-08-05T21:56:50.099323858Z" level=info msg="Forcibly stopping sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\"" Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.246 [WARNING][5414] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d7c936aa-fa2e-4a71-ab28-36e483f65d67", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"d2706a39b108080f9e71782a5f891329c70e18c4b1ce1bf3d058b0031d6c5a3c", Pod:"coredns-5dd5756b68-t762x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fec380de87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.248 [INFO][5414] k8s.go 608: Cleaning up netns 
ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.249 [INFO][5414] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" iface="eth0" netns="" Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.249 [INFO][5414] k8s.go 615: Releasing IP address(es) ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.249 [INFO][5414] utils.go 188: Calico CNI releasing IP address ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.351 [INFO][5420] ipam_plugin.go 411: Releasing address using handleID ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" HandleID="k8s-pod-network.ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.354 [INFO][5420] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.355 [INFO][5420] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.379 [WARNING][5420] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" HandleID="k8s-pod-network.ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.379 [INFO][5420] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" HandleID="k8s-pod-network.ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--t762x-eth0" Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.383 [INFO][5420] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:50.393030 containerd[2015]: 2024-08-05 21:56:50.388 [INFO][5414] k8s.go 621: Teardown processing complete. ContainerID="ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112" Aug 5 21:56:50.395397 containerd[2015]: time="2024-08-05T21:56:50.393945484Z" level=info msg="TearDown network for sandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\" successfully" Aug 5 21:56:50.404144 containerd[2015]: time="2024-08-05T21:56:50.403717036Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 21:56:50.404144 containerd[2015]: time="2024-08-05T21:56:50.403833808Z" level=info msg="RemovePodSandbox \"ffe89fd31d4df0c9cb0253681575ca4321fc824b816995d1fd2eacce2b542112\" returns successfully" Aug 5 21:56:50.406511 containerd[2015]: time="2024-08-05T21:56:50.406176844Z" level=info msg="StopPodSandbox for \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\"" Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.532 [WARNING][5440] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"42fbd30e-8f61-40db-b477-f13005157c9f", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6", Pod:"coredns-5dd5756b68-grq8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa06c92863b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.532 [INFO][5440] k8s.go 608: Cleaning up netns ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.533 [INFO][5440] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" iface="eth0" netns="" Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.533 [INFO][5440] k8s.go 615: Releasing IP address(es) ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.533 [INFO][5440] utils.go 188: Calico CNI releasing IP address ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.609 [INFO][5446] ipam_plugin.go 411: Releasing address using handleID ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" HandleID="k8s-pod-network.123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.610 [INFO][5446] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.610 [INFO][5446] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.626 [WARNING][5446] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" HandleID="k8s-pod-network.123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.626 [INFO][5446] ipam_plugin.go 439: Releasing address using workloadID ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" HandleID="k8s-pod-network.123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.631 [INFO][5446] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:50.648327 containerd[2015]: 2024-08-05 21:56:50.634 [INFO][5440] k8s.go 621: Teardown processing complete. 
ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:50.652246 containerd[2015]: time="2024-08-05T21:56:50.649993877Z" level=info msg="TearDown network for sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\" successfully" Aug 5 21:56:50.652246 containerd[2015]: time="2024-08-05T21:56:50.651387389Z" level=info msg="StopPodSandbox for \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\" returns successfully" Aug 5 21:56:50.655185 containerd[2015]: time="2024-08-05T21:56:50.655002089Z" level=info msg="RemovePodSandbox for \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\"" Aug 5 21:56:50.655589 containerd[2015]: time="2024-08-05T21:56:50.655181081Z" level=info msg="Forcibly stopping sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\"" Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.786 [WARNING][5464] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"42fbd30e-8f61-40db-b477-f13005157c9f", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"a7bf1c09ff992f9384e3caeb76bbf77177e20c49e0dcc405433a5e1ca7d3acd6", Pod:"coredns-5dd5756b68-grq8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa06c92863b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.787 [INFO][5464] k8s.go 608: Cleaning up netns 
ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.788 [INFO][5464] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" iface="eth0" netns="" Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.788 [INFO][5464] k8s.go 615: Releasing IP address(es) ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.788 [INFO][5464] utils.go 188: Calico CNI releasing IP address ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.859 [INFO][5470] ipam_plugin.go 411: Releasing address using handleID ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" HandleID="k8s-pod-network.123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.860 [INFO][5470] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.860 [INFO][5470] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.875 [WARNING][5470] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" HandleID="k8s-pod-network.123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.875 [INFO][5470] ipam_plugin.go 439: Releasing address using workloadID ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" HandleID="k8s-pod-network.123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Workload="ip--172--31--26--69-k8s-coredns--5dd5756b68--grq8t-eth0" Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.879 [INFO][5470] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:56:50.888052 containerd[2015]: 2024-08-05 21:56:50.884 [INFO][5464] k8s.go 621: Teardown processing complete. ContainerID="123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014" Aug 5 21:56:50.890117 containerd[2015]: time="2024-08-05T21:56:50.888176922Z" level=info msg="TearDown network for sandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\" successfully" Aug 5 21:56:50.898451 containerd[2015]: time="2024-08-05T21:56:50.897966018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:56:50.898451 containerd[2015]: time="2024-08-05T21:56:50.898071378Z" level=info msg="RemovePodSandbox \"123234ca1f6b2350ca91b9d0b1ab08ec3c9b0f090ed5a8b2f56cd40e95577014\" returns successfully" Aug 5 21:56:54.910910 systemd[1]: Started sshd@9-172.31.26.69:22-147.75.109.163:35462.service - OpenSSH per-connection server daemon (147.75.109.163:35462). 
Aug 5 21:56:55.117660 sshd[5496]: Accepted publickey for core from 147.75.109.163 port 35462 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:56:55.124341 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:56:55.136791 systemd-logind[1995]: New session 10 of user core. Aug 5 21:56:55.141981 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 21:56:55.432203 sshd[5496]: pam_unix(sshd:session): session closed for user core Aug 5 21:56:55.442231 systemd[1]: sshd@9-172.31.26.69:22-147.75.109.163:35462.service: Deactivated successfully. Aug 5 21:56:55.451714 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 21:56:55.454516 systemd-logind[1995]: Session 10 logged out. Waiting for processes to exit. Aug 5 21:56:55.482390 systemd[1]: Started sshd@10-172.31.26.69:22-147.75.109.163:35466.service - OpenSSH per-connection server daemon (147.75.109.163:35466). Aug 5 21:56:55.485666 systemd-logind[1995]: Removed session 10. Aug 5 21:56:55.690649 sshd[5510]: Accepted publickey for core from 147.75.109.163 port 35466 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:56:55.697725 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:56:55.726783 systemd-logind[1995]: New session 11 of user core. Aug 5 21:56:55.743175 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 21:56:56.539074 sshd[5510]: pam_unix(sshd:session): session closed for user core Aug 5 21:56:56.562764 systemd[1]: sshd@10-172.31.26.69:22-147.75.109.163:35466.service: Deactivated successfully. Aug 5 21:56:56.581169 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 21:56:56.624092 systemd-logind[1995]: Session 11 logged out. Waiting for processes to exit. Aug 5 21:56:56.640156 systemd[1]: Started sshd@11-172.31.26.69:22-147.75.109.163:35472.service - OpenSSH per-connection server daemon (147.75.109.163:35472). 
Aug 5 21:56:56.645914 systemd-logind[1995]: Removed session 11. Aug 5 21:56:56.858837 sshd[5528]: Accepted publickey for core from 147.75.109.163 port 35472 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:56:56.863396 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:56:56.876210 systemd-logind[1995]: New session 12 of user core. Aug 5 21:56:56.884871 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 21:56:57.199645 sshd[5528]: pam_unix(sshd:session): session closed for user core Aug 5 21:56:57.208114 systemd[1]: sshd@11-172.31.26.69:22-147.75.109.163:35472.service: Deactivated successfully. Aug 5 21:56:57.218188 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 21:56:57.224035 systemd-logind[1995]: Session 12 logged out. Waiting for processes to exit. Aug 5 21:56:57.227941 systemd-logind[1995]: Removed session 12. Aug 5 21:57:02.270851 systemd[1]: Started sshd@12-172.31.26.69:22-147.75.109.163:35480.service - OpenSSH per-connection server daemon (147.75.109.163:35480). Aug 5 21:57:02.485098 sshd[5554]: Accepted publickey for core from 147.75.109.163 port 35480 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:02.489882 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:02.509927 systemd-logind[1995]: New session 13 of user core. Aug 5 21:57:02.516030 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 21:57:02.866209 sshd[5554]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:02.880265 systemd[1]: sshd@12-172.31.26.69:22-147.75.109.163:35480.service: Deactivated successfully. Aug 5 21:57:02.889730 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 21:57:02.892996 systemd-logind[1995]: Session 13 logged out. Waiting for processes to exit. Aug 5 21:57:02.898585 systemd-logind[1995]: Removed session 13. 
Aug 5 21:57:03.978309 systemd[1]: run-containerd-runc-k8s.io-f3c59f399c582685d71a928801702c66b111d16e6e3d27329a12404f4fe1600f-runc.31kmkL.mount: Deactivated successfully. Aug 5 21:57:07.917265 systemd[1]: Started sshd@13-172.31.26.69:22-147.75.109.163:53396.service - OpenSSH per-connection server daemon (147.75.109.163:53396). Aug 5 21:57:08.119175 sshd[5595]: Accepted publickey for core from 147.75.109.163 port 53396 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:08.125714 sshd[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:08.142871 systemd-logind[1995]: New session 14 of user core. Aug 5 21:57:08.153229 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 21:57:08.487120 sshd[5595]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:08.498443 systemd[1]: sshd@13-172.31.26.69:22-147.75.109.163:53396.service: Deactivated successfully. Aug 5 21:57:08.506299 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 21:57:08.509386 systemd-logind[1995]: Session 14 logged out. Waiting for processes to exit. Aug 5 21:57:08.513960 systemd-logind[1995]: Removed session 14. Aug 5 21:57:13.535328 systemd[1]: Started sshd@14-172.31.26.69:22-147.75.109.163:53406.service - OpenSSH per-connection server daemon (147.75.109.163:53406). Aug 5 21:57:13.748145 sshd[5628]: Accepted publickey for core from 147.75.109.163 port 53406 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:13.753047 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:13.774412 systemd-logind[1995]: New session 15 of user core. Aug 5 21:57:13.780276 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 21:57:14.128250 sshd[5628]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:14.139532 systemd[1]: sshd@14-172.31.26.69:22-147.75.109.163:53406.service: Deactivated successfully. 
Aug 5 21:57:14.146668 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 21:57:14.149672 systemd-logind[1995]: Session 15 logged out. Waiting for processes to exit. Aug 5 21:57:14.153374 systemd-logind[1995]: Removed session 15. Aug 5 21:57:19.177617 systemd[1]: Started sshd@15-172.31.26.69:22-147.75.109.163:60872.service - OpenSSH per-connection server daemon (147.75.109.163:60872). Aug 5 21:57:19.385534 sshd[5652]: Accepted publickey for core from 147.75.109.163 port 60872 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:19.392062 sshd[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:19.406180 systemd-logind[1995]: New session 16 of user core. Aug 5 21:57:19.420174 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 21:57:19.874798 sshd[5652]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:19.888093 systemd[1]: sshd@15-172.31.26.69:22-147.75.109.163:60872.service: Deactivated successfully. Aug 5 21:57:19.904611 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 21:57:19.908622 systemd-logind[1995]: Session 16 logged out. Waiting for processes to exit. Aug 5 21:57:19.920900 systemd-logind[1995]: Removed session 16. Aug 5 21:57:24.920120 systemd[1]: Started sshd@16-172.31.26.69:22-147.75.109.163:56324.service - OpenSSH per-connection server daemon (147.75.109.163:56324). Aug 5 21:57:25.116596 sshd[5686]: Accepted publickey for core from 147.75.109.163 port 56324 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:25.120241 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:25.132364 systemd-logind[1995]: New session 17 of user core. Aug 5 21:57:25.139914 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 5 21:57:25.441117 sshd[5686]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:25.453517 systemd[1]: sshd@16-172.31.26.69:22-147.75.109.163:56324.service: Deactivated successfully. Aug 5 21:57:25.464036 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 21:57:25.467881 systemd-logind[1995]: Session 17 logged out. Waiting for processes to exit. Aug 5 21:57:25.504706 systemd[1]: Started sshd@17-172.31.26.69:22-147.75.109.163:56326.service - OpenSSH per-connection server daemon (147.75.109.163:56326). Aug 5 21:57:25.519186 systemd-logind[1995]: Removed session 17. Aug 5 21:57:25.726302 sshd[5699]: Accepted publickey for core from 147.75.109.163 port 56326 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:25.730182 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:25.743876 systemd-logind[1995]: New session 18 of user core. Aug 5 21:57:25.747138 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 21:57:26.273152 sshd[5699]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:26.282457 systemd[1]: sshd@17-172.31.26.69:22-147.75.109.163:56326.service: Deactivated successfully. Aug 5 21:57:26.291912 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 21:57:26.302919 systemd-logind[1995]: Session 18 logged out. Waiting for processes to exit. Aug 5 21:57:26.328685 systemd[1]: Started sshd@18-172.31.26.69:22-147.75.109.163:56340.service - OpenSSH per-connection server daemon (147.75.109.163:56340). Aug 5 21:57:26.332315 systemd-logind[1995]: Removed session 18. Aug 5 21:57:26.525219 sshd[5709]: Accepted publickey for core from 147.75.109.163 port 56340 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:26.529919 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:26.541603 systemd-logind[1995]: New session 19 of user core. 
Aug 5 21:57:26.548105 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 21:57:28.362729 sshd[5709]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:28.380189 systemd[1]: sshd@18-172.31.26.69:22-147.75.109.163:56340.service: Deactivated successfully. Aug 5 21:57:28.396864 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 21:57:28.401011 systemd[1]: session-19.scope: Consumed 1.258s CPU time. Aug 5 21:57:28.415055 systemd-logind[1995]: Session 19 logged out. Waiting for processes to exit. Aug 5 21:57:28.465802 systemd[1]: Started sshd@19-172.31.26.69:22-147.75.109.163:56344.service - OpenSSH per-connection server daemon (147.75.109.163:56344). Aug 5 21:57:28.470074 systemd-logind[1995]: Removed session 19. Aug 5 21:57:28.673579 sshd[5727]: Accepted publickey for core from 147.75.109.163 port 56344 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:28.681963 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:28.704305 systemd-logind[1995]: New session 20 of user core. Aug 5 21:57:28.713747 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 21:57:29.639243 sshd[5727]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:29.648652 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 21:57:29.656816 systemd[1]: sshd@19-172.31.26.69:22-147.75.109.163:56344.service: Deactivated successfully. Aug 5 21:57:29.665895 systemd-logind[1995]: Session 20 logged out. Waiting for processes to exit. Aug 5 21:57:29.697769 systemd[1]: Started sshd@20-172.31.26.69:22-147.75.109.163:56354.service - OpenSSH per-connection server daemon (147.75.109.163:56354). Aug 5 21:57:29.700910 systemd-logind[1995]: Removed session 20. 
Aug 5 21:57:29.903882 sshd[5746]: Accepted publickey for core from 147.75.109.163 port 56354 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:29.910316 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:29.922681 systemd-logind[1995]: New session 21 of user core. Aug 5 21:57:29.928837 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 21:57:30.214095 sshd[5746]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:30.224773 systemd-logind[1995]: Session 21 logged out. Waiting for processes to exit. Aug 5 21:57:30.226514 systemd[1]: sshd@20-172.31.26.69:22-147.75.109.163:56354.service: Deactivated successfully. Aug 5 21:57:30.234018 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 21:57:30.246086 systemd-logind[1995]: Removed session 21. Aug 5 21:57:34.128347 kubelet[3315]: I0805 21:57:34.128113 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-k4s94" podStartSLOduration=75.245344897 podCreationTimestamp="2024-08-05 21:56:10 +0000 UTC" firstStartedPulling="2024-08-05 21:56:38.646378565 +0000 UTC m=+50.779936201" lastFinishedPulling="2024-08-05 21:56:47.528798181 +0000 UTC m=+59.662355841" observedRunningTime="2024-08-05 21:56:48.0550044 +0000 UTC m=+60.188562072" watchObservedRunningTime="2024-08-05 21:57:34.127764537 +0000 UTC m=+106.261322833" Aug 5 21:57:35.254142 systemd[1]: Started sshd@21-172.31.26.69:22-147.75.109.163:44280.service - OpenSSH per-connection server daemon (147.75.109.163:44280). Aug 5 21:57:35.456395 sshd[5783]: Accepted publickey for core from 147.75.109.163 port 44280 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:35.461040 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:35.471430 systemd-logind[1995]: New session 22 of user core. 
Aug 5 21:57:35.481116 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 21:57:35.762146 sshd[5783]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:35.771069 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 21:57:35.777886 systemd[1]: sshd@21-172.31.26.69:22-147.75.109.163:44280.service: Deactivated successfully. Aug 5 21:57:35.786783 systemd-logind[1995]: Session 22 logged out. Waiting for processes to exit. Aug 5 21:57:35.789194 systemd-logind[1995]: Removed session 22. Aug 5 21:57:38.171911 kubelet[3315]: I0805 21:57:38.171806 3315 topology_manager.go:215] "Topology Admit Handler" podUID="16e79faf-e710-499e-bc95-fe0617d7b1b0" podNamespace="calico-apiserver" podName="calico-apiserver-d8754fdb6-gjw27" Aug 5 21:57:38.201312 systemd[1]: Created slice kubepods-besteffort-pod16e79faf_e710_499e_bc95_fe0617d7b1b0.slice - libcontainer container kubepods-besteffort-pod16e79faf_e710_499e_bc95_fe0617d7b1b0.slice. Aug 5 21:57:38.219851 kubelet[3315]: I0805 21:57:38.219065 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dksz5\" (UniqueName: \"kubernetes.io/projected/16e79faf-e710-499e-bc95-fe0617d7b1b0-kube-api-access-dksz5\") pod \"calico-apiserver-d8754fdb6-gjw27\" (UID: \"16e79faf-e710-499e-bc95-fe0617d7b1b0\") " pod="calico-apiserver/calico-apiserver-d8754fdb6-gjw27" Aug 5 21:57:38.219851 kubelet[3315]: I0805 21:57:38.219225 3315 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16e79faf-e710-499e-bc95-fe0617d7b1b0-calico-apiserver-certs\") pod \"calico-apiserver-d8754fdb6-gjw27\" (UID: \"16e79faf-e710-499e-bc95-fe0617d7b1b0\") " pod="calico-apiserver/calico-apiserver-d8754fdb6-gjw27" Aug 5 21:57:38.223629 kubelet[3315]: W0805 21:57:38.222529 3315 reflector.go:535] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: 
secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-26-69" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-26-69' and this object Aug 5 21:57:38.223629 kubelet[3315]: E0805 21:57:38.222621 3315 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-26-69" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-26-69' and this object Aug 5 21:57:39.321023 kubelet[3315]: E0805 21:57:39.320944 3315 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Aug 5 21:57:39.322769 kubelet[3315]: E0805 21:57:39.321108 3315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16e79faf-e710-499e-bc95-fe0617d7b1b0-calico-apiserver-certs podName:16e79faf-e710-499e-bc95-fe0617d7b1b0 nodeName:}" failed. No retries permitted until 2024-08-05 21:57:39.821056535 +0000 UTC m=+111.954614195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/16e79faf-e710-499e-bc95-fe0617d7b1b0-calico-apiserver-certs") pod "calico-apiserver-d8754fdb6-gjw27" (UID: "16e79faf-e710-499e-bc95-fe0617d7b1b0") : failed to sync secret cache: timed out waiting for the condition Aug 5 21:57:40.014402 containerd[2015]: time="2024-08-05T21:57:40.013810730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8754fdb6-gjw27,Uid:16e79faf-e710-499e-bc95-fe0617d7b1b0,Namespace:calico-apiserver,Attempt:0,}" Aug 5 21:57:40.371889 systemd-networkd[1930]: calic665e53daec: Link UP Aug 5 21:57:40.375709 systemd-networkd[1930]: calic665e53daec: Gained carrier Aug 5 21:57:40.393078 (udev-worker)[5828]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.168 [INFO][5810] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0 calico-apiserver-d8754fdb6- calico-apiserver 16e79faf-e710-499e-bc95-fe0617d7b1b0 1126 0 2024-08-05 21:57:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8754fdb6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-69 calico-apiserver-d8754fdb6-gjw27 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic665e53daec [] []}} ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Namespace="calico-apiserver" Pod="calico-apiserver-d8754fdb6-gjw27" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.169 [INFO][5810] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Namespace="calico-apiserver" Pod="calico-apiserver-d8754fdb6-gjw27" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.246 [INFO][5821] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" HandleID="k8s-pod-network.954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Workload="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.272 [INFO][5821] ipam_plugin.go 264: Auto assigning IP ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" HandleID="k8s-pod-network.954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Workload="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-69", "pod":"calico-apiserver-d8754fdb6-gjw27", "timestamp":"2024-08-05 21:57:40.246127707 +0000 UTC"}, Hostname:"ip-172-31-26-69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.273 [INFO][5821] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.273 [INFO][5821] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.276 [INFO][5821] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-69' Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.283 [INFO][5821] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" host="ip-172-31-26-69" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.304 [INFO][5821] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-69" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.319 [INFO][5821] ipam.go 489: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.325 [INFO][5821] ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.330 [INFO][5821] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-69" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.330 [INFO][5821] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" host="ip-172-31-26-69" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.336 [INFO][5821] ipam.go 1685: Creating new handle: k8s-pod-network.954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.344 [INFO][5821] ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" host="ip-172-31-26-69" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.356 [INFO][5821] ipam.go 1216: Successfully claimed IPs: [192.168.34.69/26] block=192.168.34.64/26 
handle="k8s-pod-network.954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" host="ip-172-31-26-69" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.356 [INFO][5821] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.69/26] handle="k8s-pod-network.954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" host="ip-172-31-26-69" Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.357 [INFO][5821] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:57:40.410847 containerd[2015]: 2024-08-05 21:57:40.357 [INFO][5821] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.34.69/26] IPv6=[] ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" HandleID="k8s-pod-network.954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Workload="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" Aug 5 21:57:40.412828 containerd[2015]: 2024-08-05 21:57:40.364 [INFO][5810] k8s.go 386: Populated endpoint ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Namespace="calico-apiserver" Pod="calico-apiserver-d8754fdb6-gjw27" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0", GenerateName:"calico-apiserver-d8754fdb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"16e79faf-e710-499e-bc95-fe0617d7b1b0", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 57, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8754fdb6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"", Pod:"calico-apiserver-d8754fdb6-gjw27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic665e53daec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:57:40.412828 containerd[2015]: 2024-08-05 21:57:40.365 [INFO][5810] k8s.go 387: Calico CNI using IPs: [192.168.34.69/32] ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Namespace="calico-apiserver" Pod="calico-apiserver-d8754fdb6-gjw27" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" Aug 5 21:57:40.412828 containerd[2015]: 2024-08-05 21:57:40.365 [INFO][5810] dataplane_linux.go 68: Setting the host side veth name to calic665e53daec ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Namespace="calico-apiserver" Pod="calico-apiserver-d8754fdb6-gjw27" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" Aug 5 21:57:40.412828 containerd[2015]: 2024-08-05 21:57:40.375 [INFO][5810] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Namespace="calico-apiserver" Pod="calico-apiserver-d8754fdb6-gjw27" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" Aug 5 21:57:40.412828 containerd[2015]: 2024-08-05 21:57:40.380 [INFO][5810] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" Namespace="calico-apiserver" Pod="calico-apiserver-d8754fdb6-gjw27" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0", GenerateName:"calico-apiserver-d8754fdb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"16e79faf-e710-499e-bc95-fe0617d7b1b0", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 57, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8754fdb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-69", ContainerID:"954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f", Pod:"calico-apiserver-d8754fdb6-gjw27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic665e53daec", MAC:"a6:84:ac:f3:6d:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:57:40.412828 containerd[2015]: 2024-08-05 21:57:40.405 [INFO][5810] k8s.go 500: Wrote updated endpoint to datastore ContainerID="954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f" 
Namespace="calico-apiserver" Pod="calico-apiserver-d8754fdb6-gjw27" WorkloadEndpoint="ip--172--31--26--69-k8s-calico--apiserver--d8754fdb6--gjw27-eth0" Aug 5 21:57:40.501457 containerd[2015]: time="2024-08-05T21:57:40.501171149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:57:40.501856 containerd[2015]: time="2024-08-05T21:57:40.501369869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:57:40.501856 containerd[2015]: time="2024-08-05T21:57:40.501443561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:57:40.501856 containerd[2015]: time="2024-08-05T21:57:40.501525173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:57:40.596219 systemd[1]: Started cri-containerd-954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f.scope - libcontainer container 954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f. Aug 5 21:57:40.810320 systemd[1]: Started sshd@22-172.31.26.69:22-147.75.109.163:44286.service - OpenSSH per-connection server daemon (147.75.109.163:44286). 
Aug 5 21:57:40.817063 containerd[2015]: time="2024-08-05T21:57:40.815841666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8754fdb6-gjw27,Uid:16e79faf-e710-499e-bc95-fe0617d7b1b0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f\"" Aug 5 21:57:40.826953 containerd[2015]: time="2024-08-05T21:57:40.826842762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 21:57:41.051079 sshd[5885]: Accepted publickey for core from 147.75.109.163 port 44286 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:41.056149 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:41.069914 systemd-logind[1995]: New session 23 of user core. Aug 5 21:57:41.080027 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 21:57:41.415314 sshd[5885]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:41.428390 systemd-logind[1995]: Session 23 logged out. Waiting for processes to exit. Aug 5 21:57:41.430266 systemd[1]: sshd@22-172.31.26.69:22-147.75.109.163:44286.service: Deactivated successfully. Aug 5 21:57:41.438783 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 21:57:41.444700 systemd-logind[1995]: Removed session 23. 
Aug 5 21:57:42.002190 systemd-networkd[1930]: calic665e53daec: Gained IPv6LL Aug 5 21:57:44.425278 ntpd[1990]: Listen normally on 14 calic665e53daec [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 21:57:44.428945 ntpd[1990]: 5 Aug 21:57:44 ntpd[1990]: Listen normally on 14 calic665e53daec [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 21:57:44.574947 containerd[2015]: time="2024-08-05T21:57:44.574855785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:57:44.578180 containerd[2015]: time="2024-08-05T21:57:44.578010777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Aug 5 21:57:44.583325 containerd[2015]: time="2024-08-05T21:57:44.582837825Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:57:44.593903 containerd[2015]: time="2024-08-05T21:57:44.593803941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:57:44.601357 containerd[2015]: time="2024-08-05T21:57:44.600251145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 3.773292427s" Aug 5 21:57:44.601357 containerd[2015]: time="2024-08-05T21:57:44.600353205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Aug 5 
21:57:44.623810 containerd[2015]: time="2024-08-05T21:57:44.623730021Z" level=info msg="CreateContainer within sandbox \"954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 21:57:44.683602 containerd[2015]: time="2024-08-05T21:57:44.680759397Z" level=info msg="CreateContainer within sandbox \"954a0d110f158dca3dc1c22d98d113647b6fac44960a3ad8a1abd328ca07127f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ae281e146a5917e4542871b7ab2292aef662162ca039665d116ba1cf72c2dde9\"" Aug 5 21:57:44.682302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2298251072.mount: Deactivated successfully. Aug 5 21:57:44.690412 containerd[2015]: time="2024-08-05T21:57:44.688849401Z" level=info msg="StartContainer for \"ae281e146a5917e4542871b7ab2292aef662162ca039665d116ba1cf72c2dde9\"" Aug 5 21:57:44.803818 systemd[1]: Started cri-containerd-ae281e146a5917e4542871b7ab2292aef662162ca039665d116ba1cf72c2dde9.scope - libcontainer container ae281e146a5917e4542871b7ab2292aef662162ca039665d116ba1cf72c2dde9. Aug 5 21:57:44.986710 containerd[2015]: time="2024-08-05T21:57:44.986398919Z" level=info msg="StartContainer for \"ae281e146a5917e4542871b7ab2292aef662162ca039665d116ba1cf72c2dde9\" returns successfully" Aug 5 21:57:46.468446 systemd[1]: Started sshd@23-172.31.26.69:22-147.75.109.163:49128.service - OpenSSH per-connection server daemon (147.75.109.163:49128). Aug 5 21:57:46.678979 sshd[5950]: Accepted publickey for core from 147.75.109.163 port 49128 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:46.684340 sshd[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:46.699055 systemd-logind[1995]: New session 24 of user core. Aug 5 21:57:46.708935 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 5 21:57:47.097594 sshd[5950]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:47.115261 systemd[1]: sshd@23-172.31.26.69:22-147.75.109.163:49128.service: Deactivated successfully. Aug 5 21:57:47.129925 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 21:57:47.136657 systemd-logind[1995]: Session 24 logged out. Waiting for processes to exit. Aug 5 21:57:47.142985 systemd-logind[1995]: Removed session 24. Aug 5 21:57:47.719839 kubelet[3315]: I0805 21:57:47.719763 3315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d8754fdb6-gjw27" podStartSLOduration=5.941749329 podCreationTimestamp="2024-08-05 21:57:38 +0000 UTC" firstStartedPulling="2024-08-05 21:57:40.824437158 +0000 UTC m=+112.957994818" lastFinishedPulling="2024-08-05 21:57:44.602378505 +0000 UTC m=+116.735936153" observedRunningTime="2024-08-05 21:57:45.432456321 +0000 UTC m=+117.566013993" watchObservedRunningTime="2024-08-05 21:57:47.719690664 +0000 UTC m=+119.853248336" Aug 5 21:57:52.167773 systemd[1]: Started sshd@24-172.31.26.69:22-147.75.109.163:49134.service - OpenSSH per-connection server daemon (147.75.109.163:49134). Aug 5 21:57:52.369426 sshd[5975]: Accepted publickey for core from 147.75.109.163 port 49134 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:52.377348 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:52.405269 systemd-logind[1995]: New session 25 of user core. Aug 5 21:57:52.413091 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 5 21:57:52.785086 sshd[5975]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:52.802294 systemd[1]: sshd@24-172.31.26.69:22-147.75.109.163:49134.service: Deactivated successfully. Aug 5 21:57:52.818373 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 21:57:52.825367 systemd-logind[1995]: Session 25 logged out. Waiting for processes to exit. 
Aug 5 21:57:52.830195 systemd-logind[1995]: Removed session 25. Aug 5 21:57:57.835379 systemd[1]: Started sshd@25-172.31.26.69:22-147.75.109.163:46668.service - OpenSSH per-connection server daemon (147.75.109.163:46668). Aug 5 21:57:58.036742 sshd[6013]: Accepted publickey for core from 147.75.109.163 port 46668 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:57:58.041453 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:57:58.054011 systemd-logind[1995]: New session 26 of user core. Aug 5 21:57:58.058868 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 5 21:57:58.365565 sshd[6013]: pam_unix(sshd:session): session closed for user core Aug 5 21:57:58.372523 systemd[1]: sshd@25-172.31.26.69:22-147.75.109.163:46668.service: Deactivated successfully. Aug 5 21:57:58.377648 systemd[1]: session-26.scope: Deactivated successfully. Aug 5 21:57:58.381810 systemd-logind[1995]: Session 26 logged out. Waiting for processes to exit. Aug 5 21:57:58.385876 systemd-logind[1995]: Removed session 26. Aug 5 21:58:03.415966 systemd[1]: Started sshd@26-172.31.26.69:22-147.75.109.163:46684.service - OpenSSH per-connection server daemon (147.75.109.163:46684). Aug 5 21:58:03.616857 sshd[6036]: Accepted publickey for core from 147.75.109.163 port 46684 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:58:03.623123 sshd[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:58:03.640260 systemd-logind[1995]: New session 27 of user core. Aug 5 21:58:03.648860 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 5 21:58:04.042898 sshd[6036]: pam_unix(sshd:session): session closed for user core Aug 5 21:58:04.063593 systemd[1]: sshd@26-172.31.26.69:22-147.75.109.163:46684.service: Deactivated successfully. Aug 5 21:58:04.075101 systemd[1]: session-27.scope: Deactivated successfully. 
Aug 5 21:58:04.087868 systemd-logind[1995]: Session 27 logged out. Waiting for processes to exit. Aug 5 21:58:04.092326 systemd-logind[1995]: Removed session 27. Aug 5 21:58:09.090526 systemd[1]: Started sshd@27-172.31.26.69:22-147.75.109.163:46636.service - OpenSSH per-connection server daemon (147.75.109.163:46636). Aug 5 21:58:09.307675 sshd[6070]: Accepted publickey for core from 147.75.109.163 port 46636 ssh2: RSA SHA256:wCWVMkJZxcY0ESuxWRGGxvacdmS2jITy+nqYuVzP2ZU Aug 5 21:58:09.310784 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:58:09.321893 systemd-logind[1995]: New session 28 of user core. Aug 5 21:58:09.328829 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 5 21:58:09.647268 sshd[6070]: pam_unix(sshd:session): session closed for user core Aug 5 21:58:09.662347 systemd[1]: sshd@27-172.31.26.69:22-147.75.109.163:46636.service: Deactivated successfully. Aug 5 21:58:09.664734 systemd-logind[1995]: Session 28 logged out. Waiting for processes to exit. Aug 5 21:58:09.679231 systemd[1]: session-28.scope: Deactivated successfully. Aug 5 21:58:09.693537 systemd-logind[1995]: Removed session 28. Aug 5 21:58:24.055701 systemd[1]: cri-containerd-4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2.scope: Deactivated successfully. Aug 5 21:58:24.058820 systemd[1]: cri-containerd-4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2.scope: Consumed 8.869s CPU time, 22.2M memory peak, 0B memory swap peak. 
Aug 5 21:58:24.143867 containerd[2015]: time="2024-08-05T21:58:24.143638929Z" level=info msg="shim disconnected" id=4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2 namespace=k8s.io
Aug 5 21:58:24.146916 containerd[2015]: time="2024-08-05T21:58:24.143858565Z" level=warning msg="cleaning up after shim disconnected" id=4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2 namespace=k8s.io
Aug 5 21:58:24.146916 containerd[2015]: time="2024-08-05T21:58:24.143907981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:58:24.149286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2-rootfs.mount: Deactivated successfully.
Aug 5 21:58:24.588041 kubelet[3315]: I0805 21:58:24.587989 3315 scope.go:117] "RemoveContainer" containerID="4605d53a361d81c6d2b395cee53828c5fa78a199d05e0e4c4bda796f1d0ecaf2"
Aug 5 21:58:24.595779 containerd[2015]: time="2024-08-05T21:58:24.595721028Z" level=info msg="CreateContainer within sandbox \"f6123fb6911594d1f1e936819dea1fde02bb93681620d28e9ef1bbfd7a891402\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 5 21:58:24.639851 containerd[2015]: time="2024-08-05T21:58:24.639729468Z" level=info msg="CreateContainer within sandbox \"f6123fb6911594d1f1e936819dea1fde02bb93681620d28e9ef1bbfd7a891402\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3e71c32320aaf8af9d09d3c42dc61b8adf5830f6024ae7c174de2f8784fe0577\""
Aug 5 21:58:24.643002 containerd[2015]: time="2024-08-05T21:58:24.641855832Z" level=info msg="StartContainer for \"3e71c32320aaf8af9d09d3c42dc61b8adf5830f6024ae7c174de2f8784fe0577\""
Aug 5 21:58:24.739562 systemd[1]: Started cri-containerd-3e71c32320aaf8af9d09d3c42dc61b8adf5830f6024ae7c174de2f8784fe0577.scope - libcontainer container 3e71c32320aaf8af9d09d3c42dc61b8adf5830f6024ae7c174de2f8784fe0577.
Aug 5 21:58:24.827091 containerd[2015]: time="2024-08-05T21:58:24.827005813Z" level=info msg="StartContainer for \"3e71c32320aaf8af9d09d3c42dc61b8adf5830f6024ae7c174de2f8784fe0577\" returns successfully"
Aug 5 21:58:24.920914 systemd[1]: cri-containerd-a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a.scope: Deactivated successfully.
Aug 5 21:58:24.923568 systemd[1]: cri-containerd-a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a.scope: Consumed 13.901s CPU time.
Aug 5 21:58:25.020650 containerd[2015]: time="2024-08-05T21:58:25.015751954Z" level=info msg="shim disconnected" id=a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a namespace=k8s.io
Aug 5 21:58:25.020650 containerd[2015]: time="2024-08-05T21:58:25.015877150Z" level=warning msg="cleaning up after shim disconnected" id=a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a namespace=k8s.io
Aug 5 21:58:25.020650 containerd[2015]: time="2024-08-05T21:58:25.015899578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:58:25.020808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a-rootfs.mount: Deactivated successfully.
Aug 5 21:58:25.610384 kubelet[3315]: I0805 21:58:25.610225 3315 scope.go:117] "RemoveContainer" containerID="a90d82cdb8764528ced366164d28a36a7a4cf9ddd1b16648ac6630ee34256d2a"
Aug 5 21:58:25.620096 containerd[2015]: time="2024-08-05T21:58:25.619662997Z" level=info msg="CreateContainer within sandbox \"a729e1cf29a1a60fac8db0dc23f87a39d51220f5db64cb507a7323d6825781ca\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 5 21:58:25.663065 containerd[2015]: time="2024-08-05T21:58:25.662592145Z" level=info msg="CreateContainer within sandbox \"a729e1cf29a1a60fac8db0dc23f87a39d51220f5db64cb507a7323d6825781ca\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4fcf4ada49447bfe7a2f555e33c29a0c91d6f44a54597a70d1519e52f195c38e\""
Aug 5 21:58:25.669640 containerd[2015]: time="2024-08-05T21:58:25.668450701Z" level=info msg="StartContainer for \"4fcf4ada49447bfe7a2f555e33c29a0c91d6f44a54597a70d1519e52f195c38e\""
Aug 5 21:58:25.810738 systemd[1]: Started cri-containerd-4fcf4ada49447bfe7a2f555e33c29a0c91d6f44a54597a70d1519e52f195c38e.scope - libcontainer container 4fcf4ada49447bfe7a2f555e33c29a0c91d6f44a54597a70d1519e52f195c38e.
Aug 5 21:58:25.916181 containerd[2015]: time="2024-08-05T21:58:25.915436406Z" level=info msg="StartContainer for \"4fcf4ada49447bfe7a2f555e33c29a0c91d6f44a54597a70d1519e52f195c38e\" returns successfully"
Aug 5 21:58:29.098953 systemd[1]: cri-containerd-92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc.scope: Deactivated successfully.
Aug 5 21:58:29.104152 systemd[1]: cri-containerd-92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc.scope: Consumed 3.789s CPU time, 15.5M memory peak, 0B memory swap peak.
Aug 5 21:58:29.172700 containerd[2015]: time="2024-08-05T21:58:29.171801494Z" level=info msg="shim disconnected" id=92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc namespace=k8s.io
Aug 5 21:58:29.172700 containerd[2015]: time="2024-08-05T21:58:29.171942026Z" level=warning msg="cleaning up after shim disconnected" id=92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc namespace=k8s.io
Aug 5 21:58:29.172700 containerd[2015]: time="2024-08-05T21:58:29.171970358Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 21:58:29.174124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc-rootfs.mount: Deactivated successfully.
Aug 5 21:58:29.637291 kubelet[3315]: I0805 21:58:29.637198 3315 scope.go:117] "RemoveContainer" containerID="92b13b26756956440a34bd200658d1479f04fcc8ac84575278ef4b597e7c6ebc"
Aug 5 21:58:29.647095 containerd[2015]: time="2024-08-05T21:58:29.647012453Z" level=info msg="CreateContainer within sandbox \"af18780cd5b999c5cdb19f69841560b8d0cfef27613848735f2900cbe72e009a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 5 21:58:29.688568 containerd[2015]: time="2024-08-05T21:58:29.687263369Z" level=info msg="CreateContainer within sandbox \"af18780cd5b999c5cdb19f69841560b8d0cfef27613848735f2900cbe72e009a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a8e17d9e922f95f5a93861849365dc97e652243a598c9e2b5933ba4ee08cbaa2\""
Aug 5 21:58:29.691396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2850809643.mount: Deactivated successfully.
Aug 5 21:58:29.694609 containerd[2015]: time="2024-08-05T21:58:29.694128353Z" level=info msg="StartContainer for \"a8e17d9e922f95f5a93861849365dc97e652243a598c9e2b5933ba4ee08cbaa2\""
Aug 5 21:58:29.783226 systemd[1]: Started cri-containerd-a8e17d9e922f95f5a93861849365dc97e652243a598c9e2b5933ba4ee08cbaa2.scope - libcontainer container a8e17d9e922f95f5a93861849365dc97e652243a598c9e2b5933ba4ee08cbaa2.
Aug 5 21:58:29.884503 containerd[2015]: time="2024-08-05T21:58:29.881717970Z" level=info msg="StartContainer for \"a8e17d9e922f95f5a93861849365dc97e652243a598c9e2b5933ba4ee08cbaa2\" returns successfully"
Aug 5 21:58:30.170616 systemd[1]: run-containerd-runc-k8s.io-a8e17d9e922f95f5a93861849365dc97e652243a598c9e2b5933ba4ee08cbaa2-runc.sGQs9v.mount: Deactivated successfully.
Aug 5 21:58:31.303661 kubelet[3315]: E0805 21:58:31.302848 3315 controller.go:193] "Failed to update lease" err="Put \"https://172.31.26.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-69?timeout=10s\": context deadline exceeded"
Aug 5 21:58:33.965963 systemd[1]: run-containerd-runc-k8s.io-f3c59f399c582685d71a928801702c66b111d16e6e3d27329a12404f4fe1600f-runc.uDVSaK.mount: Deactivated successfully.