Sep 4 17:36:30.066705 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024
Sep 4 17:36:30.066750 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:36:30.066836 kernel: BIOS-provided physical RAM map:
Sep 4 17:36:30.066848 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 17:36:30.066859 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 17:36:30.066871 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 17:36:30.066890 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Sep 4 17:36:30.066901 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Sep 4 17:36:30.066911 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Sep 4 17:36:30.066922 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 17:36:30.066933 kernel: NX (Execute Disable) protection: active
Sep 4 17:36:30.066946 kernel: APIC: Static calls initialized
Sep 4 17:36:30.066957 kernel: SMBIOS 2.7 present.
Sep 4 17:36:30.066968 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 4 17:36:30.066986 kernel: Hypervisor detected: KVM
Sep 4 17:36:30.066999 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 17:36:30.067012 kernel: kvm-clock: using sched offset of 6318012721 cycles
Sep 4 17:36:30.067062 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 17:36:30.067128 kernel: tsc: Detected 2500.004 MHz processor
Sep 4 17:36:30.067181 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:36:30.067195 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:36:30.067213 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Sep 4 17:36:30.067226 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 17:36:30.067358 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:36:30.067418 kernel: Using GB pages for direct mapping
Sep 4 17:36:30.067440 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:36:30.067457 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Sep 4 17:36:30.067471 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Sep 4 17:36:30.067488 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 17:36:30.067503 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 4 17:36:30.067523 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Sep 4 17:36:30.067590 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 4 17:36:30.067695 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 17:36:30.067712 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 4 17:36:30.067727 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 17:36:30.067742 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 4 17:36:30.067757 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 4 17:36:30.067816 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 4 17:36:30.067834 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Sep 4 17:36:30.067856 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Sep 4 17:36:30.067878 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Sep 4 17:36:30.067893 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Sep 4 17:36:30.067908 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Sep 4 17:36:30.067925 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Sep 4 17:36:30.067943 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Sep 4 17:36:30.067956 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Sep 4 17:36:30.067971 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Sep 4 17:36:30.067986 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Sep 4 17:36:30.068000 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 4 17:36:30.070162 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 4 17:36:30.070236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 4 17:36:30.070256 kernel: NUMA: Initialized distance table, cnt=1
Sep 4 17:36:30.070270 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Sep 4 17:36:30.070290 kernel: Zone ranges:
Sep 4 17:36:30.070303 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:36:30.070316 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Sep 4 17:36:30.070328 kernel: Normal empty
Sep 4 17:36:30.070340 kernel: Movable zone start for each node
Sep 4 17:36:30.070353 kernel: Early memory node ranges
Sep 4 17:36:30.070366 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 17:36:30.070378 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Sep 4 17:36:30.070391 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Sep 4 17:36:30.070479 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:36:30.070499 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 17:36:30.070590 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Sep 4 17:36:30.070604 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 4 17:36:30.070616 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 17:36:30.070629 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 4 17:36:30.070642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 17:36:30.070655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:36:30.070668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 17:36:30.070856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 17:36:30.070874 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:36:30.070887 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 17:36:30.070900 kernel: TSC deadline timer available
Sep 4 17:36:30.070913 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 17:36:30.071142 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 17:36:30.071155 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Sep 4 17:36:30.071168 kernel: Booting paravirtualized kernel on KVM
Sep 4 17:36:30.071181 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:36:30.071193 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 17:36:30.071211 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep 4 17:36:30.071224 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep 4 17:36:30.071236 kernel: pcpu-alloc: [0] 0 1
Sep 4 17:36:30.071248 kernel: kvm-guest: PV spinlocks enabled
Sep 4 17:36:30.071261 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 17:36:30.071276 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:36:30.071289 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:36:30.071301 kernel: random: crng init done
Sep 4 17:36:30.071316 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:36:30.071601 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 17:36:30.071652 kernel: Fallback order for Node 0: 0
Sep 4 17:36:30.071665 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Sep 4 17:36:30.071678 kernel: Policy zone: DMA32
Sep 4 17:36:30.071690 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:36:30.071703 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 125152K reserved, 0K cma-reserved)
Sep 4 17:36:30.071716 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:36:30.071729 kernel: Kernel/User page tables isolation: enabled
Sep 4 17:36:30.071746 kernel: ftrace: allocating 37748 entries in 148 pages
Sep 4 17:36:30.071758 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:36:30.071771 kernel: Dynamic Preempt: voluntary
Sep 4 17:36:30.072308 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:36:30.072326 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:36:30.072340 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:36:30.072354 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:36:30.072431 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:36:30.072444 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:36:30.072462 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:36:30.072476 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:36:30.072489 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 17:36:30.072503 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:36:30.072517 kernel: Console: colour VGA+ 80x25
Sep 4 17:36:30.072531 kernel: printk: console [ttyS0] enabled
Sep 4 17:36:30.072545 kernel: ACPI: Core revision 20230628
Sep 4 17:36:30.072559 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 4 17:36:30.072657 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:36:30.072675 kernel: x2apic enabled
Sep 4 17:36:30.072689 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 17:36:30.072714 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Sep 4 17:36:30.072732 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Sep 4 17:36:30.072748 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 4 17:36:30.072763 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Sep 4 17:36:30.072777 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:36:30.072790 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:36:30.072804 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:36:30.072819 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:36:30.072876 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 4 17:36:30.072895 kernel: RETBleed: Vulnerable
Sep 4 17:36:30.073105 kernel: Speculative Store Bypass: Vulnerable
Sep 4 17:36:30.073129 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:36:30.073145 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:36:30.073160 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 4 17:36:30.073176 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 17:36:30.073192 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 17:36:30.073208 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 17:36:30.073227 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 4 17:36:30.073242 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 4 17:36:30.073265 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 4 17:36:30.073284 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 4 17:36:30.073299 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 4 17:36:30.073313 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 4 17:36:30.073328 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 17:36:30.073342 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 4 17:36:30.073356 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 4 17:36:30.073372 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 4 17:36:30.073386 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 4 17:36:30.073405 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 4 17:36:30.073421 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 4 17:36:30.073438 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 4 17:36:30.073452 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:36:30.073466 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:36:30.073481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 17:36:30.073496 kernel: landlock: Up and running.
Sep 4 17:36:30.073512 kernel: SELinux: Initializing.
Sep 4 17:36:30.073528 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 17:36:30.073541 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 17:36:30.073557 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 4 17:36:30.073576 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:36:30.073592 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:36:30.073609 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:36:30.073623 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 4 17:36:30.073639 kernel: signal: max sigframe size: 3632
Sep 4 17:36:30.073654 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:36:30.073769 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:36:30.073789 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 17:36:30.073805 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:36:30.073825 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:36:30.073841 kernel: .... node #0, CPUs: #1
Sep 4 17:36:30.073943 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 4 17:36:30.073964 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 4 17:36:30.073981 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:36:30.073998 kernel: smpboot: Max logical packages: 1
Sep 4 17:36:30.079504 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Sep 4 17:36:30.079530 kernel: devtmpfs: initialized
Sep 4 17:36:30.079545 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:36:30.079569 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:36:30.079583 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:36:30.079597 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:36:30.079612 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:36:30.079626 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:36:30.079640 kernel: audit: type=2000 audit(1725471389.138:1): state=initialized audit_enabled=0 res=1
Sep 4 17:36:30.079653 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:36:30.079667 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:36:30.079680 kernel: cpuidle: using governor menu
Sep 4 17:36:30.079698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:36:30.079711 kernel: dca service started, version 1.12.1
Sep 4 17:36:30.079725 kernel: PCI: Using configuration type 1 for base access
Sep 4 17:36:30.079739 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:36:30.079752 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:36:30.079766 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:36:30.079780 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:36:30.079794 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:36:30.079807 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:36:30.079823 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:36:30.079837 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:36:30.079851 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:36:30.079864 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 4 17:36:30.079877 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:36:30.079891 kernel: ACPI: Interpreter enabled
Sep 4 17:36:30.079904 kernel: ACPI: PM: (supports S0 S5)
Sep 4 17:36:30.079917 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:36:30.079931 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:36:30.079947 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 17:36:30.079960 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 4 17:36:30.079974 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:36:30.080236 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:36:30.083737 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 17:36:30.083932 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 17:36:30.083956 kernel: acpiphp: Slot [3] registered
Sep 4 17:36:30.083980 kernel: acpiphp: Slot [4] registered
Sep 4 17:36:30.083997 kernel: acpiphp: Slot [5] registered
Sep 4 17:36:30.084032 kernel: acpiphp: Slot [6] registered
Sep 4 17:36:30.086315 kernel: acpiphp: Slot [7] registered
Sep 4 17:36:30.086336 kernel: acpiphp: Slot [8] registered
Sep 4 17:36:30.086356 kernel: acpiphp: Slot [9] registered
Sep 4 17:36:30.086376 kernel: acpiphp: Slot [10] registered
Sep 4 17:36:30.086395 kernel: acpiphp: Slot [11] registered
Sep 4 17:36:30.086414 kernel: acpiphp: Slot [12] registered
Sep 4 17:36:30.086442 kernel: acpiphp: Slot [13] registered
Sep 4 17:36:30.086462 kernel: acpiphp: Slot [14] registered
Sep 4 17:36:30.086481 kernel: acpiphp: Slot [15] registered
Sep 4 17:36:30.086501 kernel: acpiphp: Slot [16] registered
Sep 4 17:36:30.086520 kernel: acpiphp: Slot [17] registered
Sep 4 17:36:30.086540 kernel: acpiphp: Slot [18] registered
Sep 4 17:36:30.086559 kernel: acpiphp: Slot [19] registered
Sep 4 17:36:30.086578 kernel: acpiphp: Slot [20] registered
Sep 4 17:36:30.086598 kernel: acpiphp: Slot [21] registered
Sep 4 17:36:30.086670 kernel: acpiphp: Slot [22] registered
Sep 4 17:36:30.086699 kernel: acpiphp: Slot [23] registered
Sep 4 17:36:30.086718 kernel: acpiphp: Slot [24] registered
Sep 4 17:36:30.086737 kernel: acpiphp: Slot [25] registered
Sep 4 17:36:30.086757 kernel: acpiphp: Slot [26] registered
Sep 4 17:36:30.086777 kernel: acpiphp: Slot [27] registered
Sep 4 17:36:30.086791 kernel: acpiphp: Slot [28] registered
Sep 4 17:36:30.086810 kernel: acpiphp: Slot [29] registered
Sep 4 17:36:30.086824 kernel: acpiphp: Slot [30] registered
Sep 4 17:36:30.086838 kernel: acpiphp: Slot [31] registered
Sep 4 17:36:30.086853 kernel: PCI host bridge to bus 0000:00
Sep 4 17:36:30.088143 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 17:36:30.088297 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 17:36:30.089637 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 17:36:30.089777 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 4 17:36:30.089896 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:36:30.090064 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 17:36:30.090216 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 17:36:30.090356 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 4 17:36:30.090482 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 4 17:36:30.090756 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Sep 4 17:36:30.090897 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 4 17:36:30.093084 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 4 17:36:30.093418 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 4 17:36:30.093572 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 4 17:36:30.093703 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 4 17:36:30.093834 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 4 17:36:30.093980 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 4 17:36:30.100207 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Sep 4 17:36:30.100366 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 4 17:36:30.100569 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 17:36:30.100721 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 17:36:30.100851 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Sep 4 17:36:30.100989 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 17:36:30.101231 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Sep 4 17:36:30.101253 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 17:36:30.101269 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 17:36:30.101283 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 17:36:30.101303 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 17:36:30.101318 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 17:36:30.101332 kernel: iommu: Default domain type: Translated
Sep 4 17:36:30.101346 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:36:30.101361 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:36:30.101375 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 17:36:30.101389 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 17:36:30.101407 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Sep 4 17:36:30.101533 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 4 17:36:30.101662 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 4 17:36:30.101787 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 17:36:30.101805 kernel: vgaarb: loaded
Sep 4 17:36:30.101819 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 4 17:36:30.101833 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 4 17:36:30.101848 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 17:36:30.101862 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:36:30.101876 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:36:30.101894 kernel: pnp: PnP ACPI init
Sep 4 17:36:30.101907 kernel: pnp: PnP ACPI: found 5 devices
Sep 4 17:36:30.101921 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:36:30.101935 kernel: NET: Registered PF_INET protocol family
Sep 4 17:36:30.101948 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:36:30.101961 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 17:36:30.101975 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:36:30.101988 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:36:30.102001 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 17:36:30.102040 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 17:36:30.102055 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 17:36:30.102069 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 17:36:30.102083 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:36:30.102097 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:36:30.102219 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 17:36:30.102338 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 17:36:30.102452 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 17:36:30.102570 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 4 17:36:30.102748 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 17:36:30.102768 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:36:30.102782 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 17:36:30.102797 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Sep 4 17:36:30.102810 kernel: clocksource: Switched to clocksource tsc
Sep 4 17:36:30.102824 kernel: Initialise system trusted keyrings
Sep 4 17:36:30.102837 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 17:36:30.102855 kernel: Key type asymmetric registered
Sep 4 17:36:30.102867 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:36:30.102881 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:36:30.102895 kernel: io scheduler mq-deadline registered
Sep 4 17:36:30.102909 kernel: io scheduler kyber registered
Sep 4 17:36:30.102923 kernel: io scheduler bfq registered
Sep 4 17:36:30.102936 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:36:30.102950 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:36:30.102965 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:36:30.102981 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 17:36:30.102995 kernel: i8042: Warning: Keylock active
Sep 4 17:36:30.103010 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 17:36:30.103198 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 17:36:30.103513 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 4 17:36:30.103647 kernel: rtc_cmos 00:00: registered as rtc0
Sep 4 17:36:30.103765 kernel: rtc_cmos 00:00: setting system clock to 2024-09-04T17:36:29 UTC (1725471389)
Sep 4 17:36:30.103887 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 4 17:36:30.103912 kernel: intel_pstate: CPU model not supported
Sep 4 17:36:30.103926 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:36:30.103939 kernel: Segment Routing with IPv6
Sep 4 17:36:30.104115 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:36:30.104130 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:36:30.104143 kernel: Key type dns_resolver registered
Sep 4 17:36:30.104157 kernel: IPI shorthand broadcast: enabled
Sep 4 17:36:30.104171 kernel: sched_clock: Marking stable (729001629, 312774696)->(1240763807, -198987482)
Sep 4 17:36:30.104184 kernel: registered taskstats version 1
Sep 4 17:36:30.104204 kernel: Loading compiled-in X.509 certificates
Sep 4 17:36:30.104217 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18'
Sep 4 17:36:30.104231 kernel: Key type .fscrypt registered
Sep 4 17:36:30.104245 kernel: Key type fscrypt-provisioning registered
Sep 4 17:36:30.104259 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:36:30.104273 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:36:30.104285 kernel: ima: No architecture policies found
Sep 4 17:36:30.104298 kernel: clk: Disabling unused clocks
Sep 4 17:36:30.104312 kernel: Freeing unused kernel image (initmem) memory: 42704K
Sep 4 17:36:30.104329 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:36:30.104343 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K
Sep 4 17:36:30.104357 kernel: Run /init as init process
Sep 4 17:36:30.104370 kernel: with arguments:
Sep 4 17:36:30.104422 kernel: /init
Sep 4 17:36:30.104436 kernel: with environment:
Sep 4 17:36:30.104449 kernel: HOME=/
Sep 4 17:36:30.104463 kernel: TERM=linux
Sep 4 17:36:30.104476 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:36:30.104500 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:36:30.104531 systemd[1]: Detected virtualization amazon.
Sep 4 17:36:30.104549 systemd[1]: Detected architecture x86-64.
Sep 4 17:36:30.104564 systemd[1]: Running in initrd.
Sep 4 17:36:30.104578 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:36:30.104596 systemd[1]: Hostname set to .
Sep 4 17:36:30.104611 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:36:30.104625 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:36:30.104639 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:36:30.104654 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:36:30.104671 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:36:30.104686 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:36:30.104701 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:36:30.104719 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:36:30.104736 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:36:30.104751 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:36:30.104766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:36:30.104781 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:36:30.104863 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:36:30.104878 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:36:30.104897 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:36:30.104912 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:36:30.104927 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:36:30.104943 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:36:30.104958 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:36:30.104974 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:36:30.104989 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:36:30.105004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:36:30.106112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:36:30.106138 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:36:30.106155 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 17:36:30.106177 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:36:30.106197 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:36:30.106211 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:36:30.106227 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:36:30.106245 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:36:30.106263 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:36:30.106278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:36:30.106333 systemd-journald[178]: Collecting audit messages is disabled.
Sep 4 17:36:30.106368 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:36:30.106387 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:36:30.106402 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:36:30.106418 systemd-journald[178]: Journal started
Sep 4 17:36:30.106453 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2972fc79ee37158ec4ca05ce52ecb0) is 4.8M, max 38.6M, 33.7M free.
Sep 4 17:36:30.112574 systemd-modules-load[179]: Inserted module 'overlay'
Sep 4 17:36:30.250808 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:36:30.250848 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:36:30.250869 kernel: Bridge firewalling registered
Sep 4 17:36:30.176955 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 4 17:36:30.253106 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:36:30.253836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:36:30.260545 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:36:30.265109 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:36:30.269916 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:36:30.275277 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:36:30.287777 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:36:30.299195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:36:30.305638 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:36:30.320265 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:36:30.324398 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:36:30.334239 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:36:30.342327 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:36:30.345375 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:36:30.379904 dracut-cmdline[212]: dracut-dracut-053
Sep 4 17:36:30.384692 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:36:30.422843 systemd-resolved[213]: Positive Trust Anchors:
Sep 4 17:36:30.422868 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:36:30.422918 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:36:30.440072 systemd-resolved[213]: Defaulting to hostname 'linux'.
Sep 4 17:36:30.442666 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:36:30.445802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:36:30.492046 kernel: SCSI subsystem initialized
Sep 4 17:36:30.505055 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:36:30.517045 kernel: iscsi: registered transport (tcp)
Sep 4 17:36:30.542148 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:36:30.542227 kernel: QLogic iSCSI HBA Driver
Sep 4 17:36:30.589588 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:36:30.595185 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:36:30.629503 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:36:30.629611 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:36:30.629634 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:36:30.695074 kernel: raid6: avx512x4 gen() 13362 MB/s
Sep 4 17:36:30.695154 kernel: raid6: avx512x2 gen() 13384 MB/s
Sep 4 17:36:30.713077 kernel: raid6: avx512x1 gen() 13385 MB/s
Sep 4 17:36:30.730056 kernel: raid6: avx2x4 gen() 13290 MB/s
Sep 4 17:36:30.747071 kernel: raid6: avx2x2 gen() 12632 MB/s
Sep 4 17:36:30.764214 kernel: raid6: avx2x1 gen() 9001 MB/s
Sep 4 17:36:30.764287 kernel: raid6: using algorithm avx512x1 gen() 13385 MB/s
Sep 4 17:36:30.785166 kernel: raid6: .... xor() 10478 MB/s, rmw enabled
Sep 4 17:36:30.785299 kernel: raid6: using avx512x2 recovery algorithm
Sep 4 17:36:30.843312 kernel: xor: automatically using best checksumming function avx
Sep 4 17:36:31.108043 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:36:31.120791 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:36:31.131213 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:36:31.162836 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Sep 4 17:36:31.168969 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:36:31.182299 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:36:31.215590 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Sep 4 17:36:31.251861 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:36:31.259232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:36:31.331279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:36:31.344242 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:36:31.387632 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:36:31.395524 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:36:31.399214 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:36:31.407555 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:36:31.425386 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:36:31.454810 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:36:31.497973 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:36:31.498310 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:36:31.500080 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:36:31.516058 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 4 17:36:31.548075 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:0d:fa:84:34:0b
Sep 4 17:36:31.548678 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:36:31.544329 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:36:31.555417 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:36:31.555455 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:36:31.555707 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 17:36:31.544549 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:36:31.557554 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:36:31.564501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:36:31.564708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:36:31.566559 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:36:31.576455 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:36:31.578524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:36:31.585965 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:36:31.585999 kernel: GPT:9289727 != 16777215
Sep 4 17:36:31.586066 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:36:31.586086 kernel: GPT:9289727 != 16777215
Sep 4 17:36:31.586111 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:36:31.586130 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:36:31.592840 (udev-worker)[445]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:36:31.679065 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (459)
Sep 4 17:36:31.685036 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (454)
Sep 4 17:36:31.793717 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:36:31.811970 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:36:31.823372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:36:31.858943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:36:31.868580 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:36:31.869110 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:36:31.889205 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:36:31.889343 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 17:36:31.898190 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:36:31.911047 disk-uuid[629]: Primary Header is updated.
Sep 4 17:36:31.911047 disk-uuid[629]: Secondary Entries is updated.
Sep 4 17:36:31.911047 disk-uuid[629]: Secondary Header is updated.
Sep 4 17:36:31.916128 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:36:31.921107 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:36:31.930079 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:36:32.927373 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:36:32.933529 disk-uuid[630]: The operation has completed successfully.
Sep 4 17:36:33.131503 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:36:33.131628 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:36:33.160318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:36:33.167154 sh[973]: Success
Sep 4 17:36:33.196275 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 4 17:36:33.303589 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:36:33.319176 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:36:33.321548 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:36:33.358663 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772
Sep 4 17:36:33.358732 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:36:33.358762 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:36:33.359652 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:36:33.360288 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:36:33.469049 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:36:33.513306 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:36:33.514276 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:36:33.522458 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:36:33.526083 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:36:33.558645 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:36:33.558721 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:36:33.558742 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:36:33.567214 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:36:33.596729 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:36:33.596002 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:36:33.612729 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:36:33.629404 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:36:33.778377 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:36:33.788237 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:36:33.826332 systemd-networkd[1178]: lo: Link UP
Sep 4 17:36:33.826343 systemd-networkd[1178]: lo: Gained carrier
Sep 4 17:36:33.829683 systemd-networkd[1178]: Enumeration completed
Sep 4 17:36:33.830043 systemd-networkd[1178]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:36:33.830048 systemd-networkd[1178]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:36:33.833712 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:36:33.837275 systemd[1]: Reached target network.target - Network.
Sep 4 17:36:33.837761 systemd-networkd[1178]: eth0: Link UP
Sep 4 17:36:33.837765 systemd-networkd[1178]: eth0: Gained carrier
Sep 4 17:36:33.837778 systemd-networkd[1178]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:36:33.852105 systemd-networkd[1178]: eth0: DHCPv4 address 172.31.29.194/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:36:34.074294 ignition[1097]: Ignition 2.19.0
Sep 4 17:36:34.074309 ignition[1097]: Stage: fetch-offline
Sep 4 17:36:34.074587 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:36:34.074600 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:36:34.075367 ignition[1097]: Ignition finished successfully
Sep 4 17:36:34.083009 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:36:34.093264 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:36:34.118332 ignition[1186]: Ignition 2.19.0
Sep 4 17:36:34.118354 ignition[1186]: Stage: fetch
Sep 4 17:36:34.118961 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:36:34.119046 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:36:34.119283 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:36:34.129741 ignition[1186]: PUT result: OK
Sep 4 17:36:34.132122 ignition[1186]: parsed url from cmdline: ""
Sep 4 17:36:34.132132 ignition[1186]: no config URL provided
Sep 4 17:36:34.132143 ignition[1186]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:36:34.132169 ignition[1186]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:36:34.132192 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:36:34.134546 ignition[1186]: PUT result: OK
Sep 4 17:36:34.134759 ignition[1186]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:36:34.136533 ignition[1186]: GET result: OK
Sep 4 17:36:34.136609 ignition[1186]: parsing config with SHA512: c36e381f8483d3b619835edc0f38cd3bbaefd013a74be3dd9271543042aa24d8667b6a03c492b56b59634cf605677f95005cf6060619485c2224dff0852369c0
Sep 4 17:36:34.145154 unknown[1186]: fetched base config from "system"
Sep 4 17:36:34.145177 unknown[1186]: fetched base config from "system"
Sep 4 17:36:34.145187 unknown[1186]: fetched user config from "aws"
Sep 4 17:36:34.147735 ignition[1186]: fetch: fetch complete
Sep 4 17:36:34.147744 ignition[1186]: fetch: fetch passed
Sep 4 17:36:34.147819 ignition[1186]: Ignition finished successfully
Sep 4 17:36:34.153914 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:36:34.161278 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:36:34.184619 ignition[1192]: Ignition 2.19.0
Sep 4 17:36:34.184633 ignition[1192]: Stage: kargs
Sep 4 17:36:34.185125 ignition[1192]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:36:34.185138 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:36:34.185252 ignition[1192]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:36:34.187999 ignition[1192]: PUT result: OK
Sep 4 17:36:34.220878 ignition[1192]: kargs: kargs passed
Sep 4 17:36:34.220994 ignition[1192]: Ignition finished successfully
Sep 4 17:36:34.223542 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:36:34.230660 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:36:34.251714 ignition[1198]: Ignition 2.19.0
Sep 4 17:36:34.251728 ignition[1198]: Stage: disks
Sep 4 17:36:34.252465 ignition[1198]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:36:34.252479 ignition[1198]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:36:34.252597 ignition[1198]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:36:34.253920 ignition[1198]: PUT result: OK
Sep 4 17:36:34.261477 ignition[1198]: disks: disks passed
Sep 4 17:36:34.261565 ignition[1198]: Ignition finished successfully
Sep 4 17:36:34.264583 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:36:34.266951 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:36:34.271663 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:36:34.274461 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:36:34.277236 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:36:34.278982 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:36:34.290290 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:36:34.362232 systemd-fsck[1206]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:36:34.366985 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:36:34.381757 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:36:34.542043 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none.
Sep 4 17:36:34.542896 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:36:34.544841 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:36:34.566209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:36:34.594946 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:36:34.613772 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:36:34.614073 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:36:34.614112 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:36:34.625047 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1225)
Sep 4 17:36:34.629049 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:36:34.629119 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:36:34.629199 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:36:34.634780 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:36:34.638210 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:36:34.640830 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:36:34.653396 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:36:34.925744 initrd-setup-root[1249]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:36:34.959603 initrd-setup-root[1256]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:36:34.969975 initrd-setup-root[1263]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:36:34.978567 initrd-setup-root[1270]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:36:35.290611 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:36:35.297184 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:36:35.306446 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:36:35.320324 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:36:35.319130 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:36:35.388973 ignition[1342]: INFO : Ignition 2.19.0
Sep 4 17:36:35.388973 ignition[1342]: INFO : Stage: mount
Sep 4 17:36:35.391157 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:36:35.391157 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:36:35.391157 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:36:35.395657 ignition[1342]: INFO : PUT result: OK
Sep 4 17:36:35.396891 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:36:35.401186 ignition[1342]: INFO : mount: mount passed
Sep 4 17:36:35.403005 ignition[1342]: INFO : Ignition finished successfully
Sep 4 17:36:35.404925 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:36:35.410382 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:36:35.549709 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:36:35.566099 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1356)
Sep 4 17:36:35.568815 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:36:35.568882 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:36:35.568901 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:36:35.573110 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:36:35.573176 systemd-networkd[1178]: eth0: Gained IPv6LL
Sep 4 17:36:35.575804 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:36:35.603854 ignition[1373]: INFO : Ignition 2.19.0
Sep 4 17:36:35.603854 ignition[1373]: INFO : Stage: files
Sep 4 17:36:35.606224 ignition[1373]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:36:35.606224 ignition[1373]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:36:35.606224 ignition[1373]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:36:35.610985 ignition[1373]: INFO : PUT result: OK
Sep 4 17:36:35.616730 ignition[1373]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:36:35.618243 ignition[1373]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:36:35.618243 ignition[1373]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:36:35.646460 ignition[1373]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:36:35.648296 ignition[1373]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:36:35.648296 ignition[1373]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:36:35.647054 unknown[1373]: wrote ssh authorized keys file for user: core
Sep 4 17:36:35.661894 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:36:35.667445 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:36:35.763279 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:36:35.887752 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:36:35.887752 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:36:35.894099 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Sep 4 17:36:36.194913 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:36:36.521480 ignition[1373]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:36:36.521480 ignition[1373]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:36:36.525253 ignition[1373]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:36:36.527746 ignition[1373]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:36:36.527746 ignition[1373]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:36:36.531904 ignition[1373]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:36:36.531904 ignition[1373]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:36:36.539457 ignition[1373]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:36:36.539457 ignition[1373]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:36:36.543392 ignition[1373]: INFO : files: files passed
Sep 4 17:36:36.543392 ignition[1373]: INFO : Ignition finished successfully
Sep 4 17:36:36.548505 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:36:36.555206 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:36:36.567343 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:36:36.574308 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:36:36.574423 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:36:36.589299 initrd-setup-root-after-ignition[1401]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:36:36.589299 initrd-setup-root-after-ignition[1401]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:36:36.593575 initrd-setup-root-after-ignition[1405]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:36:36.598929 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:36:36.599290 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:36:36.608410 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:36:36.648810 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:36:36.648923 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:36:36.653515 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:36:36.656073 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:36:36.658291 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:36:36.665222 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:36:36.681550 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:36:36.689262 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:36:36.735538 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:36:36.738102 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:36:36.738349 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:36:36.742458 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:36:36.742595 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:36:36.747081 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:36:36.748307 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:36:36.751575 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:36:36.754050 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:36:36.756514 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:36:36.758749 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:36:36.761005 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:36:36.763920 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:36:36.766201 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:36:36.768496 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:36:36.770385 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:36:36.771577 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:36:36.774033 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:36:36.776949 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:36:36.779735 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:36:36.781055 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:36:36.784136 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:36:36.785706 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:36:36.788421 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:36:36.789622 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:36:36.792199 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:36:36.793261 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:36:36.804301 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:36:36.811355 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:36:36.812959 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:36:36.813294 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:36:36.814964 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:36:36.815158 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:36:36.829082 ignition[1425]: INFO : Ignition 2.19.0
Sep 4 17:36:36.829082 ignition[1425]: INFO : Stage: umount
Sep 4 17:36:36.832620 ignition[1425]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:36:36.832620 ignition[1425]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:36:36.832620 ignition[1425]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:36:36.832620 ignition[1425]: INFO : PUT result: OK
Sep 4 17:36:36.839949 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:36:36.841303 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:36:36.849243 ignition[1425]: INFO : umount: umount passed
Sep 4 17:36:36.849243 ignition[1425]: INFO : Ignition finished successfully
Sep 4 17:36:36.854564 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:36:36.856125 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:36:36.860728 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:36:36.862607 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:36:36.862702 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:36:36.865935 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:36:36.865990 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:36:36.867196 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:36:36.867239 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:36:36.868499 systemd[1]: Stopped target network.target - Network.
Sep 4 17:36:36.869506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:36:36.869574 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:36:36.872324 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:36:36.873287 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:36:36.878483 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:36:36.881772 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:36:36.883676 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:36:36.885989 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:36:36.886072 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:36:36.888714 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:36:36.888760 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:36:36.889919 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:36:36.890151 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:36:36.892718 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:36:36.892766 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:36:36.901285 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:36:36.904246 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:36:36.907645 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:36:36.907747 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:36:36.913056 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:36:36.913155 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:36:36.916114 systemd-networkd[1178]: eth0: DHCPv6 lease lost
Sep 4 17:36:36.918603 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:36:36.919947 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:36:36.925561 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:36:36.925733 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:36:36.934528 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:36:36.934600 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:36:36.943143 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:36:36.944215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:36:36.944282 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:36:36.945730 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:36:36.945783 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:36:36.947080 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:36:36.947127 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:36:36.948292 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:36:36.948334 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:36:36.951266 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:36:37.006830 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:36:37.007048 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:36:37.013490 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:36:37.013571 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:36:37.015425 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:36:37.015483 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:36:37.017547 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:36:37.017621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:36:37.027037 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:36:37.027101 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:36:37.028451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:36:37.028498 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:36:37.039188 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:36:37.040508 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:36:37.040572 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:36:37.042233 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:36:37.042285 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:36:37.043567 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:36:37.043615 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:36:37.045059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:36:37.045104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:36:37.046735 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:36:37.048112 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:36:37.053484 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:36:37.057283 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:36:37.059484 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:36:37.072840 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:36:37.080121 systemd[1]: Switching root.
Sep 4 17:36:37.115517 systemd-journald[178]: Journal stopped
Sep 4 17:36:39.711419 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:36:39.711530 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:36:39.711552 kernel: SELinux: policy capability open_perms=1
Sep 4 17:36:39.711571 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:36:39.711589 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:36:39.711613 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:36:39.711640 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:36:39.711658 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:36:39.711675 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:36:39.711693 kernel: audit: type=1403 audit(1725471398.078:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:36:39.711712 systemd[1]: Successfully loaded SELinux policy in 78.409ms.
Sep 4 17:36:39.711737 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.840ms.
Sep 4 17:36:39.711757 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:36:39.711777 systemd[1]: Detected virtualization amazon.
Sep 4 17:36:39.711797 systemd[1]: Detected architecture x86-64.
Sep 4 17:36:39.711819 systemd[1]: Detected first boot.
Sep 4 17:36:39.711838 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:36:39.711856 zram_generator::config[1467]: No configuration found.
Sep 4 17:36:39.711876 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:36:39.711896 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:36:39.711914 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:36:39.711933 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:36:39.711954 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:36:39.711975 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:36:39.711993 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:36:39.714055 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:36:39.714113 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:36:39.714135 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:36:39.714156 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:36:39.714175 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:36:39.714194 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:36:39.714221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:36:39.714240 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:36:39.714260 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:36:39.714281 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:36:39.714301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:36:39.714319 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:36:39.714336 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:36:39.714354 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:36:39.714378 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:36:39.714403 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:36:39.714422 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:36:39.714440 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:36:39.714459 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:36:39.714477 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:36:39.714496 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:36:39.714568 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:36:39.714588 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:36:39.714611 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:36:39.714630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:36:39.714649 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:36:39.714668 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:36:39.714688 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:36:39.714731 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:36:39.714751 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:36:39.714770 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:36:39.714790 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:36:39.714970 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:36:39.715002 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:36:39.715038 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:36:39.715058 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:36:39.715079 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:36:39.715101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:36:39.715119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:36:39.715184 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:36:39.715259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:36:39.715279 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:36:39.715298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:36:39.715317 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:36:39.715337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:36:39.715357 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:36:39.715377 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:36:39.715397 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:36:39.715416 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:36:39.715438 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:36:39.715458 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:36:39.715478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:36:39.715499 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:36:39.715525 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:36:39.715546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:36:39.715565 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:36:39.715586 systemd[1]: Stopped verity-setup.service.
Sep 4 17:36:39.715609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:36:39.715634 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:36:39.715654 kernel: fuse: init (API version 7.39)
Sep 4 17:36:39.715675 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:36:39.715696 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:36:39.715715 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:36:39.715735 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:36:39.715758 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:36:39.715778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:36:39.715799 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:36:39.715819 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:36:39.715839 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:36:39.715861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:36:39.715885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:36:39.715911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:36:39.715934 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:36:39.715956 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:36:39.715982 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:36:39.716004 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:36:39.722435 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:36:39.722522 systemd-journald[1541]: Collecting audit messages is disabled.
Sep 4 17:36:39.722566 kernel: loop: module loaded
Sep 4 17:36:39.722592 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:36:39.722616 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:36:39.722639 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:36:39.722664 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:36:39.722687 systemd-journald[1541]: Journal started
Sep 4 17:36:39.722735 systemd-journald[1541]: Runtime Journal (/run/log/journal/ec2972fc79ee37158ec4ca05ce52ecb0) is 4.8M, max 38.6M, 33.7M free.
Sep 4 17:36:39.132985 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:36:39.151864 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 17:36:39.152388 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:36:39.727029 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:36:39.742048 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:36:39.742135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:36:39.763678 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:36:39.763843 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:36:39.783238 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:36:39.792709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:36:39.809145 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:36:39.812827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:36:39.818038 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:36:39.822740 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:36:39.823109 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:36:39.825410 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:36:39.827177 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:36:39.828960 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:36:39.830824 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:36:39.838057 kernel: ACPI: bus type drm_connector registered
Sep 4 17:36:39.841878 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:36:39.844475 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:36:39.869538 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:36:39.899583 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:36:39.921739 kernel: loop0: detected capacity change from 0 to 61336
Sep 4 17:36:39.909444 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:36:39.910938 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:36:39.925296 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:36:39.930413 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:36:39.931814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:36:39.950074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:36:39.965372 systemd-journald[1541]: Time spent on flushing to /var/log/journal/ec2972fc79ee37158ec4ca05ce52ecb0 is 87.930ms for 966 entries.
Sep 4 17:36:39.965372 systemd-journald[1541]: System Journal (/var/log/journal/ec2972fc79ee37158ec4ca05ce52ecb0) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:36:40.077444 systemd-journald[1541]: Received client request to flush runtime journal.
Sep 4 17:36:40.077508 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:36:40.077537 kernel: loop1: detected capacity change from 0 to 140728
Sep 4 17:36:39.976252 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:36:39.991251 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:36:40.036334 systemd-tmpfiles[1575]: ACLs are not supported, ignoring.
Sep 4 17:36:40.036358 systemd-tmpfiles[1575]: ACLs are not supported, ignoring.
Sep 4 17:36:40.044979 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:36:40.050500 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:36:40.064396 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:36:40.079763 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:36:40.082304 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:36:40.089125 udevadm[1604]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:36:40.200420 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:36:40.213432 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:36:40.227042 kernel: loop2: detected capacity change from 0 to 89336
Sep 4 17:36:40.241350 systemd-tmpfiles[1616]: ACLs are not supported, ignoring.
Sep 4 17:36:40.241735 systemd-tmpfiles[1616]: ACLs are not supported, ignoring.
Sep 4 17:36:40.250705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:36:40.352051 kernel: loop3: detected capacity change from 0 to 209816
Sep 4 17:36:40.398119 kernel: loop4: detected capacity change from 0 to 61336
Sep 4 17:36:40.422084 kernel: loop5: detected capacity change from 0 to 140728
Sep 4 17:36:40.452082 kernel: loop6: detected capacity change from 0 to 89336
Sep 4 17:36:40.470039 kernel: loop7: detected capacity change from 0 to 209816
Sep 4 17:36:40.485298 (sd-merge)[1621]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 17:36:40.486006 (sd-merge)[1621]: Merged extensions into '/usr'.
Sep 4 17:36:40.492420 systemd[1]: Reloading requested from client PID 1574 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:36:40.492440 systemd[1]: Reloading...
Sep 4 17:36:40.634294 zram_generator::config[1642]: No configuration found.
Sep 4 17:36:41.004499 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:36:41.155139 systemd[1]: Reloading finished in 655 ms.
Sep 4 17:36:41.185572 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:36:41.194281 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:36:41.206466 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:36:41.229997 systemd[1]: Reloading requested from client PID 1693 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:36:41.230040 systemd[1]: Reloading...
Sep 4 17:36:41.264597 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:36:41.266641 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:36:41.271502 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:36:41.271968 systemd-tmpfiles[1694]: ACLs are not supported, ignoring.
Sep 4 17:36:41.272101 systemd-tmpfiles[1694]: ACLs are not supported, ignoring.
Sep 4 17:36:41.280565 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:36:41.280585 systemd-tmpfiles[1694]: Skipping /boot
Sep 4 17:36:41.349490 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:36:41.365282 systemd-tmpfiles[1694]: Skipping /boot
Sep 4 17:36:41.385041 zram_generator::config[1720]: No configuration found.
Sep 4 17:36:41.575271 ldconfig[1563]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:36:41.663399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:36:41.756280 systemd[1]: Reloading finished in 525 ms.
Sep 4 17:36:41.804286 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:36:41.816798 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:36:41.835679 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:36:41.848305 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:36:41.853804 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:36:41.860234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:36:41.873932 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:36:41.883874 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:36:41.906846 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:36:41.913283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:36:41.913587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:36:41.922505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:36:41.926965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:36:41.937713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:36:41.939218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:36:41.950488 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:36:41.951720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:36:41.953283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:36:41.954814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:36:41.971646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:36:41.972060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:36:41.984246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:36:41.985707 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:36:41.985995 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:36:41.991236 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:36:42.007855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:36:42.008961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:36:42.020377 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:36:42.022642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:36:42.022941 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:36:42.024940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:36:42.028746 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:36:42.028971 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:36:42.038140 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:36:42.040762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:36:42.040959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:36:42.046836 systemd-udevd[1779]: Using default interface naming scheme 'v255'.
Sep 4 17:36:42.049434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:36:42.058272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:36:42.060112 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:36:42.061761 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:36:42.066059 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:36:42.067121 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:36:42.077870 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:36:42.079959 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:36:42.083304 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:36:42.096366 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:36:42.106009 augenrules[1809]: No rules
Sep 4 17:36:42.109747 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:36:42.113184 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:36:42.139452 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:36:42.145687 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:36:42.154388 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:36:42.279996 systemd-resolved[1775]: Positive Trust Anchors:
Sep 4 17:36:42.283301 systemd-resolved[1775]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:36:42.283376 systemd-resolved[1775]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:36:42.306804 systemd-resolved[1775]: Defaulting to hostname 'linux'.
Sep 4 17:36:42.316070 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:36:42.320636 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:36:42.332617 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:36:42.335072 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1823)
Sep 4 17:36:42.337921 (udev-worker)[1827]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:36:42.342156 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1823)
Sep 4 17:36:42.347579 systemd-networkd[1825]: lo: Link UP
Sep 4 17:36:42.347591 systemd-networkd[1825]: lo: Gained carrier
Sep 4 17:36:42.349967 systemd-networkd[1825]: Enumeration completed
Sep 4 17:36:42.351332 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:36:42.351549 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:36:42.351555 systemd-networkd[1825]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:36:42.353025 systemd[1]: Reached target network.target - Network.
Sep 4 17:36:42.359519 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:36:42.363700 systemd-networkd[1825]: eth0: Link UP
Sep 4 17:36:42.365363 systemd-networkd[1825]: eth0: Gained carrier
Sep 4 17:36:42.365458 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:36:42.374127 systemd-networkd[1825]: eth0: DHCPv4 address 172.31.29.194/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:36:42.481087 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Sep 4 17:36:42.485075 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 4 17:36:42.493731 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:36:42.507466 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 4 17:36:42.521072 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1826)
Sep 4 17:36:42.532045 kernel: ACPI: button: Power Button [PWRF]
Sep 4 17:36:42.536046 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Sep 4 17:36:42.556050 kernel: ACPI: button: Sleep Button [SLPF]
Sep 4 17:36:42.689319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:36:42.725043 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 17:36:42.757877 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:36:42.765289 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:36:42.767669 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:36:42.772368 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:36:42.817471 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:36:42.821308 lvm[1938]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:36:42.946522 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:36:42.948457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:36:42.952295 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:36:42.953702 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:36:42.954938 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:36:42.956656 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:36:42.958457 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:36:42.964694 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:36:42.966957 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:36:42.968527 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:36:42.968639 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:36:42.970926 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:36:42.976202 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:36:42.983743 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:36:42.996712 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:36:42.999686 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:36:43.001841 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:36:43.003231 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:36:43.004785 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:36:43.010167 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:36:43.010196 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:36:43.015981 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:36:43.023427 lvm[1946]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:36:43.037427 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 17:36:43.047368 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:36:43.051611 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:36:43.061653 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:36:43.062854 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:36:43.067655 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:36:43.076274 systemd[1]: Started ntpd.service - Network Time Service.
Sep 4 17:36:43.100352 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:36:43.112293 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 4 17:36:43.130394 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:36:43.152312 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:36:43.157855 jq[1950]: false
Sep 4 17:36:43.164261 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:36:43.168412 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:36:43.169195 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:36:43.174240 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:36:43.193458 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:36:43.197630 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found loop4
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found loop5
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found loop6
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found loop7
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found nvme0n1
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found nvme0n1p3
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found usr
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found nvme0n1p4
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found nvme0n1p6
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found nvme0n1p7
Sep 4 17:36:43.207003 extend-filesystems[1951]: Found nvme0n1p9
Sep 4 17:36:43.207003 extend-filesystems[1951]: Checking size of /dev/nvme0n1p9
Sep 4 17:36:43.260420 jq[1964]: true
Sep 4 17:36:43.239311 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:17:38 UTC 2024 (1): Starting
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: ----------------------------------------------------
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: ntp-4 is maintained by Network Time Foundation,
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: corporation. Support and training for ntp-4 are
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: available at https://www.nwtime.org/support
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: ----------------------------------------------------
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: proto: precision = 0.097 usec (-23)
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: basedate set to 2024-08-23
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: gps base set to 2024-08-25 (week 2329)
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: Listen normally on 3 eth0 172.31.29.194:123
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: Listen normally on 4 lo [::1]:123
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: bind(21) AF_INET6 fe80::40d:faff:fe84:340b%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: unable to create socket on eth0 (5) for fe80::40d:faff:fe84:340b%2#123
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: failed to init interface for address fe80::40d:faff:fe84:340b%2
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: Listening on routing socket on fd #21 for interface updates
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:36:43.287914 ntpd[1953]: 4 Sep 17:36:43 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:36:43.208858 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:17:38 UTC 2024 (1): Starting
Sep 4 17:36:43.239576 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:36:43.208885 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 17:36:43.349057 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:36:43.208896 ntpd[1953]: ----------------------------------------------------
Sep 4 17:36:43.350495 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:36:43.208905 ntpd[1953]: ntp-4 is maintained by Network Time Foundation,
Sep 4 17:36:43.208914 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 17:36:43.208925 ntpd[1953]: corporation. Support and training for ntp-4 are
Sep 4 17:36:43.208935 ntpd[1953]: available at https://www.nwtime.org/support
Sep 4 17:36:43.208944 ntpd[1953]: ----------------------------------------------------
Sep 4 17:36:43.382677 jq[1973]: true
Sep 4 17:36:43.215706 ntpd[1953]: proto: precision = 0.097 usec (-23)
Sep 4 17:36:43.419379 extend-filesystems[1951]: Resized partition /dev/nvme0n1p9
Sep 4 17:36:43.419138 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:36:43.226385 ntpd[1953]: basedate set to 2024-08-23
Sep 4 17:36:43.432942 extend-filesystems[1998]: resize2fs 1.47.1 (20-May-2024)
Sep 4 17:36:43.420402 (ntainerd)[1981]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:36:43.226409 ntpd[1953]: gps base set to 2024-08-25 (week 2329)
Sep 4 17:36:43.234238 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 17:36:43.234292 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 17:36:43.234486 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 17:36:43.234524 ntpd[1953]: Listen normally on 3 eth0 172.31.29.194:123
Sep 4 17:36:43.234565 ntpd[1953]: Listen normally on 4 lo [::1]:123
Sep 4 17:36:43.235634 ntpd[1953]: bind(21) AF_INET6 fe80::40d:faff:fe84:340b%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 17:36:43.235679 ntpd[1953]: unable to create socket on eth0 (5) for fe80::40d:faff:fe84:340b%2#123
Sep 4 17:36:43.235695 ntpd[1953]: failed to init interface for address fe80::40d:faff:fe84:340b%2
Sep 4 17:36:43.235739 ntpd[1953]: Listening on routing socket on fd #21 for interface updates
Sep 4 17:36:43.240925 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:36:43.464821 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 4 17:36:43.440402 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:36:43.471709 update_engine[1961]: I0904 17:36:43.440907 1961 main.cc:92] Flatcar Update Engine starting
Sep 4 17:36:43.241110 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:36:43.440758 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:36:43.409994 dbus-daemon[1949]: [system] SELinux support is enabled
Sep 4 17:36:43.447152 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:36:43.450837 dbus-daemon[1949]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1825 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 4 17:36:43.447230 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:36:43.464643 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 17:36:43.466105 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:36:43.466364 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:36:43.475631 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 4 17:36:43.485582 tar[1970]: linux-amd64/helm
Sep 4 17:36:43.489110 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 4 17:36:43.493095 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:36:43.506050 coreos-metadata[1948]: Sep 04 17:36:43.504 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 17:36:43.513961 coreos-metadata[1948]: Sep 04 17:36:43.508 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 4 17:36:43.510678 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:36:43.519600 update_engine[1961]: I0904 17:36:43.519481 1961 update_check_scheduler.cc:74] Next update check in 8m44s
Sep 4 17:36:43.528046 coreos-metadata[1948]: Sep 04 17:36:43.524 INFO Fetch successful
Sep 4 17:36:43.528046 coreos-metadata[1948]: Sep 04 17:36:43.524 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 4 17:36:43.528046 coreos-metadata[1948]: Sep 04 17:36:43.525 INFO Fetch successful
Sep 4 17:36:43.528046 coreos-metadata[1948]: Sep 04 17:36:43.525 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 4 17:36:43.528046 coreos-metadata[1948]: Sep 04 17:36:43.526 INFO Fetch successful
Sep 4 17:36:43.528046 coreos-metadata[1948]: Sep 04 17:36:43.526 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 4 17:36:43.528046 coreos-metadata[1948]: Sep 04 17:36:43.526 INFO Fetch successful
Sep 4 17:36:43.528046 coreos-metadata[1948]: Sep 04 17:36:43.526 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 4 17:36:43.530044 coreos-metadata[1948]: Sep 04 17:36:43.528 INFO Fetch failed with 404: resource not found
Sep 4 17:36:43.530044 coreos-metadata[1948]: Sep 04 17:36:43.529 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 4 17:36:43.530044 coreos-metadata[1948]: Sep 04 17:36:43.529 INFO Fetch successful
Sep 4 17:36:43.530044 coreos-metadata[1948]: Sep 04 17:36:43.529 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 4 17:36:43.530285 coreos-metadata[1948]: Sep 04 17:36:43.530 INFO Fetch successful
Sep 4 17:36:43.530285 coreos-metadata[1948]: Sep 04 17:36:43.530 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 4 17:36:43.532048 coreos-metadata[1948]: Sep 04 17:36:43.531 INFO Fetch successful
Sep 4 17:36:43.532048 coreos-metadata[1948]: Sep 04 17:36:43.531 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 4 17:36:43.532048 coreos-metadata[1948]: Sep 04 17:36:43.531 INFO Fetch successful
Sep 4 17:36:43.532048 coreos-metadata[1948]: Sep 04 17:36:43.531 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 4 17:36:43.533611 coreos-metadata[1948]: Sep 04 17:36:43.532 INFO Fetch successful
Sep 4 17:36:43.594713 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 4 17:36:43.640304 systemd-networkd[1825]: eth0: Gained IPv6LL
Sep 4 17:36:43.646943 systemd-logind[1960]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 17:36:43.675904 extend-filesystems[1998]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 4 17:36:43.675904 extend-filesystems[1998]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 17:36:43.675904 extend-filesystems[1998]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 4 17:36:43.674586 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:36:43.691519 extend-filesystems[1951]: Resized filesystem in /dev/nvme0n1p9
Sep 4 17:36:43.691519 extend-filesystems[1951]: Found nvme0n1p1
Sep 4 17:36:43.691519 extend-filesystems[1951]: Found nvme0n1p2
Sep 4 17:36:43.675414 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:36:43.677805 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:36:43.680334 systemd-logind[1960]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 4 17:36:43.680363 systemd-logind[1960]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 17:36:43.683205 systemd-logind[1960]: New seat seat0.
Sep 4 17:36:43.693830 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:36:43.697150 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:36:43.719542 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 4 17:36:43.753395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:36:43.761254 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:36:43.762881 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 17:36:43.769605 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:36:43.872635 bash[2035]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:36:43.875450 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:36:43.890305 systemd[1]: Starting sshkeys.service...
Sep 4 17:36:43.933238 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 4 17:36:43.935129 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 4 17:36:43.947085 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2004 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 4 17:36:43.955598 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 4 17:36:44.002152 polkitd[2050]: Started polkitd version 121
Sep 4 17:36:44.033334 amazon-ssm-agent[2032]: Initializing new seelog logger
Sep 4 17:36:44.033679 amazon-ssm-agent[2032]: New Seelog Logger Creation Complete
Sep 4 17:36:44.033679 amazon-ssm-agent[2032]: 2024/09/04 17:36:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:36:44.033679 amazon-ssm-agent[2032]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:36:44.034470 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 17:36:44.045573 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 17:36:44.053458 amazon-ssm-agent[2032]: 2024/09/04 17:36:44 processing appconfig overrides
Sep 4 17:36:44.057559 amazon-ssm-agent[2032]: 2024/09/04 17:36:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:36:44.057559 amazon-ssm-agent[2032]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:36:44.057559 amazon-ssm-agent[2032]: 2024/09/04 17:36:44 processing appconfig overrides
Sep 4 17:36:44.057559 amazon-ssm-agent[2032]: 2024/09/04 17:36:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:36:44.057559 amazon-ssm-agent[2032]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:36:44.057559 amazon-ssm-agent[2032]: 2024/09/04 17:36:44 processing appconfig overrides
Sep 4 17:36:44.074335 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO Proxy environment variables:
Sep 4 17:36:44.075774 polkitd[2050]: Loading rules from directory /etc/polkit-1/rules.d
Sep 4 17:36:44.075865 polkitd[2050]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 4 17:36:44.077999 polkitd[2050]: Finished loading, compiling and executing 2 rules
Sep 4 17:36:44.083484 amazon-ssm-agent[2032]: 2024/09/04 17:36:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:36:44.083622 amazon-ssm-agent[2032]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:36:44.084338 amazon-ssm-agent[2032]: 2024/09/04 17:36:44 processing appconfig overrides
Sep 4 17:36:44.087532 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1836)
Sep 4 17:36:44.088189 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 4 17:36:44.088411 systemd[1]: Started polkit.service - Authorization Manager.
Sep 4 17:36:44.091529 polkitd[2050]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 4 17:36:44.091293 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:36:44.181631 systemd-hostnamed[2004]: Hostname set to (transient)
Sep 4 17:36:44.181647 systemd-resolved[1775]: System hostname changed to 'ip-172-31-29-194'.
Sep 4 17:36:44.185420 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO no_proxy:
Sep 4 17:36:44.310695 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO https_proxy:
Sep 4 17:36:44.411931 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO http_proxy:
Sep 4 17:36:44.433215 locksmithd[2005]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:36:44.466371 coreos-metadata[2061]: Sep 04 17:36:44.466 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 17:36:44.471705 coreos-metadata[2061]: Sep 04 17:36:44.469 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 4 17:36:44.471705 coreos-metadata[2061]: Sep 04 17:36:44.470 INFO Fetch successful
Sep 4 17:36:44.471705 coreos-metadata[2061]: Sep 04 17:36:44.470 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 17:36:44.471705 coreos-metadata[2061]: Sep 04 17:36:44.470 INFO Fetch successful
Sep 4 17:36:44.479524 unknown[2061]: wrote ssh authorized keys file for user: core
Sep 4 17:36:44.513579 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 17:36:44.524045 update-ssh-keys[2141]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:36:44.522251 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 17:36:44.527485 systemd[1]: Finished sshkeys.service.
Sep 4 17:36:44.612152 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO Checking if agent identity type EC2 can be assumed
Sep 4 17:36:44.710046 containerd[1981]: time="2024-09-04T17:36:44.707909032Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20
Sep 4 17:36:44.716221 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO Agent will take identity from EC2
Sep 4 17:36:44.822094 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:36:44.890382 containerd[1981]: time="2024-09-04T17:36:44.890068860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:36:44.894996 containerd[1981]: time="2024-09-04T17:36:44.894936030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:36:44.894996 containerd[1981]: time="2024-09-04T17:36:44.894992073Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:36:44.895173 containerd[1981]: time="2024-09-04T17:36:44.895027350Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:36:44.895266 containerd[1981]: time="2024-09-04T17:36:44.895229948Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:36:44.895316 containerd[1981]: time="2024-09-04T17:36:44.895272439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:36:44.895375 containerd[1981]: time="2024-09-04T17:36:44.895353339Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:36:44.895423 containerd[1981]: time="2024-09-04T17:36:44.895378197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:36:44.895634 containerd[1981]: time="2024-09-04T17:36:44.895607328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:36:44.895683 containerd[1981]: time="2024-09-04T17:36:44.895637242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:36:44.895683 containerd[1981]: time="2024-09-04T17:36:44.895667874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:36:44.895751 containerd[1981]: time="2024-09-04T17:36:44.895684502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:36:44.895814 containerd[1981]: time="2024-09-04T17:36:44.895789592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:36:44.896106 containerd[1981]: time="2024-09-04T17:36:44.896079564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:36:44.896281 containerd[1981]: time="2024-09-04T17:36:44.896254514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:36:44.896354 containerd[1981]: time="2024-09-04T17:36:44.896282820Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:36:44.896406 containerd[1981]: time="2024-09-04T17:36:44.896391195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:36:44.898363 containerd[1981]: time="2024-09-04T17:36:44.896450478Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:36:44.906250 containerd[1981]: time="2024-09-04T17:36:44.906192439Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:36:44.908095 containerd[1981]: time="2024-09-04T17:36:44.908057023Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:36:44.908170 containerd[1981]: time="2024-09-04T17:36:44.908147358Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:36:44.908207 containerd[1981]: time="2024-09-04T17:36:44.908174405Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:36:44.908260 containerd[1981]: time="2024-09-04T17:36:44.908235927Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:36:44.908454 containerd[1981]: time="2024-09-04T17:36:44.908432400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:36:44.910235 containerd[1981]: time="2024-09-04T17:36:44.910200788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:36:44.910406 containerd[1981]: time="2024-09-04T17:36:44.910385447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:36:44.910453 containerd[1981]: time="2024-09-04T17:36:44.910433398Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:36:44.910492 containerd[1981]: time="2024-09-04T17:36:44.910454954Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:36:44.910492 containerd[1981]: time="2024-09-04T17:36:44.910479762Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:36:44.910578 containerd[1981]: time="2024-09-04T17:36:44.910500976Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:36:44.910578 containerd[1981]: time="2024-09-04T17:36:44.910521169Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:36:44.910578 containerd[1981]: time="2024-09-04T17:36:44.910542907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:36:44.910578 containerd[1981]: time="2024-09-04T17:36:44.910570578Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:36:44.910729 containerd[1981]: time="2024-09-04T17:36:44.910590657Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:36:44.910729 containerd[1981]: time="2024-09-04T17:36:44.910610086Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..."
type=io.containerd.service.v1 Sep 4 17:36:44.910729 containerd[1981]: time="2024-09-04T17:36:44.910627558Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:36:44.910729 containerd[1981]: time="2024-09-04T17:36:44.910656295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910729 containerd[1981]: time="2024-09-04T17:36:44.910678208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910729 containerd[1981]: time="2024-09-04T17:36:44.910696912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910729 containerd[1981]: time="2024-09-04T17:36:44.910717212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910735742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910771019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910800435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910820521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910840686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910862739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910879876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910899362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910922994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910945962Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:36:44.910982 containerd[1981]: time="2024-09-04T17:36:44.910975327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.911497 containerd[1981]: time="2024-09-04T17:36:44.910993554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.915592 containerd[1981]: time="2024-09-04T17:36:44.911010347Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:36:44.915592 containerd[1981]: time="2024-09-04T17:36:44.913608747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:36:44.915592 containerd[1981]: time="2024-09-04T17:36:44.913789796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:36:44.915592 containerd[1981]: time="2024-09-04T17:36:44.913812342Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 4 17:36:44.915592 containerd[1981]: time="2024-09-04T17:36:44.913833197Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:36:44.915592 containerd[1981]: time="2024-09-04T17:36:44.913850209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.915592 containerd[1981]: time="2024-09-04T17:36:44.913877507Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:36:44.915592 containerd[1981]: time="2024-09-04T17:36:44.913894489Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:36:44.915592 containerd[1981]: time="2024-09-04T17:36:44.913909589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:36:44.923036 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:36:44.924617 containerd[1981]: time="2024-09-04T17:36:44.924153218Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false 
PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:36:44.927341 containerd[1981]: time="2024-09-04T17:36:44.924906528Z" level=info msg="Connect containerd service" Sep 4 17:36:44.927341 containerd[1981]: time="2024-09-04T17:36:44.924974963Z" level=info msg="using legacy CRI server" Sep 4 17:36:44.927341 containerd[1981]: time="2024-09-04T17:36:44.924986700Z" level=info msg="using experimental NRI 
integration - disable nri plugin to prevent this" Sep 4 17:36:44.927341 containerd[1981]: time="2024-09-04T17:36:44.925218648Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:36:44.927754 containerd[1981]: time="2024-09-04T17:36:44.927721824Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:36:44.927987 containerd[1981]: time="2024-09-04T17:36:44.927936597Z" level=info msg="Start subscribing containerd event" Sep 4 17:36:44.928108 containerd[1981]: time="2024-09-04T17:36:44.928092946Z" level=info msg="Start recovering state" Sep 4 17:36:44.929544 containerd[1981]: time="2024-09-04T17:36:44.929520976Z" level=info msg="Start event monitor" Sep 4 17:36:44.929650 containerd[1981]: time="2024-09-04T17:36:44.929637111Z" level=info msg="Start snapshots syncer" Sep 4 17:36:44.930827 containerd[1981]: time="2024-09-04T17:36:44.930799249Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:36:44.930919 containerd[1981]: time="2024-09-04T17:36:44.930906400Z" level=info msg="Start streaming server" Sep 4 17:36:44.931178 containerd[1981]: time="2024-09-04T17:36:44.929386158Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:36:44.931332 containerd[1981]: time="2024-09-04T17:36:44.931317784Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:36:44.931455 containerd[1981]: time="2024-09-04T17:36:44.931441023Z" level=info msg="containerd successfully booted in 0.237766s" Sep 4 17:36:44.931553 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 17:36:45.020492 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:36:45.080969 sshd_keygen[1997]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:36:45.120666 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 4 17:36:45.157000 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 4 17:36:45.157212 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 17:36:45.157296 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 4 17:36:45.157372 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO [Registrar] Starting registrar module Sep 4 17:36:45.157449 amazon-ssm-agent[2032]: 2024-09-04 17:36:44 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 4 17:36:45.157525 amazon-ssm-agent[2032]: 2024-09-04 17:36:45 INFO [EC2Identity] EC2 registration was successful. Sep 4 17:36:45.157617 amazon-ssm-agent[2032]: 2024-09-04 17:36:45 INFO [CredentialRefresher] credentialRefresher has started Sep 4 17:36:45.157712 amazon-ssm-agent[2032]: 2024-09-04 17:36:45 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 17:36:45.157800 amazon-ssm-agent[2032]: 2024-09-04 17:36:45 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 17:36:45.157794 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:36:45.167509 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:36:45.198641 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:36:45.199381 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:36:45.210399 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 4 17:36:45.220096 amazon-ssm-agent[2032]: 2024-09-04 17:36:45 INFO [CredentialRefresher] Next credential rotation will be in 31.566644280433334 minutes Sep 4 17:36:45.235057 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:36:45.244450 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:36:45.251456 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:36:45.253465 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:36:45.402424 tar[1970]: linux-amd64/LICENSE Sep 4 17:36:45.402852 tar[1970]: linux-amd64/README.md Sep 4 17:36:45.418576 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:36:45.729373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:36:45.731653 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:36:45.737876 systemd[1]: Startup finished in 938ms (kernel) + 8.269s (initrd) + 7.736s (userspace) = 16.943s. 
Sep 4 17:36:45.936709 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:36:46.175244 amazon-ssm-agent[2032]: 2024-09-04 17:36:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 4 17:36:46.213190 ntpd[1953]: Listen normally on 6 eth0 [fe80::40d:faff:fe84:340b%2]:123 Sep 4 17:36:46.213952 ntpd[1953]: 4 Sep 17:36:46 ntpd[1953]: Listen normally on 6 eth0 [fe80::40d:faff:fe84:340b%2]:123 Sep 4 17:36:46.277142 amazon-ssm-agent[2032]: 2024-09-04 17:36:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2209) started Sep 4 17:36:46.377311 amazon-ssm-agent[2032]: 2024-09-04 17:36:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 4 17:36:46.793748 kubelet[2199]: E0904 17:36:46.793664 2199 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:36:46.796698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:36:46.796906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:36:46.797284 systemd[1]: kubelet.service: Consumed 1.094s CPU time. Sep 4 17:36:51.011952 systemd-resolved[1775]: Clock change detected. Flushing caches. Sep 4 17:36:53.524798 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:36:53.530672 systemd[1]: Started sshd@0-172.31.29.194:22-139.178.68.195:57068.service - OpenSSH per-connection server daemon (139.178.68.195:57068). 
Sep 4 17:36:53.713516 sshd[2223]: Accepted publickey for core from 139.178.68.195 port 57068 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:53.715798 sshd[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:53.732496 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:36:53.741702 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:36:53.746012 systemd-logind[1960]: New session 1 of user core. Sep 4 17:36:53.768710 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:36:53.779607 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:36:53.794434 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:36:54.027161 systemd[2227]: Queued start job for default target default.target. Sep 4 17:36:54.036475 systemd[2227]: Created slice app.slice - User Application Slice. Sep 4 17:36:54.036528 systemd[2227]: Reached target paths.target - Paths. Sep 4 17:36:54.036550 systemd[2227]: Reached target timers.target - Timers. Sep 4 17:36:54.038070 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:36:54.059861 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:36:54.060231 systemd[2227]: Reached target sockets.target - Sockets. Sep 4 17:36:54.060367 systemd[2227]: Reached target basic.target - Basic System. Sep 4 17:36:54.060584 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:36:54.061175 systemd[2227]: Reached target default.target - Main User Target. Sep 4 17:36:54.061260 systemd[2227]: Startup finished in 255ms. Sep 4 17:36:54.075458 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 4 17:36:54.251757 systemd[1]: Started sshd@1-172.31.29.194:22-139.178.68.195:57078.service - OpenSSH per-connection server daemon (139.178.68.195:57078). Sep 4 17:36:54.417945 sshd[2238]: Accepted publickey for core from 139.178.68.195 port 57078 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:54.420485 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:54.426536 systemd-logind[1960]: New session 2 of user core. Sep 4 17:36:54.434441 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:36:54.557912 sshd[2238]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:54.563052 systemd[1]: sshd@1-172.31.29.194:22-139.178.68.195:57078.service: Deactivated successfully. Sep 4 17:36:54.565509 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:36:54.566823 systemd-logind[1960]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:36:54.568083 systemd-logind[1960]: Removed session 2. Sep 4 17:36:54.597694 systemd[1]: Started sshd@2-172.31.29.194:22-139.178.68.195:57084.service - OpenSSH per-connection server daemon (139.178.68.195:57084). Sep 4 17:36:54.763255 sshd[2245]: Accepted publickey for core from 139.178.68.195 port 57084 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:54.764600 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:54.771050 systemd-logind[1960]: New session 3 of user core. Sep 4 17:36:54.776474 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:36:54.902235 sshd[2245]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:54.908009 systemd[1]: sshd@2-172.31.29.194:22-139.178.68.195:57084.service: Deactivated successfully. Sep 4 17:36:54.910761 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:36:54.912176 systemd-logind[1960]: Session 3 logged out. Waiting for processes to exit. 
Sep 4 17:36:54.913707 systemd-logind[1960]: Removed session 3. Sep 4 17:36:54.937571 systemd[1]: Started sshd@3-172.31.29.194:22-139.178.68.195:57090.service - OpenSSH per-connection server daemon (139.178.68.195:57090). Sep 4 17:36:55.115362 sshd[2252]: Accepted publickey for core from 139.178.68.195 port 57090 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:55.117831 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:55.123374 systemd-logind[1960]: New session 4 of user core. Sep 4 17:36:55.137789 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:36:55.261945 sshd[2252]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:55.265280 systemd[1]: sshd@3-172.31.29.194:22-139.178.68.195:57090.service: Deactivated successfully. Sep 4 17:36:55.267136 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:36:55.268616 systemd-logind[1960]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:36:55.269917 systemd-logind[1960]: Removed session 4. Sep 4 17:36:55.295570 systemd[1]: Started sshd@4-172.31.29.194:22-139.178.68.195:57098.service - OpenSSH per-connection server daemon (139.178.68.195:57098). Sep 4 17:36:55.451924 sshd[2259]: Accepted publickey for core from 139.178.68.195 port 57098 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:55.453675 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:55.458322 systemd-logind[1960]: New session 5 of user core. Sep 4 17:36:55.469428 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 4 17:36:55.577948 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:36:55.578705 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:36:55.597072 sudo[2262]: pam_unix(sudo:session): session closed for user root Sep 4 17:36:55.619562 sshd[2259]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:55.623622 systemd[1]: sshd@4-172.31.29.194:22-139.178.68.195:57098.service: Deactivated successfully. Sep 4 17:36:55.625765 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:36:55.627238 systemd-logind[1960]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:36:55.628630 systemd-logind[1960]: Removed session 5. Sep 4 17:36:55.654061 systemd[1]: Started sshd@5-172.31.29.194:22-139.178.68.195:57108.service - OpenSSH per-connection server daemon (139.178.68.195:57108). Sep 4 17:36:55.816422 sshd[2267]: Accepted publickey for core from 139.178.68.195 port 57108 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:55.818125 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:55.824856 systemd-logind[1960]: New session 6 of user core. Sep 4 17:36:55.831377 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 17:36:55.929300 sudo[2271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:36:55.929692 sudo[2271]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:36:55.933347 sudo[2271]: pam_unix(sudo:session): session closed for user root Sep 4 17:36:55.938829 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:36:55.939227 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:36:55.964971 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:36:55.975665 auditctl[2274]: No rules Sep 4 17:36:55.976420 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:36:55.976725 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:36:55.989939 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:36:56.027246 augenrules[2292]: No rules Sep 4 17:36:56.028768 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:36:56.031069 sudo[2270]: pam_unix(sudo:session): session closed for user root Sep 4 17:36:56.053589 sshd[2267]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:56.060420 systemd[1]: sshd@5-172.31.29.194:22-139.178.68.195:57108.service: Deactivated successfully. Sep 4 17:36:56.067103 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:36:56.068066 systemd-logind[1960]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:36:56.092590 systemd[1]: Started sshd@6-172.31.29.194:22-139.178.68.195:57122.service - OpenSSH per-connection server daemon (139.178.68.195:57122). Sep 4 17:36:56.094040 systemd-logind[1960]: Removed session 6. 
Sep 4 17:36:56.273565 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 57122 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:56.276140 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:56.286479 systemd-logind[1960]: New session 7 of user core. Sep 4 17:36:56.296480 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:36:56.399355 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:36:56.400157 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:36:56.672860 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:36:56.673068 (dockerd)[2313]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:36:57.250394 dockerd[2313]: time="2024-09-04T17:36:57.250328608Z" level=info msg="Starting up" Sep 4 17:36:57.414388 dockerd[2313]: time="2024-09-04T17:36:57.414094616Z" level=info msg="Loading containers: start." Sep 4 17:36:57.605227 kernel: Initializing XFRM netlink socket Sep 4 17:36:57.639861 (udev-worker)[2334]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:36:57.665261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:36:57.673759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:36:57.730554 systemd-networkd[1825]: docker0: Link UP Sep 4 17:36:57.781252 dockerd[2313]: time="2024-09-04T17:36:57.781184654Z" level=info msg="Loading containers: done." Sep 4 17:36:57.805806 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4274647517-merged.mount: Deactivated successfully. 
Sep 4 17:36:57.932075 dockerd[2313]: time="2024-09-04T17:36:57.931795496Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:36:57.932464 dockerd[2313]: time="2024-09-04T17:36:57.932037792Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:36:57.932635 dockerd[2313]: time="2024-09-04T17:36:57.932610186Z" level=info msg="Daemon has completed initialization" Sep 4 17:36:58.089369 dockerd[2313]: time="2024-09-04T17:36:58.089291229Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:36:58.091330 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:36:58.123512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:36:58.123731 (kubelet)[2454]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:36:58.231329 kubelet[2454]: E0904 17:36:58.230796 2454 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:36:58.237226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:36:58.237387 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:36:59.318618 containerd[1981]: time="2024-09-04T17:36:59.318582249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:36:59.923309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066550015.mount: Deactivated successfully. 
Sep 4 17:37:03.786268 containerd[1981]: time="2024-09-04T17:37:03.786216854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:03.787859 containerd[1981]: time="2024-09-04T17:37:03.787664197Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530735"
Sep 4 17:37:03.791213 containerd[1981]: time="2024-09-04T17:37:03.789794128Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:03.794424 containerd[1981]: time="2024-09-04T17:37:03.794381684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:03.795581 containerd[1981]: time="2024-09-04T17:37:03.795542919Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 4.476919937s"
Sep 4 17:37:03.795701 containerd[1981]: time="2024-09-04T17:37:03.795681507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\""
Sep 4 17:37:03.823665 containerd[1981]: time="2024-09-04T17:37:03.822626729Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\""
Sep 4 17:37:06.742736 containerd[1981]: time="2024-09-04T17:37:06.742680412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:06.746089 containerd[1981]: time="2024-09-04T17:37:06.746023122Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849709"
Sep 4 17:37:06.750551 containerd[1981]: time="2024-09-04T17:37:06.749875040Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:06.754326 containerd[1981]: time="2024-09-04T17:37:06.754279746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:06.755835 containerd[1981]: time="2024-09-04T17:37:06.755793778Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 2.932121816s"
Sep 4 17:37:06.756000 containerd[1981]: time="2024-09-04T17:37:06.755975870Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\""
Sep 4 17:37:06.784035 containerd[1981]: time="2024-09-04T17:37:06.784002007Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\""
Sep 4 17:37:08.248054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:37:08.257513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:37:08.571792 containerd[1981]: time="2024-09-04T17:37:08.571389524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:08.598295 containerd[1981]: time="2024-09-04T17:37:08.598214086Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097777"
Sep 4 17:37:08.600466 containerd[1981]: time="2024-09-04T17:37:08.600419677Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:08.635157 containerd[1981]: time="2024-09-04T17:37:08.635042955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:08.636537 containerd[1981]: time="2024-09-04T17:37:08.636490522Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.852067404s"
Sep 4 17:37:08.636661 containerd[1981]: time="2024-09-04T17:37:08.636540536Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\""
Sep 4 17:37:08.675656 containerd[1981]: time="2024-09-04T17:37:08.675581778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\""
Sep 4 17:37:08.817655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:37:08.826833 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:37:08.906110 kubelet[2557]: E0904 17:37:08.905998 2557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:37:08.909234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:37:08.909445 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:37:10.256964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570003109.mount: Deactivated successfully.
Sep 4 17:37:11.028503 containerd[1981]: time="2024-09-04T17:37:11.028445373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:11.046741 containerd[1981]: time="2024-09-04T17:37:11.046496210Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303449"
Sep 4 17:37:11.070439 containerd[1981]: time="2024-09-04T17:37:11.070342977Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:11.091227 containerd[1981]: time="2024-09-04T17:37:11.091080915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:11.092301 containerd[1981]: time="2024-09-04T17:37:11.092227065Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 2.416571121s"
Sep 4 17:37:11.092301 containerd[1981]: time="2024-09-04T17:37:11.092298863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\""
Sep 4 17:37:11.169656 containerd[1981]: time="2024-09-04T17:37:11.169601676Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:37:11.743030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1493427392.mount: Deactivated successfully.
Sep 4 17:37:11.757255 containerd[1981]: time="2024-09-04T17:37:11.757181991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:11.758595 containerd[1981]: time="2024-09-04T17:37:11.758413163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Sep 4 17:37:11.760389 containerd[1981]: time="2024-09-04T17:37:11.760119257Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:11.763732 containerd[1981]: time="2024-09-04T17:37:11.763692178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:11.766632 containerd[1981]: time="2024-09-04T17:37:11.766592168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 596.942795ms"
Sep 4 17:37:11.769219 containerd[1981]: time="2024-09-04T17:37:11.766835037Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep 4 17:37:11.801419 containerd[1981]: time="2024-09-04T17:37:11.801376234Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep 4 17:37:12.411098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160492842.mount: Deactivated successfully.
Sep 4 17:37:15.019307 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 4 17:37:15.703566 containerd[1981]: time="2024-09-04T17:37:15.703504464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:15.705050 containerd[1981]: time="2024-09-04T17:37:15.704797425Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Sep 4 17:37:15.708176 containerd[1981]: time="2024-09-04T17:37:15.708113355Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:15.714980 containerd[1981]: time="2024-09-04T17:37:15.714912673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:15.716823 containerd[1981]: time="2024-09-04T17:37:15.716322216Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.914904137s"
Sep 4 17:37:15.716823 containerd[1981]: time="2024-09-04T17:37:15.716371425Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Sep 4 17:37:15.746904 containerd[1981]: time="2024-09-04T17:37:15.746815040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep 4 17:37:16.364108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3026824870.mount: Deactivated successfully.
Sep 4 17:37:17.471993 containerd[1981]: time="2024-09-04T17:37:17.471933761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:17.473485 containerd[1981]: time="2024-09-04T17:37:17.473411105Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Sep 4 17:37:17.475411 containerd[1981]: time="2024-09-04T17:37:17.475347090Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:17.482273 containerd[1981]: time="2024-09-04T17:37:17.481102499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:17.482273 containerd[1981]: time="2024-09-04T17:37:17.481956958Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.734958435s"
Sep 4 17:37:17.482273 containerd[1981]: time="2024-09-04T17:37:17.482060167Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Sep 4 17:37:19.002712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 17:37:19.022262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:37:19.540374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:37:19.554251 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:37:19.648797 kubelet[2713]: E0904 17:37:19.648737 2713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:37:19.652579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:37:19.652769 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:37:21.172324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:37:21.187695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:37:21.226301 systemd[1]: Reloading requested from client PID 2728 ('systemctl') (unit session-7.scope)...
Sep 4 17:37:21.226319 systemd[1]: Reloading...
Sep 4 17:37:21.430215 zram_generator::config[2766]: No configuration found.
Sep 4 17:37:21.674550 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:37:21.842124 systemd[1]: Reloading finished in 615 ms.
Sep 4 17:37:21.903340 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 17:37:21.903453 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 17:37:21.903830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:37:21.916609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:37:22.357421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:37:22.369776 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:37:22.443340 kubelet[2826]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:37:22.443340 kubelet[2826]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:37:22.443340 kubelet[2826]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:37:22.444808 kubelet[2826]: I0904 17:37:22.444753 2826 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:37:22.736691 kubelet[2826]: I0904 17:37:22.735280 2826 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep 4 17:37:22.736691 kubelet[2826]: I0904 17:37:22.735621 2826 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:37:22.736863 kubelet[2826]: I0904 17:37:22.736759 2826 server.go:895] "Client rotation is on, will bootstrap in background"
Sep 4 17:37:22.781311 kubelet[2826]: I0904 17:37:22.780973 2826 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:37:22.786345 kubelet[2826]: E0904 17:37:22.786148 2826 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:22.796858 kubelet[2826]: I0904 17:37:22.796829 2826 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:37:22.801862 kubelet[2826]: I0904 17:37:22.799071 2826 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:37:22.801862 kubelet[2826]: I0904 17:37:22.799351 2826 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:37:22.801862 kubelet[2826]: I0904 17:37:22.799374 2826 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:37:22.801862 kubelet[2826]: I0904 17:37:22.799386 2826 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:37:22.801862 kubelet[2826]: I0904 17:37:22.800614 2826 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:37:22.802597 kubelet[2826]: I0904 17:37:22.802575 2826 kubelet.go:393] "Attempting to sync node with API server"
Sep 4 17:37:22.802673 kubelet[2826]: I0904 17:37:22.802604 2826 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:37:22.802673 kubelet[2826]: I0904 17:37:22.802640 2826 kubelet.go:309] "Adding apiserver pod source"
Sep 4 17:37:22.802673 kubelet[2826]: I0904 17:37:22.802661 2826 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:37:22.810225 kubelet[2826]: W0904 17:37:22.807559 2826 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:22.810225 kubelet[2826]: E0904 17:37:22.807862 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:22.810225 kubelet[2826]: W0904 17:37:22.807957 2826 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.29.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-194&limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:22.810225 kubelet[2826]: E0904 17:37:22.807997 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-194&limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:22.810225 kubelet[2826]: I0904 17:37:22.808183 2826 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep 4 17:37:22.815048 kubelet[2826]: W0904 17:37:22.814038 2826 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:37:22.817178 kubelet[2826]: I0904 17:37:22.815172 2826 server.go:1232] "Started kubelet"
Sep 4 17:37:22.817178 kubelet[2826]: I0904 17:37:22.816812 2826 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:37:22.819483 kubelet[2826]: I0904 17:37:22.818822 2826 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 4 17:37:22.819483 kubelet[2826]: I0904 17:37:22.819181 2826 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:37:22.819634 kubelet[2826]: E0904 17:37:22.819460 2826 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-29-194.17f21b22a313d5dc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-29-194", UID:"ip-172-31-29-194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-29-194"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 37, 22, 815145436, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 37, 22, 815145436, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-29-194"}': 'Post "https://172.31.29.194:6443/api/v1/namespaces/default/events": dial tcp 172.31.29.194:6443: connect: connection refused'(may retry after sleeping)
Sep 4 17:37:22.822854 kubelet[2826]: I0904 17:37:22.821477 2826 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:37:22.824595 kubelet[2826]: I0904 17:37:22.824575 2826 server.go:462] "Adding debug handlers to kubelet server"
Sep 4 17:37:22.830487 kubelet[2826]: E0904 17:37:22.830460 2826 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep 4 17:37:22.832339 kubelet[2826]: E0904 17:37:22.832320 2826 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:37:22.832339 kubelet[2826]: I0904 17:37:22.831810 2826 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:37:22.836264 kubelet[2826]: I0904 17:37:22.831829 2826 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:37:22.836264 kubelet[2826]: I0904 17:37:22.835994 2826 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:37:22.841014 kubelet[2826]: W0904 17:37:22.840249 2826 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:22.841014 kubelet[2826]: E0904 17:37:22.840815 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:22.842717 kubelet[2826]: E0904 17:37:22.841724 2826 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-194?timeout=10s\": dial tcp 172.31.29.194:6443: connect: connection refused" interval="200ms"
Sep 4 17:37:22.885424 kubelet[2826]: I0904 17:37:22.885392 2826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:37:22.887228 kubelet[2826]: I0904 17:37:22.886870 2826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:37:22.887228 kubelet[2826]: I0904 17:37:22.886896 2826 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:37:22.887228 kubelet[2826]: I0904 17:37:22.886925 2826 kubelet.go:2303] "Starting kubelet main sync loop"
Sep 4 17:37:22.887228 kubelet[2826]: E0904 17:37:22.886983 2826 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:37:22.906138 kubelet[2826]: W0904 17:37:22.905951 2826 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:22.906138 kubelet[2826]: E0904 17:37:22.906023 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:22.908215 kubelet[2826]: I0904 17:37:22.908067 2826 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:37:22.908215 kubelet[2826]: I0904 17:37:22.908091 2826 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:37:22.908215 kubelet[2826]: I0904 17:37:22.908113 2826 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:37:22.910909 kubelet[2826]: I0904 17:37:22.910883 2826 policy_none.go:49] "None policy: Start"
Sep 4 17:37:22.911572 kubelet[2826]: I0904 17:37:22.911548 2826 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 4 17:37:22.911658 kubelet[2826]: I0904 17:37:22.911584 2826 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:37:22.918919 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 17:37:22.935472 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 17:37:22.935846 kubelet[2826]: I0904 17:37:22.935758 2826 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-194"
Sep 4 17:37:22.936371 kubelet[2826]: E0904 17:37:22.936353 2826 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.194:6443/api/v1/nodes\": dial tcp 172.31.29.194:6443: connect: connection refused" node="ip-172-31-29-194"
Sep 4 17:37:22.940850 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 17:37:22.947122 kubelet[2826]: I0904 17:37:22.947090 2826 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:37:22.947477 kubelet[2826]: I0904 17:37:22.947454 2826 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:37:22.948634 kubelet[2826]: E0904 17:37:22.948613 2826 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-194\" not found"
Sep 4 17:37:22.987776 kubelet[2826]: I0904 17:37:22.987624 2826 topology_manager.go:215] "Topology Admit Handler" podUID="525d3c74a8b2b4041317e94c1184d976" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-194"
Sep 4 17:37:22.992769 kubelet[2826]: I0904 17:37:22.992511 2826 topology_manager.go:215] "Topology Admit Handler" podUID="277ba268417d6762509caa4b779f15be" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:22.993998 kubelet[2826]: I0904 17:37:22.993977 2826 topology_manager.go:215] "Topology Admit Handler" podUID="ed68147347f310f75b2ec2c5d8790318" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-194"
Sep 4 17:37:23.004900 systemd[1]: Created slice kubepods-burstable-pod525d3c74a8b2b4041317e94c1184d976.slice - libcontainer container kubepods-burstable-pod525d3c74a8b2b4041317e94c1184d976.slice.
Sep 4 17:37:23.022431 systemd[1]: Created slice kubepods-burstable-pod277ba268417d6762509caa4b779f15be.slice - libcontainer container kubepods-burstable-pod277ba268417d6762509caa4b779f15be.slice.
Sep 4 17:37:23.037850 kubelet[2826]: I0904 17:37:23.036787 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/525d3c74a8b2b4041317e94c1184d976-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-194\" (UID: \"525d3c74a8b2b4041317e94c1184d976\") " pod="kube-system/kube-apiserver-ip-172-31-29-194"
Sep 4 17:37:23.037850 kubelet[2826]: I0904 17:37:23.036870 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:23.037850 kubelet[2826]: I0904 17:37:23.037064 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:23.037850 kubelet[2826]: I0904 17:37:23.037111 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed68147347f310f75b2ec2c5d8790318-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-194\" (UID: \"ed68147347f310f75b2ec2c5d8790318\") " pod="kube-system/kube-scheduler-ip-172-31-29-194"
Sep 4 17:37:23.037850 kubelet[2826]: I0904 17:37:23.037146 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:23.037613 systemd[1]: Created slice kubepods-burstable-poded68147347f310f75b2ec2c5d8790318.slice - libcontainer container kubepods-burstable-poded68147347f310f75b2ec2c5d8790318.slice.
Sep 4 17:37:23.038274 kubelet[2826]: I0904 17:37:23.037173 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/525d3c74a8b2b4041317e94c1184d976-ca-certs\") pod \"kube-apiserver-ip-172-31-29-194\" (UID: \"525d3c74a8b2b4041317e94c1184d976\") " pod="kube-system/kube-apiserver-ip-172-31-29-194"
Sep 4 17:37:23.038274 kubelet[2826]: I0904 17:37:23.037220 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/525d3c74a8b2b4041317e94c1184d976-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-194\" (UID: \"525d3c74a8b2b4041317e94c1184d976\") " pod="kube-system/kube-apiserver-ip-172-31-29-194"
Sep 4 17:37:23.038274 kubelet[2826]: I0904 17:37:23.037252 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:23.038274 kubelet[2826]: I0904 17:37:23.037288 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:23.042623 kubelet[2826]: E0904 17:37:23.042582 2826 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-194?timeout=10s\": dial tcp 172.31.29.194:6443: connect: connection refused" interval="400ms"
Sep 4 17:37:23.138698 kubelet[2826]: I0904 17:37:23.138661 2826 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-194"
Sep 4 17:37:23.139362 kubelet[2826]: E0904 17:37:23.139306 2826 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.194:6443/api/v1/nodes\": dial tcp 172.31.29.194:6443: connect: connection refused" node="ip-172-31-29-194"
Sep 4 17:37:23.321360 containerd[1981]: time="2024-09-04T17:37:23.321314335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-194,Uid:525d3c74a8b2b4041317e94c1184d976,Namespace:kube-system,Attempt:0,}"
Sep 4 17:37:23.335027 containerd[1981]: time="2024-09-04T17:37:23.334928989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-194,Uid:277ba268417d6762509caa4b779f15be,Namespace:kube-system,Attempt:0,}"
Sep 4 17:37:23.347086 containerd[1981]: time="2024-09-04T17:37:23.347035541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-194,Uid:ed68147347f310f75b2ec2c5d8790318,Namespace:kube-system,Attempt:0,}"
Sep 4 17:37:23.444065 kubelet[2826]: E0904 17:37:23.443971 2826 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-194?timeout=10s\": dial tcp 172.31.29.194:6443: connect: connection refused" interval="800ms"
Sep 4 17:37:23.541371 kubelet[2826]: I0904 17:37:23.541339 2826 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-194"
Sep 4 17:37:23.541699 kubelet[2826]: E0904 17:37:23.541683 2826 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.194:6443/api/v1/nodes\": dial tcp 172.31.29.194:6443: connect: connection refused" node="ip-172-31-29-194"
Sep 4 17:37:23.674435 kubelet[2826]: W0904 17:37:23.674296 2826 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.29.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-194&limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:23.674435 kubelet[2826]: E0904 17:37:23.674363 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-194&limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:23.757977 kubelet[2826]: W0904 17:37:23.757864 2826 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:23.757977 kubelet[2826]: E0904 17:37:23.757936 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:23.917721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269030862.mount: Deactivated successfully.
Sep 4 17:37:23.934737 containerd[1981]: time="2024-09-04T17:37:23.934597362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:37:23.936390 containerd[1981]: time="2024-09-04T17:37:23.936347163Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:37:23.940164 containerd[1981]: time="2024-09-04T17:37:23.940106823Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 4 17:37:23.943746 containerd[1981]: time="2024-09-04T17:37:23.943438518Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:37:23.945132 containerd[1981]: time="2024-09-04T17:37:23.945087201Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:37:23.946793 containerd[1981]: time="2024-09-04T17:37:23.946754117Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:37:23.949167 containerd[1981]: time="2024-09-04T17:37:23.949010646Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:37:23.952154 containerd[1981]: time="2024-09-04T17:37:23.951085033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:37:23.952154 containerd[1981]: time="2024-09-04T17:37:23.951876287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.819521ms"
Sep 4 17:37:23.969561 containerd[1981]: time="2024-09-04T17:37:23.969505361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 622.375065ms"
Sep 4 17:37:23.972909 containerd[1981]: time="2024-09-04T17:37:23.972688472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.280816ms"
Sep 4 17:37:23.994350 kubelet[2826]: W0904 17:37:23.994118 2826 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:23.994350 kubelet[2826]: E0904 17:37:23.994359 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:24.221736 containerd[1981]: time="2024-09-04T17:37:24.221428660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:37:24.221736 containerd[1981]: time="2024-09-04T17:37:24.221577696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:37:24.229303 containerd[1981]: time="2024-09-04T17:37:24.222571511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:37:24.229303 containerd[1981]: time="2024-09-04T17:37:24.226676232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:37:24.229303 containerd[1981]: time="2024-09-04T17:37:24.227825250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:37:24.229303 containerd[1981]: time="2024-09-04T17:37:24.228235067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:37:24.229303 containerd[1981]: time="2024-09-04T17:37:24.228368093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:37:24.229303 containerd[1981]: time="2024-09-04T17:37:24.228769272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:37:24.230801 containerd[1981]: time="2024-09-04T17:37:24.230356742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:37:24.230801 containerd[1981]: time="2024-09-04T17:37:24.230474017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:37:24.230801 containerd[1981]: time="2024-09-04T17:37:24.230554507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:37:24.234762 containerd[1981]: time="2024-09-04T17:37:24.234571191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:37:24.245064 kubelet[2826]: E0904 17:37:24.244892 2826 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-194?timeout=10s\": dial tcp 172.31.29.194:6443: connect: connection refused" interval="1.6s"
Sep 4 17:37:24.271450 systemd[1]: Started cri-containerd-b6568f5c5f2dd1e01374b5b4ce1091fcac940eb5b86c05438134cf8a49345cdc.scope - libcontainer container b6568f5c5f2dd1e01374b5b4ce1091fcac940eb5b86c05438134cf8a49345cdc.
Sep 4 17:37:24.312550 kubelet[2826]: W0904 17:37:24.312507 2826 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:24.312690 kubelet[2826]: E0904 17:37:24.312561 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:24.316464 systemd[1]: Started cri-containerd-bf2f28e6efa7f01e00f2ab6b17b1473dde7a8f5777a169051c13dd55d27eae8b.scope - libcontainer container bf2f28e6efa7f01e00f2ab6b17b1473dde7a8f5777a169051c13dd55d27eae8b.
Sep 4 17:37:24.322935 systemd[1]: Started cri-containerd-e3c3d38f86caf721b1cfc5b660fb94becc97236984f9126081334c3124d23251.scope - libcontainer container e3c3d38f86caf721b1cfc5b660fb94becc97236984f9126081334c3124d23251.
Sep 4 17:37:24.346878 kubelet[2826]: I0904 17:37:24.346833 2826 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-194"
Sep 4 17:37:24.347916 kubelet[2826]: E0904 17:37:24.347869 2826 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.194:6443/api/v1/nodes\": dial tcp 172.31.29.194:6443: connect: connection refused" node="ip-172-31-29-194"
Sep 4 17:37:24.445798 containerd[1981]: time="2024-09-04T17:37:24.445627742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-194,Uid:277ba268417d6762509caa4b779f15be,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6568f5c5f2dd1e01374b5b4ce1091fcac940eb5b86c05438134cf8a49345cdc\""
Sep 4 17:37:24.455804 containerd[1981]: time="2024-09-04T17:37:24.455767807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-194,Uid:525d3c74a8b2b4041317e94c1184d976,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf2f28e6efa7f01e00f2ab6b17b1473dde7a8f5777a169051c13dd55d27eae8b\""
Sep 4 17:37:24.464659 containerd[1981]: time="2024-09-04T17:37:24.464626201Z" level=info msg="CreateContainer within sandbox \"bf2f28e6efa7f01e00f2ab6b17b1473dde7a8f5777a169051c13dd55d27eae8b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 17:37:24.465183 containerd[1981]: time="2024-09-04T17:37:24.465146970Z" level=info msg="CreateContainer within sandbox \"b6568f5c5f2dd1e01374b5b4ce1091fcac940eb5b86c05438134cf8a49345cdc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 17:37:24.465753 containerd[1981]: time="2024-09-04T17:37:24.465378838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-194,Uid:ed68147347f310f75b2ec2c5d8790318,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3c3d38f86caf721b1cfc5b660fb94becc97236984f9126081334c3124d23251\""
Sep 4 17:37:24.473807 containerd[1981]: time="2024-09-04T17:37:24.473692105Z" level=info msg="CreateContainer within sandbox \"e3c3d38f86caf721b1cfc5b660fb94becc97236984f9126081334c3124d23251\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 17:37:24.517284 containerd[1981]: time="2024-09-04T17:37:24.517242439Z" level=info msg="CreateContainer within sandbox \"bf2f28e6efa7f01e00f2ab6b17b1473dde7a8f5777a169051c13dd55d27eae8b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c917643c161300098d48c7b1e223ca6cafe53ca989a846ad07002c974825340\""
Sep 4 17:37:24.518572 containerd[1981]: time="2024-09-04T17:37:24.518537459Z" level=info msg="StartContainer for \"2c917643c161300098d48c7b1e223ca6cafe53ca989a846ad07002c974825340\""
Sep 4 17:37:24.521583 containerd[1981]: time="2024-09-04T17:37:24.521542855Z" level=info msg="CreateContainer within sandbox \"b6568f5c5f2dd1e01374b5b4ce1091fcac940eb5b86c05438134cf8a49345cdc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca\""
Sep 4 17:37:24.523470 containerd[1981]: time="2024-09-04T17:37:24.521587675Z" level=info msg="CreateContainer within sandbox \"e3c3d38f86caf721b1cfc5b660fb94becc97236984f9126081334c3124d23251\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d\""
Sep 4 17:37:24.525369 containerd[1981]: time="2024-09-04T17:37:24.524362320Z" level=info msg="StartContainer for \"06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca\""
Sep 4 17:37:24.525539 containerd[1981]: time="2024-09-04T17:37:24.525515939Z" level=info msg="StartContainer for \"c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d\""
Sep 4 17:37:24.577456 systemd[1]: Started cri-containerd-2c917643c161300098d48c7b1e223ca6cafe53ca989a846ad07002c974825340.scope - libcontainer container 2c917643c161300098d48c7b1e223ca6cafe53ca989a846ad07002c974825340.
Sep 4 17:37:24.598564 systemd[1]: Started cri-containerd-06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca.scope - libcontainer container 06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca.
Sep 4 17:37:24.641413 systemd[1]: Started cri-containerd-c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d.scope - libcontainer container c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d.
Sep 4 17:37:24.716483 containerd[1981]: time="2024-09-04T17:37:24.716325093Z" level=info msg="StartContainer for \"2c917643c161300098d48c7b1e223ca6cafe53ca989a846ad07002c974825340\" returns successfully"
Sep 4 17:37:24.731876 containerd[1981]: time="2024-09-04T17:37:24.731370784Z" level=info msg="StartContainer for \"06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca\" returns successfully"
Sep 4 17:37:24.796858 containerd[1981]: time="2024-09-04T17:37:24.796509363Z" level=info msg="StartContainer for \"c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d\" returns successfully"
Sep 4 17:37:24.895218 kubelet[2826]: E0904 17:37:24.893552 2826 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.194:6443: connect: connection refused
Sep 4 17:37:25.950761 kubelet[2826]: I0904 17:37:25.950730 2826 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-194"
Sep 4 17:37:28.076013 kubelet[2826]: E0904 17:37:28.075950 2826 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-194\" not found" node="ip-172-31-29-194"
Sep 4 17:37:28.168495 kubelet[2826]: I0904 17:37:28.168269 2826 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-194"
Sep 4 17:37:28.189869 kubelet[2826]: E0904 17:37:28.189719 2826 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-29-194.17f21b22a313d5dc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-29-194", UID:"ip-172-31-29-194", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-29-194"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 37, 22, 815145436, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 37, 22, 815145436, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-29-194"}': 'namespaces "default" not found' (will not retry!)
Sep 4 17:37:28.808464 kubelet[2826]: I0904 17:37:28.808422 2826 apiserver.go:52] "Watching apiserver"
Sep 4 17:37:28.833618 kubelet[2826]: I0904 17:37:28.833532 2826 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:37:29.601216 update_engine[1961]: I0904 17:37:29.599247 1961 update_attempter.cc:509] Updating boot flags...
Sep 4 17:37:29.707301 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3118)
Sep 4 17:37:29.982241 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3121)
Sep 4 17:37:31.078722 systemd[1]: Reloading requested from client PID 3287 ('systemctl') (unit session-7.scope)...
Sep 4 17:37:31.078741 systemd[1]: Reloading...
Sep 4 17:37:31.223232 zram_generator::config[3329]: No configuration found.
Sep 4 17:37:31.418056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:37:31.559115 systemd[1]: Reloading finished in 479 ms.
Sep 4 17:37:31.623179 kubelet[2826]: I0904 17:37:31.622966 2826 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:37:31.623406 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:37:31.631524 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 17:37:31.632101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:37:31.636719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:37:32.059158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:37:32.073917 (kubelet)[3382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:37:32.204473 kubelet[3382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:37:32.205284 kubelet[3382]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:37:32.205360 kubelet[3382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:37:32.205488 kubelet[3382]: I0904 17:37:32.205445 3382 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:37:32.213499 kubelet[3382]: I0904 17:37:32.213470 3382 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep 4 17:37:32.214866 kubelet[3382]: I0904 17:37:32.213663 3382 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:37:32.214866 kubelet[3382]: I0904 17:37:32.214037 3382 server.go:895] "Client rotation is on, will bootstrap in background"
Sep 4 17:37:32.220591 kubelet[3382]: I0904 17:37:32.220447 3382 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 17:37:32.224655 kubelet[3382]: I0904 17:37:32.224623 3382 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:37:32.243443 kubelet[3382]: I0904 17:37:32.243410 3382 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:37:32.244058 kubelet[3382]: I0904 17:37:32.244023 3382 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:37:32.245634 kubelet[3382]: I0904 17:37:32.245065 3382 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:37:32.245634 kubelet[3382]: I0904 17:37:32.245101 3382 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:37:32.245634 kubelet[3382]: I0904 17:37:32.245119 3382 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:37:32.245634 kubelet[3382]: I0904 17:37:32.245166 3382 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:37:32.245634 kubelet[3382]: I0904 17:37:32.245327 3382 kubelet.go:393] "Attempting to sync node with API server"
Sep 4 17:37:32.245634 kubelet[3382]: I0904 17:37:32.245346 3382 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:37:32.245634 kubelet[3382]: I0904 17:37:32.245380 3382 kubelet.go:309] "Adding apiserver pod source"
Sep 4 17:37:32.246570 kubelet[3382]: I0904 17:37:32.245399 3382 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:37:32.250279 kubelet[3382]: I0904 17:37:32.249409 3382 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep 4 17:37:32.250279 kubelet[3382]: I0904 17:37:32.250003 3382 server.go:1232] "Started kubelet"
Sep 4 17:37:32.262419 kubelet[3382]: I0904 17:37:32.262244 3382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:37:32.276551 kubelet[3382]: E0904 17:37:32.275619 3382 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep 4 17:37:32.276551 kubelet[3382]: E0904 17:37:32.275683 3382 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:37:32.280337 kubelet[3382]: I0904 17:37:32.280309 3382 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:37:32.281907 kubelet[3382]: I0904 17:37:32.281887 3382 server.go:462] "Adding debug handlers to kubelet server"
Sep 4 17:37:32.285297 kubelet[3382]: I0904 17:37:32.285274 3382 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 4 17:37:32.289344 kubelet[3382]: I0904 17:37:32.286737 3382 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:37:32.291744 kubelet[3382]: I0904 17:37:32.290851 3382 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:37:32.292423 kubelet[3382]: I0904 17:37:32.292403 3382 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:37:32.293539 kubelet[3382]: I0904 17:37:32.293522 3382 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:37:32.332459 kubelet[3382]: I0904 17:37:32.332269 3382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:37:32.340843 kubelet[3382]: I0904 17:37:32.340811 3382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:37:32.340843 kubelet[3382]: I0904 17:37:32.340845 3382 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:37:32.341022 kubelet[3382]: I0904 17:37:32.340866 3382 kubelet.go:2303] "Starting kubelet main sync loop"
Sep 4 17:37:32.341022 kubelet[3382]: E0904 17:37:32.340926 3382 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:37:32.398536 kubelet[3382]: I0904 17:37:32.398502 3382 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-194"
Sep 4 17:37:32.414010 kubelet[3382]: I0904 17:37:32.412902 3382 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-29-194"
Sep 4 17:37:32.414010 kubelet[3382]: I0904 17:37:32.413090 3382 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-194"
Sep 4 17:37:32.442785 kubelet[3382]: E0904 17:37:32.441546 3382 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:37:32.470003 kubelet[3382]: I0904 17:37:32.469979 3382 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:37:32.470637 kubelet[3382]: I0904 17:37:32.470297 3382 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:37:32.470637 kubelet[3382]: I0904 17:37:32.470326 3382 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:37:32.470637 kubelet[3382]: I0904 17:37:32.470516 3382 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 17:37:32.470637 kubelet[3382]: I0904 17:37:32.470542 3382 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 17:37:32.470637 kubelet[3382]: I0904 17:37:32.470551 3382 policy_none.go:49] "None policy: Start"
Sep 4 17:37:32.475234 kubelet[3382]: I0904 17:37:32.472484 3382 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 4 17:37:32.475234 kubelet[3382]: I0904 17:37:32.472507 3382 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:37:32.475234 kubelet[3382]: I0904 17:37:32.472768 3382 state_mem.go:75] "Updated machine memory state"
Sep 4 17:37:32.481679 kubelet[3382]: I0904 17:37:32.481658 3382 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:37:32.484396 kubelet[3382]: I0904 17:37:32.484376 3382 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:37:32.642475 kubelet[3382]: I0904 17:37:32.642344 3382 topology_manager.go:215] "Topology Admit Handler" podUID="525d3c74a8b2b4041317e94c1184d976" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-194"
Sep 4 17:37:32.642475 kubelet[3382]: I0904 17:37:32.642471 3382 topology_manager.go:215] "Topology Admit Handler" podUID="277ba268417d6762509caa4b779f15be" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:32.642653 kubelet[3382]: I0904 17:37:32.642529 3382 topology_manager.go:215] "Topology Admit Handler" podUID="ed68147347f310f75b2ec2c5d8790318" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-194"
Sep 4 17:37:32.664354 kubelet[3382]: E0904 17:37:32.664265 3382 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-194\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-194"
Sep 4 17:37:32.664570 kubelet[3382]: E0904 17:37:32.664551 3382 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-194\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:32.801266 kubelet[3382]: I0904 17:37:32.800889 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:32.801266 kubelet[3382]: I0904 17:37:32.800958 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed68147347f310f75b2ec2c5d8790318-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-194\" (UID: \"ed68147347f310f75b2ec2c5d8790318\") " pod="kube-system/kube-scheduler-ip-172-31-29-194"
Sep 4 17:37:32.801266 kubelet[3382]: I0904 17:37:32.800996 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/525d3c74a8b2b4041317e94c1184d976-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-194\" (UID: \"525d3c74a8b2b4041317e94c1184d976\") " pod="kube-system/kube-apiserver-ip-172-31-29-194"
Sep 4 17:37:32.801266 kubelet[3382]: I0904 17:37:32.801036 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/525d3c74a8b2b4041317e94c1184d976-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-194\" (UID: \"525d3c74a8b2b4041317e94c1184d976\") " pod="kube-system/kube-apiserver-ip-172-31-29-194"
Sep 4 17:37:32.801266 kubelet[3382]: I0904 17:37:32.801064 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:32.802351 kubelet[3382]: I0904 17:37:32.801083 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/525d3c74a8b2b4041317e94c1184d976-ca-certs\") pod \"kube-apiserver-ip-172-31-29-194\" (UID: \"525d3c74a8b2b4041317e94c1184d976\") " pod="kube-system/kube-apiserver-ip-172-31-29-194"
Sep 4 17:37:32.802351 kubelet[3382]: I0904 17:37:32.801104 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:32.802351 kubelet[3382]: I0904 17:37:32.801137 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:32.802351 kubelet[3382]: I0904 17:37:32.801161 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/277ba268417d6762509caa4b779f15be-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-194\" (UID: \"277ba268417d6762509caa4b779f15be\") " pod="kube-system/kube-controller-manager-ip-172-31-29-194"
Sep 4 17:37:33.248440 kubelet[3382]: I0904 17:37:33.248404 3382 apiserver.go:52] "Watching apiserver"
Sep 4 17:37:33.294144 kubelet[3382]: I0904 17:37:33.294096 3382 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:37:33.344981 kubelet[3382]: I0904 17:37:33.344927 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-194" podStartSLOduration=1.344874548 podCreationTimestamp="2024-09-04 17:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:33.33276871 +0000 UTC m=+1.236529717" watchObservedRunningTime="2024-09-04 17:37:33.344874548 +0000 UTC m=+1.248635556"
Sep 4 17:37:33.345148 kubelet[3382]: I0904 17:37:33.345069 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-194" podStartSLOduration=2.34504416 podCreationTimestamp="2024-09-04 17:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:33.342921071 +0000 UTC m=+1.246682079" watchObservedRunningTime="2024-09-04 17:37:33.34504416 +0000 UTC m=+1.248805168"
Sep 4 17:37:33.360225 kubelet[3382]: I0904 17:37:33.359924 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-194" podStartSLOduration=4.359771891 podCreationTimestamp="2024-09-04 17:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:33.357424889 +0000 UTC m=+1.261185908" watchObservedRunningTime="2024-09-04 17:37:33.359771891 +0000 UTC m=+1.263532901"
Sep 4 17:37:38.962156 sudo[2303]: pam_unix(sudo:session): session closed for user root
Sep 4 17:37:38.986395 sshd[2300]: pam_unix(sshd:session): session closed for user core
Sep 4 17:37:38.991207 systemd[1]: sshd@6-172.31.29.194:22-139.178.68.195:57122.service: Deactivated successfully.
Sep 4 17:37:38.995042 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 17:37:38.995363 systemd[1]: session-7.scope: Consumed 5.283s CPU time, 135.0M memory peak, 0B memory swap peak.
Sep 4 17:37:38.997251 systemd-logind[1960]: Session 7 logged out. Waiting for processes to exit.
Sep 4 17:37:38.998733 systemd-logind[1960]: Removed session 7.
Sep 4 17:37:44.372291 kubelet[3382]: I0904 17:37:44.371925 3382 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:37:44.374378 containerd[1981]: time="2024-09-04T17:37:44.373624974Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:37:44.374770 kubelet[3382]: I0904 17:37:44.374273 3382 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:37:45.132051 kubelet[3382]: I0904 17:37:45.132000 3382 topology_manager.go:215] "Topology Admit Handler" podUID="1c49ad8c-eb91-476d-b4f3-320643017b1a" podNamespace="kube-system" podName="kube-proxy-7mnk9" Sep 4 17:37:45.146487 systemd[1]: Created slice kubepods-besteffort-pod1c49ad8c_eb91_476d_b4f3_320643017b1a.slice - libcontainer container kubepods-besteffort-pod1c49ad8c_eb91_476d_b4f3_320643017b1a.slice. Sep 4 17:37:45.270982 kubelet[3382]: I0904 17:37:45.270150 3382 topology_manager.go:215] "Topology Admit Handler" podUID="4c7f7b14-2040-431a-aa86-7dc991ea7f7f" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-tx7s6" Sep 4 17:37:45.279638 systemd[1]: Created slice kubepods-besteffort-pod4c7f7b14_2040_431a_aa86_7dc991ea7f7f.slice - libcontainer container kubepods-besteffort-pod4c7f7b14_2040_431a_aa86_7dc991ea7f7f.slice. 
Sep 4 17:37:45.282729 kubelet[3382]: W0904 17:37:45.282051 3382 reflector.go:535] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-29-194" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-29-194' and this object Sep 4 17:37:45.284949 kubelet[3382]: E0904 17:37:45.284344 3382 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-29-194" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-29-194' and this object Sep 4 17:37:45.286077 kubelet[3382]: W0904 17:37:45.283779 3382 reflector.go:535] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-29-194" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-29-194' and this object Sep 4 17:37:45.286258 kubelet[3382]: E0904 17:37:45.286245 3382 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-29-194" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-29-194' and this object Sep 4 17:37:45.312621 kubelet[3382]: I0904 17:37:45.312572 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c49ad8c-eb91-476d-b4f3-320643017b1a-xtables-lock\") pod \"kube-proxy-7mnk9\" (UID: 
\"1c49ad8c-eb91-476d-b4f3-320643017b1a\") " pod="kube-system/kube-proxy-7mnk9" Sep 4 17:37:45.312621 kubelet[3382]: I0904 17:37:45.312629 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c49ad8c-eb91-476d-b4f3-320643017b1a-lib-modules\") pod \"kube-proxy-7mnk9\" (UID: \"1c49ad8c-eb91-476d-b4f3-320643017b1a\") " pod="kube-system/kube-proxy-7mnk9" Sep 4 17:37:45.312861 kubelet[3382]: I0904 17:37:45.312661 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkp4c\" (UniqueName: \"kubernetes.io/projected/1c49ad8c-eb91-476d-b4f3-320643017b1a-kube-api-access-nkp4c\") pod \"kube-proxy-7mnk9\" (UID: \"1c49ad8c-eb91-476d-b4f3-320643017b1a\") " pod="kube-system/kube-proxy-7mnk9" Sep 4 17:37:45.312861 kubelet[3382]: I0904 17:37:45.312713 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c49ad8c-eb91-476d-b4f3-320643017b1a-kube-proxy\") pod \"kube-proxy-7mnk9\" (UID: \"1c49ad8c-eb91-476d-b4f3-320643017b1a\") " pod="kube-system/kube-proxy-7mnk9" Sep 4 17:37:45.413239 kubelet[3382]: I0904 17:37:45.412972 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pk46\" (UniqueName: \"kubernetes.io/projected/4c7f7b14-2040-431a-aa86-7dc991ea7f7f-kube-api-access-5pk46\") pod \"tigera-operator-5d56685c77-tx7s6\" (UID: \"4c7f7b14-2040-431a-aa86-7dc991ea7f7f\") " pod="tigera-operator/tigera-operator-5d56685c77-tx7s6" Sep 4 17:37:45.413239 kubelet[3382]: I0904 17:37:45.413027 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4c7f7b14-2040-431a-aa86-7dc991ea7f7f-var-lib-calico\") pod \"tigera-operator-5d56685c77-tx7s6\" (UID: 
\"4c7f7b14-2040-431a-aa86-7dc991ea7f7f\") " pod="tigera-operator/tigera-operator-5d56685c77-tx7s6" Sep 4 17:37:45.461963 containerd[1981]: time="2024-09-04T17:37:45.461685317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7mnk9,Uid:1c49ad8c-eb91-476d-b4f3-320643017b1a,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:45.522700 containerd[1981]: time="2024-09-04T17:37:45.519587234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:45.522700 containerd[1981]: time="2024-09-04T17:37:45.519653201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:45.522700 containerd[1981]: time="2024-09-04T17:37:45.519700127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:45.522700 containerd[1981]: time="2024-09-04T17:37:45.519826544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:45.561405 systemd[1]: Started cri-containerd-c3fcbb797abbb964f21896bccab36cc0bb7c4a18d0a7a271c1b4e84a2449f746.scope - libcontainer container c3fcbb797abbb964f21896bccab36cc0bb7c4a18d0a7a271c1b4e84a2449f746. 
Sep 4 17:37:45.597953 containerd[1981]: time="2024-09-04T17:37:45.597893365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7mnk9,Uid:1c49ad8c-eb91-476d-b4f3-320643017b1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3fcbb797abbb964f21896bccab36cc0bb7c4a18d0a7a271c1b4e84a2449f746\"" Sep 4 17:37:45.609374 containerd[1981]: time="2024-09-04T17:37:45.609225738Z" level=info msg="CreateContainer within sandbox \"c3fcbb797abbb964f21896bccab36cc0bb7c4a18d0a7a271c1b4e84a2449f746\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:37:45.652655 containerd[1981]: time="2024-09-04T17:37:45.652594550Z" level=info msg="CreateContainer within sandbox \"c3fcbb797abbb964f21896bccab36cc0bb7c4a18d0a7a271c1b4e84a2449f746\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c6665af87875d6018dc4166d4c27fe04c7c97630dd7822c60357073faca0a3a3\"" Sep 4 17:37:45.662173 containerd[1981]: time="2024-09-04T17:37:45.657794981Z" level=info msg="StartContainer for \"c6665af87875d6018dc4166d4c27fe04c7c97630dd7822c60357073faca0a3a3\"" Sep 4 17:37:45.712683 systemd[1]: Started cri-containerd-c6665af87875d6018dc4166d4c27fe04c7c97630dd7822c60357073faca0a3a3.scope - libcontainer container c6665af87875d6018dc4166d4c27fe04c7c97630dd7822c60357073faca0a3a3. Sep 4 17:37:45.782444 containerd[1981]: time="2024-09-04T17:37:45.782234052Z" level=info msg="StartContainer for \"c6665af87875d6018dc4166d4c27fe04c7c97630dd7822c60357073faca0a3a3\" returns successfully" Sep 4 17:37:46.448426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1784043407.mount: Deactivated successfully. 
Sep 4 17:37:46.493065 containerd[1981]: time="2024-09-04T17:37:46.491333899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-tx7s6,Uid:4c7f7b14-2040-431a-aa86-7dc991ea7f7f,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:37:46.546312 containerd[1981]: time="2024-09-04T17:37:46.545965528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:46.546312 containerd[1981]: time="2024-09-04T17:37:46.546043779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:46.546312 containerd[1981]: time="2024-09-04T17:37:46.546064809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:46.546631 containerd[1981]: time="2024-09-04T17:37:46.546200583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:46.582419 systemd[1]: Started cri-containerd-123135c607475e8adb8ec0d1a7254c66b0bec83291857e0b11baf39aa78debc9.scope - libcontainer container 123135c607475e8adb8ec0d1a7254c66b0bec83291857e0b11baf39aa78debc9. Sep 4 17:37:46.703129 containerd[1981]: time="2024-09-04T17:37:46.702946211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-tx7s6,Uid:4c7f7b14-2040-431a-aa86-7dc991ea7f7f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"123135c607475e8adb8ec0d1a7254c66b0bec83291857e0b11baf39aa78debc9\"" Sep 4 17:37:46.708315 containerd[1981]: time="2024-09-04T17:37:46.707710102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:37:48.011353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291904192.mount: Deactivated successfully. 
Sep 4 17:37:48.898785 containerd[1981]: time="2024-09-04T17:37:48.898676079Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:48.900219 containerd[1981]: time="2024-09-04T17:37:48.900120078Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136489" Sep 4 17:37:48.902118 containerd[1981]: time="2024-09-04T17:37:48.902047979Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:48.906838 containerd[1981]: time="2024-09-04T17:37:48.905850761Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:48.906838 containerd[1981]: time="2024-09-04T17:37:48.906602952Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.198065703s" Sep 4 17:37:48.906838 containerd[1981]: time="2024-09-04T17:37:48.906640447Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:37:48.909151 containerd[1981]: time="2024-09-04T17:37:48.909098491Z" level=info msg="CreateContainer within sandbox \"123135c607475e8adb8ec0d1a7254c66b0bec83291857e0b11baf39aa78debc9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:37:48.933844 containerd[1981]: time="2024-09-04T17:37:48.933797834Z" level=info msg="CreateContainer within sandbox 
\"123135c607475e8adb8ec0d1a7254c66b0bec83291857e0b11baf39aa78debc9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783\"" Sep 4 17:37:48.934502 containerd[1981]: time="2024-09-04T17:37:48.934462707Z" level=info msg="StartContainer for \"0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783\"" Sep 4 17:37:48.979432 systemd[1]: Started cri-containerd-0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783.scope - libcontainer container 0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783. Sep 4 17:37:49.019084 containerd[1981]: time="2024-09-04T17:37:49.019024400Z" level=info msg="StartContainer for \"0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783\" returns successfully" Sep 4 17:37:49.468499 kubelet[3382]: I0904 17:37:49.468441 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7mnk9" podStartSLOduration=4.466980975 podCreationTimestamp="2024-09-04 17:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:46.476926624 +0000 UTC m=+14.380687629" watchObservedRunningTime="2024-09-04 17:37:49.466980975 +0000 UTC m=+17.370742016" Sep 4 17:37:49.469709 kubelet[3382]: I0904 17:37:49.469548 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-tx7s6" podStartSLOduration=2.269730526 podCreationTimestamp="2024-09-04 17:37:45 +0000 UTC" firstStartedPulling="2024-09-04 17:37:46.707286626 +0000 UTC m=+14.611047622" lastFinishedPulling="2024-09-04 17:37:48.907044431 +0000 UTC m=+16.810805423" observedRunningTime="2024-09-04 17:37:49.466836556 +0000 UTC m=+17.370597561" watchObservedRunningTime="2024-09-04 17:37:49.469488327 +0000 UTC m=+17.373249337" Sep 4 17:37:52.803995 kubelet[3382]: I0904 17:37:52.803947 3382 
topology_manager.go:215] "Topology Admit Handler" podUID="a758f0be-96ff-40f8-973f-0b97a32373ac" podNamespace="calico-system" podName="calico-typha-5659444fc-j6r74" Sep 4 17:37:52.815517 systemd[1]: Created slice kubepods-besteffort-poda758f0be_96ff_40f8_973f_0b97a32373ac.slice - libcontainer container kubepods-besteffort-poda758f0be_96ff_40f8_973f_0b97a32373ac.slice. Sep 4 17:37:52.946394 kubelet[3382]: I0904 17:37:52.944927 3382 topology_manager.go:215] "Topology Admit Handler" podUID="fb4fb3a6-82f9-463e-a51e-de089f1f4983" podNamespace="calico-system" podName="calico-node-2k582" Sep 4 17:37:52.966874 systemd[1]: Created slice kubepods-besteffort-podfb4fb3a6_82f9_463e_a51e_de089f1f4983.slice - libcontainer container kubepods-besteffort-podfb4fb3a6_82f9_463e_a51e_de089f1f4983.slice. Sep 4 17:37:52.980690 kubelet[3382]: I0904 17:37:52.980374 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zh4v\" (UniqueName: \"kubernetes.io/projected/a758f0be-96ff-40f8-973f-0b97a32373ac-kube-api-access-7zh4v\") pod \"calico-typha-5659444fc-j6r74\" (UID: \"a758f0be-96ff-40f8-973f-0b97a32373ac\") " pod="calico-system/calico-typha-5659444fc-j6r74" Sep 4 17:37:52.980690 kubelet[3382]: I0904 17:37:52.980476 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a758f0be-96ff-40f8-973f-0b97a32373ac-typha-certs\") pod \"calico-typha-5659444fc-j6r74\" (UID: \"a758f0be-96ff-40f8-973f-0b97a32373ac\") " pod="calico-system/calico-typha-5659444fc-j6r74" Sep 4 17:37:52.980690 kubelet[3382]: I0904 17:37:52.980519 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a758f0be-96ff-40f8-973f-0b97a32373ac-tigera-ca-bundle\") pod \"calico-typha-5659444fc-j6r74\" (UID: \"a758f0be-96ff-40f8-973f-0b97a32373ac\") " 
pod="calico-system/calico-typha-5659444fc-j6r74" Sep 4 17:37:53.076420 kubelet[3382]: I0904 17:37:53.076298 3382 topology_manager.go:215] "Topology Admit Handler" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff" podNamespace="calico-system" podName="csi-node-driver-qrdz7" Sep 4 17:37:53.078583 kubelet[3382]: E0904 17:37:53.078131 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrdz7" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff" Sep 4 17:37:53.083385 kubelet[3382]: I0904 17:37:53.081512 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/69a2d1ad-1774-4773-ab86-418e1662aaff-varrun\") pod \"csi-node-driver-qrdz7\" (UID: \"69a2d1ad-1774-4773-ab86-418e1662aaff\") " pod="calico-system/csi-node-driver-qrdz7" Sep 4 17:37:53.083385 kubelet[3382]: I0904 17:37:53.081574 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fb4fb3a6-82f9-463e-a51e-de089f1f4983-cni-bin-dir\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083385 kubelet[3382]: I0904 17:37:53.081603 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fb4fb3a6-82f9-463e-a51e-de089f1f4983-cni-log-dir\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083385 kubelet[3382]: I0904 17:37:53.081631 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/fb4fb3a6-82f9-463e-a51e-de089f1f4983-var-run-calico\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083385 kubelet[3382]: I0904 17:37:53.081657 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb4fb3a6-82f9-463e-a51e-de089f1f4983-var-lib-calico\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083702 kubelet[3382]: I0904 17:37:53.081687 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmzj\" (UniqueName: \"kubernetes.io/projected/fb4fb3a6-82f9-463e-a51e-de089f1f4983-kube-api-access-rgmzj\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083702 kubelet[3382]: I0904 17:37:53.081716 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb4fb3a6-82f9-463e-a51e-de089f1f4983-tigera-ca-bundle\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083702 kubelet[3382]: I0904 17:37:53.081745 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb4fb3a6-82f9-463e-a51e-de089f1f4983-xtables-lock\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083702 kubelet[3382]: I0904 17:37:53.081772 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/fb4fb3a6-82f9-463e-a51e-de089f1f4983-lib-modules\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083702 kubelet[3382]: I0904 17:37:53.081807 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69a2d1ad-1774-4773-ab86-418e1662aaff-registration-dir\") pod \"csi-node-driver-qrdz7\" (UID: \"69a2d1ad-1774-4773-ab86-418e1662aaff\") " pod="calico-system/csi-node-driver-qrdz7" Sep 4 17:37:53.083993 kubelet[3382]: I0904 17:37:53.081929 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fb4fb3a6-82f9-463e-a51e-de089f1f4983-node-certs\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083993 kubelet[3382]: I0904 17:37:53.081961 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fb4fb3a6-82f9-463e-a51e-de089f1f4983-flexvol-driver-host\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.083993 kubelet[3382]: I0904 17:37:53.081988 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69a2d1ad-1774-4773-ab86-418e1662aaff-socket-dir\") pod \"csi-node-driver-qrdz7\" (UID: \"69a2d1ad-1774-4773-ab86-418e1662aaff\") " pod="calico-system/csi-node-driver-qrdz7" Sep 4 17:37:53.083993 kubelet[3382]: I0904 17:37:53.082019 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxl4\" (UniqueName: 
\"kubernetes.io/projected/69a2d1ad-1774-4773-ab86-418e1662aaff-kube-api-access-lxxl4\") pod \"csi-node-driver-qrdz7\" (UID: \"69a2d1ad-1774-4773-ab86-418e1662aaff\") " pod="calico-system/csi-node-driver-qrdz7" Sep 4 17:37:53.083993 kubelet[3382]: I0904 17:37:53.082050 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fb4fb3a6-82f9-463e-a51e-de089f1f4983-policysync\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.084204 kubelet[3382]: I0904 17:37:53.082082 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fb4fb3a6-82f9-463e-a51e-de089f1f4983-cni-net-dir\") pod \"calico-node-2k582\" (UID: \"fb4fb3a6-82f9-463e-a51e-de089f1f4983\") " pod="calico-system/calico-node-2k582" Sep 4 17:37:53.084204 kubelet[3382]: I0904 17:37:53.082130 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69a2d1ad-1774-4773-ab86-418e1662aaff-kubelet-dir\") pod \"csi-node-driver-qrdz7\" (UID: \"69a2d1ad-1774-4773-ab86-418e1662aaff\") " pod="calico-system/csi-node-driver-qrdz7" Sep 4 17:37:53.127407 containerd[1981]: time="2024-09-04T17:37:53.127363401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5659444fc-j6r74,Uid:a758f0be-96ff-40f8-973f-0b97a32373ac,Namespace:calico-system,Attempt:0,}" Sep 4 17:37:53.190958 kubelet[3382]: E0904 17:37:53.190083 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:53.190958 kubelet[3382]: W0904 17:37:53.190117 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Sep 4 17:37:53.190958 kubelet[3382]: E0904 17:37:53.190145 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:53.207379 kubelet[3382]: E0904 17:37:53.207350 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:53.207379 kubelet[3382]: W0904 17:37:53.207377 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:53.207570 kubelet[3382]: E0904 17:37:53.207408 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:53.234262 containerd[1981]: time="2024-09-04T17:37:53.231519632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:53.234840 containerd[1981]: time="2024-09-04T17:37:53.234607640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:53.234840 containerd[1981]: time="2024-09-04T17:37:53.234684266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:53.236069 containerd[1981]: time="2024-09-04T17:37:53.235132787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:53.250374 kubelet[3382]: E0904 17:37:53.250119 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:53.250374 kubelet[3382]: W0904 17:37:53.250144 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:53.250374 kubelet[3382]: E0904 17:37:53.250177 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:53.255471 kubelet[3382]: E0904 17:37:53.255103 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:53.255471 kubelet[3382]: W0904 17:37:53.255125 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:53.255471 kubelet[3382]: E0904 17:37:53.255208 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:53.278422 systemd[1]: Started cri-containerd-678952a62edd3056137bfae24ee9da525520aaedbed23648699854c8d84d82cf.scope - libcontainer container 678952a62edd3056137bfae24ee9da525520aaedbed23648699854c8d84d82cf. 
Sep 4 17:37:53.282086 containerd[1981]: time="2024-09-04T17:37:53.282029373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2k582,Uid:fb4fb3a6-82f9-463e-a51e-de089f1f4983,Namespace:calico-system,Attempt:0,}" Sep 4 17:37:53.427036 containerd[1981]: time="2024-09-04T17:37:53.418722899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:53.427036 containerd[1981]: time="2024-09-04T17:37:53.419239660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:53.427036 containerd[1981]: time="2024-09-04T17:37:53.420868009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:53.427036 containerd[1981]: time="2024-09-04T17:37:53.421479560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:53.457869 systemd[1]: Started cri-containerd-6a0a4bec5dfa3d11b4c9fdca35f2ad42859b5b8d0cf938e3080017610159d75f.scope - libcontainer container 6a0a4bec5dfa3d11b4c9fdca35f2ad42859b5b8d0cf938e3080017610159d75f. 
Sep 4 17:37:53.490428 containerd[1981]: time="2024-09-04T17:37:53.490377587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5659444fc-j6r74,Uid:a758f0be-96ff-40f8-973f-0b97a32373ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"678952a62edd3056137bfae24ee9da525520aaedbed23648699854c8d84d82cf\""
Sep 4 17:37:53.493776 containerd[1981]: time="2024-09-04T17:37:53.493728345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Sep 4 17:37:53.560540 containerd[1981]: time="2024-09-04T17:37:53.560492531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2k582,Uid:fb4fb3a6-82f9-463e-a51e-de089f1f4983,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a0a4bec5dfa3d11b4c9fdca35f2ad42859b5b8d0cf938e3080017610159d75f\""
Sep 4 17:37:54.343621 kubelet[3382]: E0904 17:37:54.343585 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrdz7" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff"
Sep 4 17:37:56.345959 kubelet[3382]: E0904 17:37:56.345919 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrdz7" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff"
Sep 4 17:37:56.640233 containerd[1981]: time="2024-09-04T17:37:56.640086480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:56.642214 containerd[1981]: time="2024-09-04T17:37:56.642083505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Sep 4 17:37:56.651973 containerd[1981]: time="2024-09-04T17:37:56.650853996Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:56.659063 containerd[1981]: time="2024-09-04T17:37:56.658996804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:56.660072 containerd[1981]: time="2024-09-04T17:37:56.659838903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.166056352s"
Sep 4 17:37:56.660072 containerd[1981]: time="2024-09-04T17:37:56.659941323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Sep 4 17:37:56.664807 containerd[1981]: time="2024-09-04T17:37:56.664409754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Sep 4 17:37:56.690817 containerd[1981]: time="2024-09-04T17:37:56.690754251Z" level=info msg="CreateContainer within sandbox \"678952a62edd3056137bfae24ee9da525520aaedbed23648699854c8d84d82cf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 17:37:56.732185 containerd[1981]: time="2024-09-04T17:37:56.732133394Z" level=info msg="CreateContainer within sandbox \"678952a62edd3056137bfae24ee9da525520aaedbed23648699854c8d84d82cf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"157ef76e46669ef565b8dd2aa60f22c4c8a72a5932ccd61265522c6c90127674\""
Sep 4 17:37:56.734246 containerd[1981]: time="2024-09-04T17:37:56.733167854Z" level=info msg="StartContainer for \"157ef76e46669ef565b8dd2aa60f22c4c8a72a5932ccd61265522c6c90127674\""
Sep 4 17:37:56.901641 systemd[1]: Started cri-containerd-157ef76e46669ef565b8dd2aa60f22c4c8a72a5932ccd61265522c6c90127674.scope - libcontainer container 157ef76e46669ef565b8dd2aa60f22c4c8a72a5932ccd61265522c6c90127674.
Sep 4 17:37:57.010230 containerd[1981]: time="2024-09-04T17:37:57.009103661Z" level=info msg="StartContainer for \"157ef76e46669ef565b8dd2aa60f22c4c8a72a5932ccd61265522c6c90127674\" returns successfully"
Sep 4 17:37:57.527567 kubelet[3382]: E0904 17:37:57.527538 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.530469 kubelet[3382]: W0904 17:37:57.528645 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.530469 kubelet[3382]: E0904 17:37:57.529035 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.530918 kubelet[3382]: E0904 17:37:57.530710 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.530918 kubelet[3382]: W0904 17:37:57.530726 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.530918 kubelet[3382]: E0904 17:37:57.530752 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.531229 kubelet[3382]: E0904 17:37:57.531152 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.531361 kubelet[3382]: W0904 17:37:57.531292 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.531361 kubelet[3382]: E0904 17:37:57.531319 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.531967 kubelet[3382]: E0904 17:37:57.531775 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.531967 kubelet[3382]: W0904 17:37:57.531803 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.531967 kubelet[3382]: E0904 17:37:57.531899 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.532634 kubelet[3382]: E0904 17:37:57.532379 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.532634 kubelet[3382]: W0904 17:37:57.532413 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.532634 kubelet[3382]: E0904 17:37:57.532605 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.533335 kubelet[3382]: E0904 17:37:57.533311 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.533335 kubelet[3382]: W0904 17:37:57.533325 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.533648 kubelet[3382]: E0904 17:37:57.533344 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.533648 kubelet[3382]: E0904 17:37:57.533629 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.533648 kubelet[3382]: W0904 17:37:57.533641 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.533774 kubelet[3382]: E0904 17:37:57.533670 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.534592 kubelet[3382]: E0904 17:37:57.534422 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.534592 kubelet[3382]: W0904 17:37:57.534435 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.534592 kubelet[3382]: E0904 17:37:57.534453 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.535120 kubelet[3382]: E0904 17:37:57.534816 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.535120 kubelet[3382]: W0904 17:37:57.534827 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.535365 kubelet[3382]: E0904 17:37:57.534846 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.535778 kubelet[3382]: E0904 17:37:57.535701 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.535778 kubelet[3382]: W0904 17:37:57.535713 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.535778 kubelet[3382]: E0904 17:37:57.535730 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.536543 kubelet[3382]: E0904 17:37:57.536407 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.536543 kubelet[3382]: W0904 17:37:57.536420 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.536543 kubelet[3382]: E0904 17:37:57.536437 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.537041 kubelet[3382]: E0904 17:37:57.536743 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.537041 kubelet[3382]: W0904 17:37:57.536753 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.537041 kubelet[3382]: E0904 17:37:57.536855 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.537391 kubelet[3382]: E0904 17:37:57.537367 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.537674 kubelet[3382]: W0904 17:37:57.537587 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.537674 kubelet[3382]: E0904 17:37:57.537615 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.539015 kubelet[3382]: E0904 17:37:57.537939 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.539015 kubelet[3382]: W0904 17:37:57.537954 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.539015 kubelet[3382]: E0904 17:37:57.538299 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.539015 kubelet[3382]: E0904 17:37:57.538667 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.539015 kubelet[3382]: W0904 17:37:57.538679 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.539015 kubelet[3382]: E0904 17:37:57.538694 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.627181 kubelet[3382]: E0904 17:37:57.627150 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.627181 kubelet[3382]: W0904 17:37:57.627176 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.627405 kubelet[3382]: E0904 17:37:57.627225 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.627623 kubelet[3382]: E0904 17:37:57.627602 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.627623 kubelet[3382]: W0904 17:37:57.627621 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.627726 kubelet[3382]: E0904 17:37:57.627641 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.629431 kubelet[3382]: E0904 17:37:57.629412 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.629979 kubelet[3382]: W0904 17:37:57.629666 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.630601 kubelet[3382]: E0904 17:37:57.630400 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.630919 kubelet[3382]: E0904 17:37:57.630740 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.630919 kubelet[3382]: W0904 17:37:57.630752 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.630919 kubelet[3382]: E0904 17:37:57.630778 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.631303 kubelet[3382]: E0904 17:37:57.631282 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.631583 kubelet[3382]: W0904 17:37:57.631368 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.632024 kubelet[3382]: E0904 17:37:57.631744 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.632794 kubelet[3382]: E0904 17:37:57.632721 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.632794 kubelet[3382]: W0904 17:37:57.632734 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.633112 kubelet[3382]: E0904 17:37:57.633085 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.633446 kubelet[3382]: E0904 17:37:57.633430 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.633661 kubelet[3382]: W0904 17:37:57.633453 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.633661 kubelet[3382]: E0904 17:37:57.633609 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.634461 kubelet[3382]: E0904 17:37:57.633939 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.634461 kubelet[3382]: W0904 17:37:57.633952 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.634461 kubelet[3382]: E0904 17:37:57.634004 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.634623 kubelet[3382]: E0904 17:37:57.634491 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.634623 kubelet[3382]: W0904 17:37:57.634502 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.634623 kubelet[3382]: E0904 17:37:57.634525 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.635953 kubelet[3382]: E0904 17:37:57.634867 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.635953 kubelet[3382]: W0904 17:37:57.634885 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.635953 kubelet[3382]: E0904 17:37:57.634905 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.635953 kubelet[3382]: E0904 17:37:57.635655 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.635953 kubelet[3382]: W0904 17:37:57.635666 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.635953 kubelet[3382]: E0904 17:37:57.635716 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.636582 kubelet[3382]: E0904 17:37:57.636312 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.636582 kubelet[3382]: W0904 17:37:57.636326 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.637436 kubelet[3382]: E0904 17:37:57.636645 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.637436 kubelet[3382]: E0904 17:37:57.636711 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.637436 kubelet[3382]: W0904 17:37:57.636720 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.637436 kubelet[3382]: E0904 17:37:57.636800 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.637780 kubelet[3382]: E0904 17:37:57.637696 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.637780 kubelet[3382]: W0904 17:37:57.637707 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.637780 kubelet[3382]: E0904 17:37:57.637755 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.638570 kubelet[3382]: E0904 17:37:57.638394 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.638570 kubelet[3382]: W0904 17:37:57.638407 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.638760 kubelet[3382]: E0904 17:37:57.638648 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.640371 kubelet[3382]: E0904 17:37:57.639259 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.640371 kubelet[3382]: W0904 17:37:57.639273 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.640371 kubelet[3382]: E0904 17:37:57.639292 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.640371 kubelet[3382]: E0904 17:37:57.640061 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.640371 kubelet[3382]: W0904 17:37:57.640073 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.640371 kubelet[3382]: E0904 17:37:57.640090 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.640725 kubelet[3382]: E0904 17:37:57.640590 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:57.640725 kubelet[3382]: W0904 17:37:57.640602 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:57.640725 kubelet[3382]: E0904 17:37:57.640620 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:57.668753 systemd[1]: run-containerd-runc-k8s.io-157ef76e46669ef565b8dd2aa60f22c4c8a72a5932ccd61265522c6c90127674-runc.od3vZV.mount: Deactivated successfully.
Sep 4 17:37:58.191220 containerd[1981]: time="2024-09-04T17:37:58.191164506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:58.197329 containerd[1981]: time="2024-09-04T17:37:58.197247762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007"
Sep 4 17:37:58.203486 containerd[1981]: time="2024-09-04T17:37:58.201239410Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:58.211580 containerd[1981]: time="2024-09-04T17:37:58.211215382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:37:58.214303 containerd[1981]: time="2024-09-04T17:37:58.214070282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.549618801s"
Sep 4 17:37:58.214494 containerd[1981]: time="2024-09-04T17:37:58.214477297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\""
Sep 4 17:37:58.265849 containerd[1981]: time="2024-09-04T17:37:58.265816586Z" level=info msg="CreateContainer within sandbox \"6a0a4bec5dfa3d11b4c9fdca35f2ad42859b5b8d0cf938e3080017610159d75f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 4 17:37:58.346605 kubelet[3382]: E0904 17:37:58.344089 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrdz7" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff"
Sep 4 17:37:58.461957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount474701986.mount: Deactivated successfully.
Sep 4 17:37:58.470452 containerd[1981]: time="2024-09-04T17:37:58.470400092Z" level=info msg="CreateContainer within sandbox \"6a0a4bec5dfa3d11b4c9fdca35f2ad42859b5b8d0cf938e3080017610159d75f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a0e15bd9cdb718a746b48703342f8875c2f585a7d3f936d0749caf717b1f37b7\""
Sep 4 17:37:58.472427 containerd[1981]: time="2024-09-04T17:37:58.472387514Z" level=info msg="StartContainer for \"a0e15bd9cdb718a746b48703342f8875c2f585a7d3f936d0749caf717b1f37b7\""
Sep 4 17:37:58.514829 kubelet[3382]: I0904 17:37:58.514163 3382 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 17:37:58.550722 systemd[1]: Started cri-containerd-a0e15bd9cdb718a746b48703342f8875c2f585a7d3f936d0749caf717b1f37b7.scope - libcontainer container a0e15bd9cdb718a746b48703342f8875c2f585a7d3f936d0749caf717b1f37b7.
Sep 4 17:37:58.552586 kubelet[3382]: E0904 17:37:58.552564 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:58.553685 kubelet[3382]: W0904 17:37:58.552650 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:58.553685 kubelet[3382]: E0904 17:37:58.552683 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:58.554279 kubelet[3382]: E0904 17:37:58.553944 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:58.554279 kubelet[3382]: W0904 17:37:58.553961 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:58.554279 kubelet[3382]: E0904 17:37:58.553983 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:58.555712 kubelet[3382]: E0904 17:37:58.555476 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:58.555712 kubelet[3382]: W0904 17:37:58.555493 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:58.555712 kubelet[3382]: E0904 17:37:58.555517 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:58.556271 kubelet[3382]: E0904 17:37:58.556253 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:58.556380 kubelet[3382]: W0904 17:37:58.556271 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:58.556380 kubelet[3382]: E0904 17:37:58.556292 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:58.556834 kubelet[3382]: E0904 17:37:58.556814 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:58.556834 kubelet[3382]: W0904 17:37:58.556829 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:58.557238 kubelet[3382]: E0904 17:37:58.556937 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:58.557527 kubelet[3382]: E0904 17:37:58.557389 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:58.557527 kubelet[3382]: W0904 17:37:58.557417 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:58.558252 kubelet[3382]: E0904 17:37:58.557702 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:37:58.558531 kubelet[3382]: E0904 17:37:58.558459 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:37:58.558531 kubelet[3382]: W0904 17:37:58.558469 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:37:58.558887 kubelet[3382]: E0904 17:37:58.558594 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 4 17:37:58.559351 kubelet[3382]: E0904 17:37:58.559166 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.559351 kubelet[3382]: W0904 17:37:58.559243 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.561774 kubelet[3382]: E0904 17:37:58.561528 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.564474 kubelet[3382]: E0904 17:37:58.563992 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.564474 kubelet[3382]: W0904 17:37:58.564265 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.565515 kubelet[3382]: E0904 17:37:58.565126 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.567712 kubelet[3382]: E0904 17:37:58.567137 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.567712 kubelet[3382]: W0904 17:37:58.567158 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.567712 kubelet[3382]: E0904 17:37:58.567638 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.568540 kubelet[3382]: E0904 17:37:58.568523 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.568540 kubelet[3382]: W0904 17:37:58.568541 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.569247 kubelet[3382]: E0904 17:37:58.568563 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.570325 kubelet[3382]: E0904 17:37:58.570307 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.570404 kubelet[3382]: W0904 17:37:58.570325 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.570404 kubelet[3382]: E0904 17:37:58.570347 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.570760 kubelet[3382]: E0904 17:37:58.570745 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.570825 kubelet[3382]: W0904 17:37:58.570761 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.570825 kubelet[3382]: E0904 17:37:58.570791 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.572236 kubelet[3382]: E0904 17:37:58.571262 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.572236 kubelet[3382]: W0904 17:37:58.571276 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.572236 kubelet[3382]: E0904 17:37:58.571294 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.572501 kubelet[3382]: E0904 17:37:58.572489 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.572902 kubelet[3382]: W0904 17:37:58.572782 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.572902 kubelet[3382]: E0904 17:37:58.572809 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.574505 kubelet[3382]: E0904 17:37:58.574351 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.574505 kubelet[3382]: W0904 17:37:58.574367 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.574505 kubelet[3382]: E0904 17:37:58.574387 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.576480 kubelet[3382]: E0904 17:37:58.576385 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.576480 kubelet[3382]: W0904 17:37:58.576400 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.576480 kubelet[3382]: E0904 17:37:58.576433 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.577991 kubelet[3382]: E0904 17:37:58.577615 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.577991 kubelet[3382]: W0904 17:37:58.577629 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.577991 kubelet[3382]: E0904 17:37:58.577903 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.578650 kubelet[3382]: E0904 17:37:58.578558 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.578650 kubelet[3382]: W0904 17:37:58.578572 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.578966 kubelet[3382]: E0904 17:37:58.578802 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.579275 kubelet[3382]: E0904 17:37:58.579150 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.579275 kubelet[3382]: W0904 17:37:58.579162 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.579727 kubelet[3382]: E0904 17:37:58.579184 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.581219 kubelet[3382]: E0904 17:37:58.580457 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.581219 kubelet[3382]: W0904 17:37:58.580471 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.582566 kubelet[3382]: E0904 17:37:58.581840 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.583471 kubelet[3382]: E0904 17:37:58.583458 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.584296 kubelet[3382]: W0904 17:37:58.584111 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.584966 kubelet[3382]: E0904 17:37:58.584953 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.585523 kubelet[3382]: W0904 17:37:58.585064 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.587141 kubelet[3382]: E0904 17:37:58.586595 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.587629 kubelet[3382]: W0904 17:37:58.587344 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.587629 kubelet[3382]: E0904 17:37:58.587377 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.593119 kubelet[3382]: E0904 17:37:58.587852 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.593119 kubelet[3382]: E0904 17:37:58.587895 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.593119 kubelet[3382]: E0904 17:37:58.591016 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.593119 kubelet[3382]: W0904 17:37:58.591294 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.593119 kubelet[3382]: E0904 17:37:58.591337 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.599166 kubelet[3382]: E0904 17:37:58.598061 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.599166 kubelet[3382]: W0904 17:37:58.598114 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.599166 kubelet[3382]: E0904 17:37:58.598440 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.599166 kubelet[3382]: E0904 17:37:58.598730 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.599166 kubelet[3382]: W0904 17:37:58.598742 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.599166 kubelet[3382]: E0904 17:37:58.598779 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.601286 kubelet[3382]: E0904 17:37:58.601264 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.603848 kubelet[3382]: W0904 17:37:58.601281 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.603848 kubelet[3382]: E0904 17:37:58.601491 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.603848 kubelet[3382]: E0904 17:37:58.602892 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.603848 kubelet[3382]: W0904 17:37:58.602906 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.603848 kubelet[3382]: E0904 17:37:58.603343 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.603848 kubelet[3382]: E0904 17:37:58.603788 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.603848 kubelet[3382]: W0904 17:37:58.603800 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.604267 kubelet[3382]: E0904 17:37:58.603952 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.604552 kubelet[3382]: E0904 17:37:58.604440 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.604552 kubelet[3382]: W0904 17:37:58.604476 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.604552 kubelet[3382]: E0904 17:37:58.604500 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.611939 kubelet[3382]: E0904 17:37:58.611910 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.612563 kubelet[3382]: W0904 17:37:58.612216 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.612563 kubelet[3382]: E0904 17:37:58.612377 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:37:58.612968 kubelet[3382]: E0904 17:37:58.612877 3382 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:37:58.612968 kubelet[3382]: W0904 17:37:58.612903 3382 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:37:58.612968 kubelet[3382]: E0904 17:37:58.612931 3382 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:37:58.652365 containerd[1981]: time="2024-09-04T17:37:58.652138802Z" level=info msg="StartContainer for \"a0e15bd9cdb718a746b48703342f8875c2f585a7d3f936d0749caf717b1f37b7\" returns successfully" Sep 4 17:37:58.715373 systemd[1]: cri-containerd-a0e15bd9cdb718a746b48703342f8875c2f585a7d3f936d0749caf717b1f37b7.scope: Deactivated successfully. Sep 4 17:37:58.799396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0e15bd9cdb718a746b48703342f8875c2f585a7d3f936d0749caf717b1f37b7-rootfs.mount: Deactivated successfully. 
Sep 4 17:37:59.112456 containerd[1981]: time="2024-09-04T17:37:59.069356336Z" level=info msg="shim disconnected" id=a0e15bd9cdb718a746b48703342f8875c2f585a7d3f936d0749caf717b1f37b7 namespace=k8s.io
Sep 4 17:37:59.113903 containerd[1981]: time="2024-09-04T17:37:59.112459678Z" level=warning msg="cleaning up after shim disconnected" id=a0e15bd9cdb718a746b48703342f8875c2f585a7d3f936d0749caf717b1f37b7 namespace=k8s.io
Sep 4 17:37:59.113903 containerd[1981]: time="2024-09-04T17:37:59.112481882Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:37:59.519051 containerd[1981]: time="2024-09-04T17:37:59.518829643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Sep 4 17:37:59.540586 kubelet[3382]: I0904 17:37:59.538537 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5659444fc-j6r74" podStartSLOduration=4.370631423 podCreationTimestamp="2024-09-04 17:37:52 +0000 UTC" firstStartedPulling="2024-09-04 17:37:53.49244947 +0000 UTC m=+21.396210464" lastFinishedPulling="2024-09-04 17:37:56.660300955 +0000 UTC m=+24.564061955" observedRunningTime="2024-09-04 17:37:57.521243743 +0000 UTC m=+25.425004749" watchObservedRunningTime="2024-09-04 17:37:59.538482914 +0000 UTC m=+27.442243921"
Sep 4 17:38:00.344114 kubelet[3382]: E0904 17:38:00.342710 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrdz7" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff"
Sep 4 17:38:02.347985 kubelet[3382]: E0904 17:38:02.347936 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrdz7" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff"
Sep 4 17:38:04.342888 kubelet[3382]: E0904 17:38:04.342669 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qrdz7" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff"
Sep 4 17:38:04.635763 containerd[1981]: time="2024-09-04T17:38:04.635391451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:38:04.637455 containerd[1981]: time="2024-09-04T17:38:04.637387141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736"
Sep 4 17:38:04.639509 containerd[1981]: time="2024-09-04T17:38:04.639468247Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:38:04.642415 containerd[1981]: time="2024-09-04T17:38:04.642215050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:38:04.644019 containerd[1981]: time="2024-09-04T17:38:04.643980606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.125100079s"
Sep 4 17:38:04.644132 containerd[1981]: time="2024-09-04T17:38:04.644066492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\""
Sep 4 17:38:04.648120 containerd[1981]: time="2024-09-04T17:38:04.648078746Z" level=info msg="CreateContainer within sandbox \"6a0a4bec5dfa3d11b4c9fdca35f2ad42859b5b8d0cf938e3080017610159d75f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 17:38:04.679146 containerd[1981]: time="2024-09-04T17:38:04.679101668Z" level=info msg="CreateContainer within sandbox \"6a0a4bec5dfa3d11b4c9fdca35f2ad42859b5b8d0cf938e3080017610159d75f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"67cff015b04cdae58469999b94b26bb9f8e4d74f8585b7ef21998af059cc169b\""
Sep 4 17:38:04.681985 containerd[1981]: time="2024-09-04T17:38:04.680184796Z" level=info msg="StartContainer for \"67cff015b04cdae58469999b94b26bb9f8e4d74f8585b7ef21998af059cc169b\""
Sep 4 17:38:04.849416 systemd[1]: Started cri-containerd-67cff015b04cdae58469999b94b26bb9f8e4d74f8585b7ef21998af059cc169b.scope - libcontainer container 67cff015b04cdae58469999b94b26bb9f8e4d74f8585b7ef21998af059cc169b.
Sep 4 17:38:04.889577 containerd[1981]: time="2024-09-04T17:38:04.889465442Z" level=info msg="StartContainer for \"67cff015b04cdae58469999b94b26bb9f8e4d74f8585b7ef21998af059cc169b\" returns successfully"
Sep 4 17:38:05.901578 systemd[1]: cri-containerd-67cff015b04cdae58469999b94b26bb9f8e4d74f8585b7ef21998af059cc169b.scope: Deactivated successfully.
Sep 4 17:38:05.976538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67cff015b04cdae58469999b94b26bb9f8e4d74f8585b7ef21998af059cc169b-rootfs.mount: Deactivated successfully.
Sep 4 17:38:05.992594 kubelet[3382]: I0904 17:38:05.992554 3382 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Sep 4 17:38:05.993545 containerd[1981]: time="2024-09-04T17:38:05.993473956Z" level=info msg="shim disconnected" id=67cff015b04cdae58469999b94b26bb9f8e4d74f8585b7ef21998af059cc169b namespace=k8s.io
Sep 4 17:38:05.994344 containerd[1981]: time="2024-09-04T17:38:05.994135745Z" level=warning msg="cleaning up after shim disconnected" id=67cff015b04cdae58469999b94b26bb9f8e4d74f8585b7ef21998af059cc169b namespace=k8s.io
Sep 4 17:38:05.994344 containerd[1981]: time="2024-09-04T17:38:05.994162399Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:38:06.021252 containerd[1981]: time="2024-09-04T17:38:06.021019738Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:38:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:38:06.046460 kubelet[3382]: I0904 17:38:06.046396 3382 topology_manager.go:215] "Topology Admit Handler" podUID="dfc3a0dc-44f6-439b-9637-624a1c16320c" podNamespace="kube-system" podName="coredns-5dd5756b68-xcl8p"
Sep 4 17:38:06.058062 systemd[1]: Created slice kubepods-burstable-poddfc3a0dc_44f6_439b_9637_624a1c16320c.slice - libcontainer container kubepods-burstable-poddfc3a0dc_44f6_439b_9637_624a1c16320c.slice.
Sep 4 17:38:06.061010 kubelet[3382]: I0904 17:38:06.060061 3382 topology_manager.go:215] "Topology Admit Handler" podUID="0e1f12cd-e8de-4552-9978-e9886ee78d4e" podNamespace="kube-system" podName="coredns-5dd5756b68-7fc8n"
Sep 4 17:38:06.066216 kubelet[3382]: I0904 17:38:06.061992 3382 topology_manager.go:215] "Topology Admit Handler" podUID="ea34c68a-28f8-4302-9bc8-31267e5610bf" podNamespace="calico-system" podName="calico-kube-controllers-7b8d99456c-4chwr"
Sep 4 17:38:06.081165 systemd[1]: Created slice kubepods-besteffort-podea34c68a_28f8_4302_9bc8_31267e5610bf.slice - libcontainer container kubepods-besteffort-podea34c68a_28f8_4302_9bc8_31267e5610bf.slice.
Sep 4 17:38:06.092095 systemd[1]: Created slice kubepods-burstable-pod0e1f12cd_e8de_4552_9978_e9886ee78d4e.slice - libcontainer container kubepods-burstable-pod0e1f12cd_e8de_4552_9978_e9886ee78d4e.slice.
Sep 4 17:38:06.163164 kubelet[3382]: I0904 17:38:06.162871 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxqbp\" (UniqueName: \"kubernetes.io/projected/ea34c68a-28f8-4302-9bc8-31267e5610bf-kube-api-access-vxqbp\") pod \"calico-kube-controllers-7b8d99456c-4chwr\" (UID: \"ea34c68a-28f8-4302-9bc8-31267e5610bf\") " pod="calico-system/calico-kube-controllers-7b8d99456c-4chwr"
Sep 4 17:38:06.163164 kubelet[3382]: I0904 17:38:06.162936 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfc3a0dc-44f6-439b-9637-624a1c16320c-config-volume\") pod \"coredns-5dd5756b68-xcl8p\" (UID: \"dfc3a0dc-44f6-439b-9637-624a1c16320c\") " pod="kube-system/coredns-5dd5756b68-xcl8p"
Sep 4 17:38:06.163164 kubelet[3382]: I0904 17:38:06.163007 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea34c68a-28f8-4302-9bc8-31267e5610bf-tigera-ca-bundle\") pod \"calico-kube-controllers-7b8d99456c-4chwr\" (UID: \"ea34c68a-28f8-4302-9bc8-31267e5610bf\") " pod="calico-system/calico-kube-controllers-7b8d99456c-4chwr"
Sep 4 17:38:06.163164 kubelet[3382]: I0904 17:38:06.163033 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvpjp\" (UniqueName: \"kubernetes.io/projected/dfc3a0dc-44f6-439b-9637-624a1c16320c-kube-api-access-pvpjp\") pod \"coredns-5dd5756b68-xcl8p\" (UID: \"dfc3a0dc-44f6-439b-9637-624a1c16320c\") " pod="kube-system/coredns-5dd5756b68-xcl8p"
Sep 4 17:38:06.163164 kubelet[3382]: I0904 17:38:06.163063 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e1f12cd-e8de-4552-9978-e9886ee78d4e-config-volume\") pod \"coredns-5dd5756b68-7fc8n\" (UID: \"0e1f12cd-e8de-4552-9978-e9886ee78d4e\") " pod="kube-system/coredns-5dd5756b68-7fc8n"
Sep 4 17:38:06.163827 kubelet[3382]: I0904 17:38:06.163109 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9q4l\" (UniqueName: \"kubernetes.io/projected/0e1f12cd-e8de-4552-9978-e9886ee78d4e-kube-api-access-h9q4l\") pod \"coredns-5dd5756b68-7fc8n\" (UID: \"0e1f12cd-e8de-4552-9978-e9886ee78d4e\") " pod="kube-system/coredns-5dd5756b68-7fc8n"
Sep 4 17:38:06.350737 systemd[1]: Created slice kubepods-besteffort-pod69a2d1ad_1774_4773_ab86_418e1662aaff.slice - libcontainer container kubepods-besteffort-pod69a2d1ad_1774_4773_ab86_418e1662aaff.slice.
Sep 4 17:38:06.360781 containerd[1981]: time="2024-09-04T17:38:06.360739615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qrdz7,Uid:69a2d1ad-1774-4773-ab86-418e1662aaff,Namespace:calico-system,Attempt:0,}"
Sep 4 17:38:06.374268 containerd[1981]: time="2024-09-04T17:38:06.373876505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xcl8p,Uid:dfc3a0dc-44f6-439b-9637-624a1c16320c,Namespace:kube-system,Attempt:0,}"
Sep 4 17:38:06.390184 containerd[1981]: time="2024-09-04T17:38:06.389817261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8d99456c-4chwr,Uid:ea34c68a-28f8-4302-9bc8-31267e5610bf,Namespace:calico-system,Attempt:0,}"
Sep 4 17:38:06.406288 containerd[1981]: time="2024-09-04T17:38:06.406232978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7fc8n,Uid:0e1f12cd-e8de-4552-9978-e9886ee78d4e,Namespace:kube-system,Attempt:0,}"
Sep 4 17:38:06.557514 containerd[1981]: time="2024-09-04T17:38:06.557475310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep 4 17:38:06.836845 containerd[1981]: time="2024-09-04T17:38:06.836706422Z" level=error msg="Failed to destroy network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.846147 containerd[1981]: time="2024-09-04T17:38:06.846085233Z" level=error msg="Failed to destroy network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.849177 containerd[1981]: time="2024-09-04T17:38:06.849104670Z" level=error msg="encountered an error cleaning up failed sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.849340 containerd[1981]: time="2024-09-04T17:38:06.849221756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qrdz7,Uid:69a2d1ad-1774-4773-ab86-418e1662aaff,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.849464 containerd[1981]: time="2024-09-04T17:38:06.849432610Z" level=error msg="encountered an error cleaning up failed sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.849606 containerd[1981]: time="2024-09-04T17:38:06.849576982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xcl8p,Uid:dfc3a0dc-44f6-439b-9637-624a1c16320c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.882252 kubelet[3382]: E0904 17:38:06.880437 3382 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.882252 kubelet[3382]: E0904 17:38:06.880524 3382 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xcl8p"
Sep 4 17:38:06.882252 kubelet[3382]: E0904 17:38:06.880552 3382 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xcl8p"
Sep 4 17:38:06.882252 kubelet[3382]: E0904 17:38:06.880436 3382 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.883266 kubelet[3382]: E0904 17:38:06.880606 3382 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qrdz7"
Sep 4 17:38:06.883266 kubelet[3382]: E0904 17:38:06.880620 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-xcl8p_kube-system(dfc3a0dc-44f6-439b-9637-624a1c16320c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-xcl8p_kube-system(dfc3a0dc-44f6-439b-9637-624a1c16320c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xcl8p" podUID="dfc3a0dc-44f6-439b-9637-624a1c16320c"
Sep 4 17:38:06.883266 kubelet[3382]: E0904 17:38:06.880630 3382 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qrdz7"
Sep 4 17:38:06.883478 kubelet[3382]: E0904 17:38:06.880671 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qrdz7_calico-system(69a2d1ad-1774-4773-ab86-418e1662aaff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qrdz7_calico-system(69a2d1ad-1774-4773-ab86-418e1662aaff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qrdz7" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff"
Sep 4 17:38:06.885149 containerd[1981]: time="2024-09-04T17:38:06.885104264Z" level=error msg="Failed to destroy network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.887004 containerd[1981]: time="2024-09-04T17:38:06.886661409Z" level=error msg="encountered an error cleaning up failed sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.888521 containerd[1981]: time="2024-09-04T17:38:06.888481271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7fc8n,Uid:0e1f12cd-e8de-4552-9978-e9886ee78d4e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.889755 kubelet[3382]: E0904 17:38:06.889733 3382 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.889948 containerd[1981]: time="2024-09-04T17:38:06.889836246Z" level=error msg="Failed to destroy network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.890109 kubelet[3382]: E0904 17:38:06.890095 3382 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-7fc8n"
Sep 4 17:38:06.890237 kubelet[3382]: E0904 17:38:06.890226 3382 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-7fc8n"
Sep 4 17:38:06.890484 containerd[1981]: time="2024-09-04T17:38:06.890427088Z" level=error msg="encountered an error cleaning up failed sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.890648 containerd[1981]: time="2024-09-04T17:38:06.890514397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8d99456c-4chwr,Uid:ea34c68a-28f8-4302-9bc8-31267e5610bf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.892208 kubelet[3382]: E0904 17:38:06.890559 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-7fc8n_kube-system(0e1f12cd-e8de-4552-9978-e9886ee78d4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-7fc8n_kube-system(0e1f12cd-e8de-4552-9978-e9886ee78d4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-7fc8n" podUID="0e1f12cd-e8de-4552-9978-e9886ee78d4e"
Sep 4 17:38:06.892208 kubelet[3382]: E0904 17:38:06.891344 3382 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:06.892208 kubelet[3382]: E0904 17:38:06.891473 3382 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b8d99456c-4chwr"
Sep 4 17:38:06.892606 kubelet[3382]: E0904 17:38:06.891627 3382 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b8d99456c-4chwr"
Sep 4 17:38:06.892606 kubelet[3382]: E0904 17:38:06.891856 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b8d99456c-4chwr_calico-system(ea34c68a-28f8-4302-9bc8-31267e5610bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b8d99456c-4chwr_calico-system(ea34c68a-28f8-4302-9bc8-31267e5610bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b8d99456c-4chwr" podUID="ea34c68a-28f8-4302-9bc8-31267e5610bf"
Sep 4 17:38:06.978375 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b-shm.mount: Deactivated successfully.
Sep 4 17:38:07.560268 kubelet[3382]: I0904 17:38:07.559492 3382 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b"
Sep 4 17:38:07.564284 kubelet[3382]: I0904 17:38:07.563184 3382 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07"
Sep 4 17:38:07.597525 containerd[1981]: time="2024-09-04T17:38:07.597480663Z" level=info msg="StopPodSandbox for \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\""
Sep 4 17:38:07.600224 containerd[1981]: time="2024-09-04T17:38:07.599088476Z" level=info msg="StopPodSandbox for \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\""
Sep 4 17:38:07.600422 kubelet[3382]: I0904 17:38:07.599842 3382 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824"
Sep 4 17:38:07.600799 containerd[1981]: time="2024-09-04T17:38:07.600768044Z" level=info msg="Ensure that sandbox 966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07 in task-service has been cleanup successfully"
Sep 4 17:38:07.601363 containerd[1981]: time="2024-09-04T17:38:07.601261666Z" level=info msg="Ensure that sandbox a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b in task-service has been cleanup successfully"
Sep 4 17:38:07.603603 containerd[1981]: time="2024-09-04T17:38:07.603571471Z" level=info msg="StopPodSandbox for \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\""
Sep 4 17:38:07.605848 containerd[1981]: time="2024-09-04T17:38:07.604959008Z" level=info msg="Ensure that sandbox b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824 in task-service has been cleanup successfully"
Sep 4 17:38:07.608439 kubelet[3382]: I0904 17:38:07.608172 3382 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338"
Sep 4 17:38:07.611728 containerd[1981]: time="2024-09-04T17:38:07.611366793Z" level=info msg="StopPodSandbox for \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\""
Sep 4 17:38:07.611728 containerd[1981]: time="2024-09-04T17:38:07.611575781Z" level=info msg="Ensure that sandbox b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338 in task-service has been cleanup successfully"
Sep 4 17:38:07.851637 containerd[1981]: time="2024-09-04T17:38:07.850481361Z" level=error msg="StopPodSandbox for \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\" failed" error="failed to destroy network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:07.858306 kubelet[3382]: E0904 17:38:07.858258 3382 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07"
Sep 4 17:38:07.868617 kubelet[3382]: E0904 17:38:07.868233 3382 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07"}
Sep 4 17:38:07.869220 kubelet[3382]: E0904 17:38:07.868963 3382 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e1f12cd-e8de-4552-9978-e9886ee78d4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:38:07.869220 kubelet[3382]: E0904 17:38:07.869036 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e1f12cd-e8de-4552-9978-e9886ee78d4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-7fc8n" podUID="0e1f12cd-e8de-4552-9978-e9886ee78d4e"
Sep 4 17:38:07.890572 containerd[1981]: time="2024-09-04T17:38:07.890501105Z" level=error msg="StopPodSandbox for \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\" failed" error="failed to destroy network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:07.891050 kubelet[3382]: E0904 17:38:07.890994 3382 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824"
Sep 4 17:38:07.891394 kubelet[3382]: E0904 17:38:07.891259 3382 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824"}
Sep 4 17:38:07.891394 kubelet[3382]: E0904 17:38:07.891317 3382 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea34c68a-28f8-4302-9bc8-31267e5610bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:38:07.891394 kubelet[3382]: E0904 17:38:07.891373 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea34c68a-28f8-4302-9bc8-31267e5610bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b8d99456c-4chwr" podUID="ea34c68a-28f8-4302-9bc8-31267e5610bf"
Sep 4 17:38:07.898405 containerd[1981]: time="2024-09-04T17:38:07.898314333Z" level=error msg="StopPodSandbox for \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\" failed" error="failed to destroy network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:07.898664 kubelet[3382]: E0904 17:38:07.898639 3382 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b"
Sep 4 17:38:07.898766 kubelet[3382]: E0904 17:38:07.898687 3382 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b"}
Sep 4 17:38:07.898766 kubelet[3382]: E0904 17:38:07.898746 3382 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69a2d1ad-1774-4773-ab86-418e1662aaff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:38:07.898907 kubelet[3382]: E0904 17:38:07.898789 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69a2d1ad-1774-4773-ab86-418e1662aaff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qrdz7" podUID="69a2d1ad-1774-4773-ab86-418e1662aaff"
Sep 4 17:38:07.899718 containerd[1981]: time="2024-09-04T17:38:07.899321675Z" level=error msg="StopPodSandbox for \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\" failed" error="failed to destroy network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:38:07.899815 kubelet[3382]: E0904 17:38:07.899569 3382 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338"
Sep 4 17:38:07.899815 kubelet[3382]: E0904 17:38:07.899604 3382 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338"}
Sep 4 17:38:07.899815 kubelet[3382]: E0904 17:38:07.899654 3382 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dfc3a0dc-44f6-439b-9637-624a1c16320c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:38:07.899815 kubelet[3382]: E0904 17:38:07.899694 3382 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dfc3a0dc-44f6-439b-9637-624a1c16320c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xcl8p" podUID="dfc3a0dc-44f6-439b-9637-624a1c16320c"
Sep 4 17:38:11.870512 kubelet[3382]: I0904 17:38:11.870410 3382 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 17:38:13.899744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1624682160.mount: Deactivated successfully.
Sep 4 17:38:14.229369 containerd[1981]: time="2024-09-04T17:38:14.228084740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 7.670350097s"
Sep 4 17:38:14.229369 containerd[1981]: time="2024-09-04T17:38:14.228152410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\""
Sep 4 17:38:14.229369 containerd[1981]: time="2024-09-04T17:38:14.158110759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564"
Sep 4 17:38:14.281545 containerd[1981]: time="2024-09-04T17:38:14.281310385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:38:14.356713 containerd[1981]: time="2024-09-04T17:38:14.352184221Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:38:14.398600 containerd[1981]: time="2024-09-04T17:38:14.397989169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:38:14.444025 containerd[1981]: time="2024-09-04T17:38:14.443979412Z" level=info msg="CreateContainer within sandbox \"6a0a4bec5dfa3d11b4c9fdca35f2ad42859b5b8d0cf938e3080017610159d75f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 4 17:38:14.552364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931163982.mount: Deactivated successfully.
Sep 4 17:38:14.610741 containerd[1981]: time="2024-09-04T17:38:14.610691420Z" level=info msg="CreateContainer within sandbox \"6a0a4bec5dfa3d11b4c9fdca35f2ad42859b5b8d0cf938e3080017610159d75f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"93e56748b3621468b9c8fd1773c6c50f975b85da555361010eb175ad63a80565\""
Sep 4 17:38:14.617976 containerd[1981]: time="2024-09-04T17:38:14.617921251Z" level=info msg="StartContainer for \"93e56748b3621468b9c8fd1773c6c50f975b85da555361010eb175ad63a80565\""
Sep 4 17:38:14.802482 systemd[1]: Started cri-containerd-93e56748b3621468b9c8fd1773c6c50f975b85da555361010eb175ad63a80565.scope - libcontainer container 93e56748b3621468b9c8fd1773c6c50f975b85da555361010eb175ad63a80565.
Sep 4 17:38:14.884504 containerd[1981]: time="2024-09-04T17:38:14.884453943Z" level=info msg="StartContainer for \"93e56748b3621468b9c8fd1773c6c50f975b85da555361010eb175ad63a80565\" returns successfully"
Sep 4 17:38:15.190873 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 4 17:38:15.191644 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Sep 4 17:38:15.802765 systemd[1]: run-containerd-runc-k8s.io-93e56748b3621468b9c8fd1773c6c50f975b85da555361010eb175ad63a80565-runc.81NmAX.mount: Deactivated successfully.
Sep 4 17:38:16.745386 systemd[1]: run-containerd-runc-k8s.io-93e56748b3621468b9c8fd1773c6c50f975b85da555361010eb175ad63a80565-runc.S3eyWG.mount: Deactivated successfully.
Sep 4 17:38:17.507878 kernel: bpftool[4541]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 4 17:38:18.009018 systemd-networkd[1825]: vxlan.calico: Link UP
Sep 4 17:38:18.010883 (udev-worker)[4344]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:38:18.011046 systemd-networkd[1825]: vxlan.calico: Gained carrier
Sep 4 17:38:18.036678 (udev-worker)[4346]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:38:18.039827 (udev-worker)[4345]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:38:18.350865 containerd[1981]: time="2024-09-04T17:38:18.350754584Z" level=info msg="StopPodSandbox for \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\""
Sep 4 17:38:18.528805 kubelet[3382]: I0904 17:38:18.528684 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2k582" podStartSLOduration=5.835008718 podCreationTimestamp="2024-09-04 17:37:52 +0000 UTC" firstStartedPulling="2024-09-04 17:37:53.565753349 +0000 UTC m=+21.469514346" lastFinishedPulling="2024-09-04 17:38:14.236377489 +0000 UTC m=+42.140138489" observedRunningTime="2024-09-04 17:38:15.712054294 +0000 UTC m=+43.615815299" watchObservedRunningTime="2024-09-04 17:38:18.505632861 +0000 UTC m=+46.409393867"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.505 [INFO][4644] k8s.go 608: Cleaning up netns ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.515 [INFO][4644] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" iface="eth0" netns="/var/run/netns/cni-091fc88a-028c-ce21-ffa6-b719e45a3926"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.517 [INFO][4644] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" iface="eth0" netns="/var/run/netns/cni-091fc88a-028c-ce21-ffa6-b719e45a3926"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.519 [INFO][4644] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" iface="eth0" netns="/var/run/netns/cni-091fc88a-028c-ce21-ffa6-b719e45a3926"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.519 [INFO][4644] k8s.go 615: Releasing IP address(es) ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.519 [INFO][4644] utils.go 188: Calico CNI releasing IP address ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.694 [INFO][4651] ipam_plugin.go 417: Releasing address using handleID ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" HandleID="k8s-pod-network.b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.695 [INFO][4651] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.696 [INFO][4651] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.717 [WARNING][4651] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" HandleID="k8s-pod-network.b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.717 [INFO][4651] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" HandleID="k8s-pod-network.b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0"
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.719 [INFO][4651] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:38:18.727886 containerd[1981]: 2024-09-04 17:38:18.723 [INFO][4644] k8s.go 621: Teardown processing complete. ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338"
Sep 4 17:38:18.739030 systemd[1]: run-netns-cni\x2d091fc88a\x2d028c\x2dce21\x2dffa6\x2db719e45a3926.mount: Deactivated successfully.
Sep 4 17:38:18.749155 containerd[1981]: time="2024-09-04T17:38:18.749087189Z" level=info msg="TearDown network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\" successfully"
Sep 4 17:38:18.749155 containerd[1981]: time="2024-09-04T17:38:18.749134225Z" level=info msg="StopPodSandbox for \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\" returns successfully"
Sep 4 17:38:18.750229 containerd[1981]: time="2024-09-04T17:38:18.750176370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xcl8p,Uid:dfc3a0dc-44f6-439b-9637-624a1c16320c,Namespace:kube-system,Attempt:1,}"
Sep 4 17:38:18.989659 (udev-worker)[4602]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:38:18.991063 systemd-networkd[1825]: caliadafa3be521: Link UP
Sep 4 17:38:18.998000 systemd-networkd[1825]: caliadafa3be521: Gained carrier
Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.864 [INFO][4657] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0 coredns-5dd5756b68- kube-system dfc3a0dc-44f6-439b-9637-624a1c16320c 721 0 2024-09-04 17:37:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-194 coredns-5dd5756b68-xcl8p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliadafa3be521 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Namespace="kube-system" Pod="coredns-5dd5756b68-xcl8p" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-"
Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.864 [INFO][4657] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Namespace="kube-system" Pod="coredns-5dd5756b68-xcl8p" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0"
Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.913 [INFO][4669] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" HandleID="k8s-pod-network.2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0"
Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.932 [INFO][4669] ipam_plugin.go 270: Auto assigning IP ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149"
HandleID="k8s-pod-network.2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-194", "pod":"coredns-5dd5756b68-xcl8p", "timestamp":"2024-09-04 17:38:18.913576779 +0000 UTC"}, Hostname:"ip-172-31-29-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.933 [INFO][4669] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.935 [INFO][4669] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.935 [INFO][4669] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-194' Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.938 [INFO][4669] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" host="ip-172-31-29-194" Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.950 [INFO][4669] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-194" Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.956 [INFO][4669] ipam.go 489: Trying affinity for 192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.958 [INFO][4669] ipam.go 155: Attempting to load block cidr=192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.960 [INFO][4669] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 
17:38:18.961 [INFO][4669] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.7.0/26 handle="k8s-pod-network.2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" host="ip-172-31-29-194" Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.963 [INFO][4669] ipam.go 1685: Creating new handle: k8s-pod-network.2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149 Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.967 [INFO][4669] ipam.go 1203: Writing block in order to claim IPs block=192.168.7.0/26 handle="k8s-pod-network.2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" host="ip-172-31-29-194" Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.974 [INFO][4669] ipam.go 1216: Successfully claimed IPs: [192.168.7.1/26] block=192.168.7.0/26 handle="k8s-pod-network.2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" host="ip-172-31-29-194" Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.974 [INFO][4669] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.7.1/26] handle="k8s-pod-network.2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" host="ip-172-31-29-194" Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.974 [INFO][4669] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:38:19.035703 containerd[1981]: 2024-09-04 17:38:18.974 [INFO][4669] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.7.1/26] IPv6=[] ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" HandleID="k8s-pod-network.2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:19.039110 containerd[1981]: 2024-09-04 17:38:18.983 [INFO][4657] k8s.go 386: Populated endpoint ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Namespace="kube-system" Pod="coredns-5dd5756b68-xcl8p" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"dfc3a0dc-44f6-439b-9637-624a1c16320c", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"", Pod:"coredns-5dd5756b68-xcl8p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadafa3be521", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:19.039110 containerd[1981]: 2024-09-04 17:38:18.983 [INFO][4657] k8s.go 387: Calico CNI using IPs: [192.168.7.1/32] ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Namespace="kube-system" Pod="coredns-5dd5756b68-xcl8p" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:19.039110 containerd[1981]: 2024-09-04 17:38:18.983 [INFO][4657] dataplane_linux.go 68: Setting the host side veth name to caliadafa3be521 ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Namespace="kube-system" Pod="coredns-5dd5756b68-xcl8p" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:19.039110 containerd[1981]: 2024-09-04 17:38:19.001 [INFO][4657] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Namespace="kube-system" Pod="coredns-5dd5756b68-xcl8p" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:19.039110 containerd[1981]: 2024-09-04 17:38:19.004 [INFO][4657] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Namespace="kube-system" Pod="coredns-5dd5756b68-xcl8p" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"dfc3a0dc-44f6-439b-9637-624a1c16320c", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149", Pod:"coredns-5dd5756b68-xcl8p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadafa3be521", MAC:"fa:ca:17:8b:f7:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:19.039110 containerd[1981]: 2024-09-04 17:38:19.028 [INFO][4657] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149" Namespace="kube-system" 
Pod="coredns-5dd5756b68-xcl8p" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:19.114756 containerd[1981]: time="2024-09-04T17:38:19.114647495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:19.114756 containerd[1981]: time="2024-09-04T17:38:19.114715851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:19.114974 containerd[1981]: time="2024-09-04T17:38:19.114736036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:19.114974 containerd[1981]: time="2024-09-04T17:38:19.114867363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:19.167447 systemd[1]: run-containerd-runc-k8s.io-2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149-runc.5YpPh0.mount: Deactivated successfully. Sep 4 17:38:19.177415 systemd[1]: Started cri-containerd-2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149.scope - libcontainer container 2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149. 
Sep 4 17:38:19.234302 containerd[1981]: time="2024-09-04T17:38:19.234231050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xcl8p,Uid:dfc3a0dc-44f6-439b-9637-624a1c16320c,Namespace:kube-system,Attempt:1,} returns sandbox id \"2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149\"" Sep 4 17:38:19.238839 containerd[1981]: time="2024-09-04T17:38:19.238718800Z" level=info msg="CreateContainer within sandbox \"2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:38:19.296169 containerd[1981]: time="2024-09-04T17:38:19.296054303Z" level=info msg="CreateContainer within sandbox \"2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a3af21b1cfd02a29750db2ef62f9e7b49c15dd06ef20fa03635057cb930e1fb\"" Sep 4 17:38:19.297317 containerd[1981]: time="2024-09-04T17:38:19.297065760Z" level=info msg="StartContainer for \"4a3af21b1cfd02a29750db2ef62f9e7b49c15dd06ef20fa03635057cb930e1fb\"" Sep 4 17:38:19.339424 systemd[1]: Started cri-containerd-4a3af21b1cfd02a29750db2ef62f9e7b49c15dd06ef20fa03635057cb930e1fb.scope - libcontainer container 4a3af21b1cfd02a29750db2ef62f9e7b49c15dd06ef20fa03635057cb930e1fb. 
Sep 4 17:38:19.347911 containerd[1981]: time="2024-09-04T17:38:19.347869752Z" level=info msg="StopPodSandbox for \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\"" Sep 4 17:38:19.414541 systemd-networkd[1825]: vxlan.calico: Gained IPv6LL Sep 4 17:38:19.433473 containerd[1981]: time="2024-09-04T17:38:19.431377403Z" level=info msg="StartContainer for \"4a3af21b1cfd02a29750db2ef62f9e7b49c15dd06ef20fa03635057cb930e1fb\" returns successfully" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.455 [INFO][4766] k8s.go 608: Cleaning up netns ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.455 [INFO][4766] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" iface="eth0" netns="/var/run/netns/cni-c3ee0ac5-cc14-50c7-2cff-d56ca6b4be93" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.457 [INFO][4766] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" iface="eth0" netns="/var/run/netns/cni-c3ee0ac5-cc14-50c7-2cff-d56ca6b4be93" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.458 [INFO][4766] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" iface="eth0" netns="/var/run/netns/cni-c3ee0ac5-cc14-50c7-2cff-d56ca6b4be93" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.458 [INFO][4766] k8s.go 615: Releasing IP address(es) ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.458 [INFO][4766] utils.go 188: Calico CNI releasing IP address ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.510 [INFO][4779] ipam_plugin.go 417: Releasing address using handleID ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" HandleID="k8s-pod-network.966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.510 [INFO][4779] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.510 [INFO][4779] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.525 [WARNING][4779] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" HandleID="k8s-pod-network.966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.525 [INFO][4779] ipam_plugin.go 445: Releasing address using workloadID ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" HandleID="k8s-pod-network.966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.527 [INFO][4779] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:19.531725 containerd[1981]: 2024-09-04 17:38:19.529 [INFO][4766] k8s.go 621: Teardown processing complete. ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:19.532799 containerd[1981]: time="2024-09-04T17:38:19.531895584Z" level=info msg="TearDown network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\" successfully" Sep 4 17:38:19.532799 containerd[1981]: time="2024-09-04T17:38:19.531945662Z" level=info msg="StopPodSandbox for \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\" returns successfully" Sep 4 17:38:19.533121 containerd[1981]: time="2024-09-04T17:38:19.533093205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7fc8n,Uid:0e1f12cd-e8de-4552-9978-e9886ee78d4e,Namespace:kube-system,Attempt:1,}" Sep 4 17:38:19.776711 systemd-networkd[1825]: cali38bd30fc0d2: Link UP Sep 4 17:38:19.778029 systemd-networkd[1825]: cali38bd30fc0d2: Gained carrier Sep 4 17:38:19.787516 systemd[1]: run-netns-cni\x2dc3ee0ac5\x2dcc14\x2d50c7\x2d2cff\x2dd56ca6b4be93.mount: Deactivated successfully. 
Sep 4 17:38:19.807277 kubelet[3382]: I0904 17:38:19.807005 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xcl8p" podStartSLOduration=34.806941418 podCreationTimestamp="2024-09-04 17:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:38:19.761788757 +0000 UTC m=+47.665549763" watchObservedRunningTime="2024-09-04 17:38:19.806941418 +0000 UTC m=+47.710702426" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.621 [INFO][4789] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0 coredns-5dd5756b68- kube-system 0e1f12cd-e8de-4552-9978-e9886ee78d4e 732 0 2024-09-04 17:37:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-194 coredns-5dd5756b68-7fc8n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali38bd30fc0d2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Namespace="kube-system" Pod="coredns-5dd5756b68-7fc8n" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.623 [INFO][4789] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Namespace="kube-system" Pod="coredns-5dd5756b68-7fc8n" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.689 [INFO][4803] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" 
HandleID="k8s-pod-network.31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.705 [INFO][4803] ipam_plugin.go 270: Auto assigning IP ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" HandleID="k8s-pod-network.31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290110), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-194", "pod":"coredns-5dd5756b68-7fc8n", "timestamp":"2024-09-04 17:38:19.689954165 +0000 UTC"}, Hostname:"ip-172-31-29-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.705 [INFO][4803] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.705 [INFO][4803] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.705 [INFO][4803] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-194' Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.708 [INFO][4803] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" host="ip-172-31-29-194" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.726 [INFO][4803] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-194" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.739 [INFO][4803] ipam.go 489: Trying affinity for 192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.742 [INFO][4803] ipam.go 155: Attempting to load block cidr=192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.746 [INFO][4803] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.746 [INFO][4803] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.7.0/26 handle="k8s-pod-network.31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" host="ip-172-31-29-194" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.749 [INFO][4803] ipam.go 1685: Creating new handle: k8s-pod-network.31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9 Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.755 [INFO][4803] ipam.go 1203: Writing block in order to claim IPs block=192.168.7.0/26 handle="k8s-pod-network.31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" host="ip-172-31-29-194" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.765 [INFO][4803] ipam.go 1216: Successfully claimed IPs: [192.168.7.2/26] block=192.168.7.0/26 
handle="k8s-pod-network.31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" host="ip-172-31-29-194" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.766 [INFO][4803] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.7.2/26] handle="k8s-pod-network.31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" host="ip-172-31-29-194" Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.766 [INFO][4803] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:19.816767 containerd[1981]: 2024-09-04 17:38:19.766 [INFO][4803] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.7.2/26] IPv6=[] ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" HandleID="k8s-pod-network.31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.818072 containerd[1981]: 2024-09-04 17:38:19.771 [INFO][4789] k8s.go 386: Populated endpoint ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Namespace="kube-system" Pod="coredns-5dd5756b68-7fc8n" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0e1f12cd-e8de-4552-9978-e9886ee78d4e", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"", Pod:"coredns-5dd5756b68-7fc8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38bd30fc0d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:19.818072 containerd[1981]: 2024-09-04 17:38:19.771 [INFO][4789] k8s.go 387: Calico CNI using IPs: [192.168.7.2/32] ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Namespace="kube-system" Pod="coredns-5dd5756b68-7fc8n" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.818072 containerd[1981]: 2024-09-04 17:38:19.772 [INFO][4789] dataplane_linux.go 68: Setting the host side veth name to cali38bd30fc0d2 ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Namespace="kube-system" Pod="coredns-5dd5756b68-7fc8n" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.818072 containerd[1981]: 2024-09-04 17:38:19.776 [INFO][4789] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Namespace="kube-system" Pod="coredns-5dd5756b68-7fc8n" 
WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.818072 containerd[1981]: 2024-09-04 17:38:19.777 [INFO][4789] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Namespace="kube-system" Pod="coredns-5dd5756b68-7fc8n" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0e1f12cd-e8de-4552-9978-e9886ee78d4e", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9", Pod:"coredns-5dd5756b68-7fc8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38bd30fc0d2", MAC:"2e:28:1c:00:a2:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:19.818072 containerd[1981]: 2024-09-04 17:38:19.804 [INFO][4789] k8s.go 500: Wrote updated endpoint to datastore ContainerID="31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9" Namespace="kube-system" Pod="coredns-5dd5756b68-7fc8n" WorkloadEndpoint="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:19.986675 containerd[1981]: time="2024-09-04T17:38:19.986167117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:19.986675 containerd[1981]: time="2024-09-04T17:38:19.986691717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:19.986675 containerd[1981]: time="2024-09-04T17:38:19.986728894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:19.987343 containerd[1981]: time="2024-09-04T17:38:19.986859282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:20.025671 systemd[1]: Started cri-containerd-31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9.scope - libcontainer container 31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9. 
Sep 4 17:38:20.147889 containerd[1981]: time="2024-09-04T17:38:20.147839083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7fc8n,Uid:0e1f12cd-e8de-4552-9978-e9886ee78d4e,Namespace:kube-system,Attempt:1,} returns sandbox id \"31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9\""
Sep 4 17:38:20.158768 containerd[1981]: time="2024-09-04T17:38:20.158724204Z" level=info msg="CreateContainer within sandbox \"31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 17:38:20.184879 systemd-networkd[1825]: caliadafa3be521: Gained IPv6LL
Sep 4 17:38:20.250920 containerd[1981]: time="2024-09-04T17:38:20.250843110Z" level=info msg="CreateContainer within sandbox \"31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad96f462bdca6eb4b2c74e31d687a43ffcb962b72efcf415b9af264c20e9e34a\""
Sep 4 17:38:20.255866 containerd[1981]: time="2024-09-04T17:38:20.255383065Z" level=info msg="StartContainer for \"ad96f462bdca6eb4b2c74e31d687a43ffcb962b72efcf415b9af264c20e9e34a\""
Sep 4 17:38:20.323954 systemd[1]: Started cri-containerd-ad96f462bdca6eb4b2c74e31d687a43ffcb962b72efcf415b9af264c20e9e34a.scope - libcontainer container ad96f462bdca6eb4b2c74e31d687a43ffcb962b72efcf415b9af264c20e9e34a.
Sep 4 17:38:20.397745 containerd[1981]: time="2024-09-04T17:38:20.397690540Z" level=info msg="StartContainer for \"ad96f462bdca6eb4b2c74e31d687a43ffcb962b72efcf415b9af264c20e9e34a\" returns successfully"
Sep 4 17:38:20.793719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059858779.mount: Deactivated successfully.
Sep 4 17:38:20.820387 kubelet[3382]: I0904 17:38:20.820331 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7fc8n" podStartSLOduration=35.820277961 podCreationTimestamp="2024-09-04 17:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:38:20.777993254 +0000 UTC m=+48.681754270" watchObservedRunningTime="2024-09-04 17:38:20.820277961 +0000 UTC m=+48.724038965"
Sep 4 17:38:21.589363 systemd-networkd[1825]: cali38bd30fc0d2: Gained IPv6LL
Sep 4 17:38:21.938545 systemd[1]: Started sshd@7-172.31.29.194:22-139.178.68.195:51418.service - OpenSSH per-connection server daemon (139.178.68.195:51418).
Sep 4 17:38:22.161341 sshd[4916]: Accepted publickey for core from 139.178.68.195 port 51418 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:38:22.168019 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:38:22.181148 systemd-logind[1960]: New session 8 of user core.
Sep 4 17:38:22.184420 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:38:22.344755 containerd[1981]: time="2024-09-04T17:38:22.343962401Z" level=info msg="StopPodSandbox for \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\""
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.445 [INFO][4938] k8s.go 608: Cleaning up netns ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b"
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.451 [INFO][4938] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" iface="eth0" netns="/var/run/netns/cni-c74e2ae0-3318-e786-db8a-f0f1587f07f1"
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.452 [INFO][4938] dataplane_linux.go 541: Entered netns, deleting veth.
ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" iface="eth0" netns="/var/run/netns/cni-c74e2ae0-3318-e786-db8a-f0f1587f07f1"
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.453 [INFO][4938] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" iface="eth0" netns="/var/run/netns/cni-c74e2ae0-3318-e786-db8a-f0f1587f07f1"
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.453 [INFO][4938] k8s.go 615: Releasing IP address(es) ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b"
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.453 [INFO][4938] utils.go 188: Calico CNI releasing IP address ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b"
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.543 [INFO][4947] ipam_plugin.go 417: Releasing address using handleID ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" HandleID="k8s-pod-network.a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0"
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.543 [INFO][4947] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.543 [INFO][4947] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.559 [WARNING][4947] ipam_plugin.go 434: Asked to release address but it doesn't exist.
Ignoring ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" HandleID="k8s-pod-network.a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0"
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.559 [INFO][4947] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" HandleID="k8s-pod-network.a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0"
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.561 [INFO][4947] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:38:22.568666 containerd[1981]: 2024-09-04 17:38:22.565 [INFO][4938] k8s.go 621: Teardown processing complete. ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b"
Sep 4 17:38:22.573438 containerd[1981]: time="2024-09-04T17:38:22.569310180Z" level=info msg="TearDown network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\" successfully"
Sep 4 17:38:22.573438 containerd[1981]: time="2024-09-04T17:38:22.569345082Z" level=info msg="StopPodSandbox for \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\" returns successfully"
Sep 4 17:38:22.573438 containerd[1981]: time="2024-09-04T17:38:22.572487157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qrdz7,Uid:69a2d1ad-1774-4773-ab86-418e1662aaff,Namespace:calico-system,Attempt:1,}"
Sep 4 17:38:22.579181 systemd[1]: run-netns-cni\x2dc74e2ae0\x2d3318\x2de786\x2ddb8a\x2df0f1587f07f1.mount: Deactivated successfully.
Sep 4 17:38:22.949705 systemd-networkd[1825]: calib901bdce216: Link UP Sep 4 17:38:22.953375 systemd-networkd[1825]: calib901bdce216: Gained carrier Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.724 [INFO][4956] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0 csi-node-driver- calico-system 69a2d1ad-1774-4773-ab86-418e1662aaff 792 0 2024-09-04 17:37:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-29-194 csi-node-driver-qrdz7 eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib901bdce216 [] []}} ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Namespace="calico-system" Pod="csi-node-driver-qrdz7" WorkloadEndpoint="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.730 [INFO][4956] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Namespace="calico-system" Pod="csi-node-driver-qrdz7" WorkloadEndpoint="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.845 [INFO][4967] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" HandleID="k8s-pod-network.e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.872 [INFO][4967] ipam_plugin.go 270: Auto assigning IP 
ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" HandleID="k8s-pod-network.e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f0630), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-194", "pod":"csi-node-driver-qrdz7", "timestamp":"2024-09-04 17:38:22.845505498 +0000 UTC"}, Hostname:"ip-172-31-29-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.872 [INFO][4967] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.872 [INFO][4967] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.872 [INFO][4967] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-194' Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.881 [INFO][4967] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" host="ip-172-31-29-194" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.893 [INFO][4967] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-194" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.902 [INFO][4967] ipam.go 489: Trying affinity for 192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.904 [INFO][4967] ipam.go 155: Attempting to load block cidr=192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.910 [INFO][4967] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.7.0/26 
host="ip-172-31-29-194" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.910 [INFO][4967] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.7.0/26 handle="k8s-pod-network.e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" host="ip-172-31-29-194" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.913 [INFO][4967] ipam.go 1685: Creating new handle: k8s-pod-network.e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030 Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.921 [INFO][4967] ipam.go 1203: Writing block in order to claim IPs block=192.168.7.0/26 handle="k8s-pod-network.e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" host="ip-172-31-29-194" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.935 [INFO][4967] ipam.go 1216: Successfully claimed IPs: [192.168.7.3/26] block=192.168.7.0/26 handle="k8s-pod-network.e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" host="ip-172-31-29-194" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.935 [INFO][4967] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.7.3/26] handle="k8s-pod-network.e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" host="ip-172-31-29-194" Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.935 [INFO][4967] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:38:23.033464 containerd[1981]: 2024-09-04 17:38:22.935 [INFO][4967] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.7.3/26] IPv6=[] ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" HandleID="k8s-pod-network.e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:23.037455 containerd[1981]: 2024-09-04 17:38:22.940 [INFO][4956] k8s.go 386: Populated endpoint ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Namespace="calico-system" Pod="csi-node-driver-qrdz7" WorkloadEndpoint="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69a2d1ad-1774-4773-ab86-418e1662aaff", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"", Pod:"csi-node-driver-qrdz7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.7.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib901bdce216", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:23.037455 containerd[1981]: 2024-09-04 17:38:22.940 [INFO][4956] k8s.go 387: Calico CNI using IPs: [192.168.7.3/32] ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Namespace="calico-system" Pod="csi-node-driver-qrdz7" WorkloadEndpoint="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:23.037455 containerd[1981]: 2024-09-04 17:38:22.940 [INFO][4956] dataplane_linux.go 68: Setting the host side veth name to calib901bdce216 ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Namespace="calico-system" Pod="csi-node-driver-qrdz7" WorkloadEndpoint="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:23.037455 containerd[1981]: 2024-09-04 17:38:22.953 [INFO][4956] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Namespace="calico-system" Pod="csi-node-driver-qrdz7" WorkloadEndpoint="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:23.037455 containerd[1981]: 2024-09-04 17:38:22.954 [INFO][4956] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Namespace="calico-system" Pod="csi-node-driver-qrdz7" WorkloadEndpoint="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69a2d1ad-1774-4773-ab86-418e1662aaff", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 53, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030", Pod:"csi-node-driver-qrdz7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.7.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib901bdce216", MAC:"4e:30:7c:d3:37:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:23.037455 containerd[1981]: 2024-09-04 17:38:22.996 [INFO][4956] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030" Namespace="calico-system" Pod="csi-node-driver-qrdz7" WorkloadEndpoint="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:23.115666 sshd[4916]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:23.126606 systemd[1]: sshd@7-172.31.29.194:22-139.178.68.195:51418.service: Deactivated successfully. Sep 4 17:38:23.134469 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:38:23.138694 containerd[1981]: time="2024-09-04T17:38:23.138358196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:38:23.138694 containerd[1981]: time="2024-09-04T17:38:23.138640616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:38:23.139362 containerd[1981]: time="2024-09-04T17:38:23.138676017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:38:23.139915 containerd[1981]: time="2024-09-04T17:38:23.139151901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:38:23.143414 systemd-logind[1960]: Session 8 logged out. Waiting for processes to exit.
Sep 4 17:38:23.157698 systemd-logind[1960]: Removed session 8.
Sep 4 17:38:23.193450 systemd[1]: Started cri-containerd-e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030.scope - libcontainer container e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030.
Sep 4 17:38:23.249690 containerd[1981]: time="2024-09-04T17:38:23.249650182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qrdz7,Uid:69a2d1ad-1774-4773-ab86-418e1662aaff,Namespace:calico-system,Attempt:1,} returns sandbox id \"e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030\""
Sep 4 17:38:23.253150 containerd[1981]: time="2024-09-04T17:38:23.252629540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\""
Sep 4 17:38:23.350891 containerd[1981]: time="2024-09-04T17:38:23.350835937Z" level=info msg="StopPodSandbox for \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\""
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.481 [INFO][5047] k8s.go 608: Cleaning up netns ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824"
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.482 [INFO][5047] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" iface="eth0" netns="/var/run/netns/cni-90d7a1c3-89e0-c4cb-b789-c7b5e9079230"
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.483 [INFO][5047] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" iface="eth0" netns="/var/run/netns/cni-90d7a1c3-89e0-c4cb-b789-c7b5e9079230"
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.484 [INFO][5047] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do.
ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" iface="eth0" netns="/var/run/netns/cni-90d7a1c3-89e0-c4cb-b789-c7b5e9079230"
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.484 [INFO][5047] k8s.go 615: Releasing IP address(es) ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824"
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.484 [INFO][5047] utils.go 188: Calico CNI releasing IP address ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824"
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.541 [INFO][5059] ipam_plugin.go 417: Releasing address using handleID ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" HandleID="k8s-pod-network.b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0"
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.542 [INFO][5059] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.543 [INFO][5059] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.556 [WARNING][5059] ipam_plugin.go 434: Asked to release address but it doesn't exist.
Ignoring ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" HandleID="k8s-pod-network.b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0"
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.557 [INFO][5059] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" HandleID="k8s-pod-network.b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0"
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.560 [INFO][5059] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:38:23.567667 containerd[1981]: 2024-09-04 17:38:23.563 [INFO][5047] k8s.go 621: Teardown processing complete. ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824"
Sep 4 17:38:23.569089 containerd[1981]: time="2024-09-04T17:38:23.567804424Z" level=info msg="TearDown network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\" successfully"
Sep 4 17:38:23.569089 containerd[1981]: time="2024-09-04T17:38:23.567852526Z" level=info msg="StopPodSandbox for \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\" returns successfully"
Sep 4 17:38:23.570455 containerd[1981]: time="2024-09-04T17:38:23.569543756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8d99456c-4chwr,Uid:ea34c68a-28f8-4302-9bc8-31267e5610bf,Namespace:calico-system,Attempt:1,}"
Sep 4 17:38:23.582782 systemd[1]: run-containerd-runc-k8s.io-e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030-runc.AEnOnf.mount: Deactivated successfully.
Sep 4 17:38:23.583009 systemd[1]: run-netns-cni\x2d90d7a1c3\x2d89e0\x2dc4cb\x2db789\x2dc7b5e9079230.mount: Deactivated successfully.
Sep 4 17:38:23.874956 systemd-networkd[1825]: cali58059f63258: Link UP Sep 4 17:38:23.876592 systemd-networkd[1825]: cali58059f63258: Gained carrier Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.737 [INFO][5066] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0 calico-kube-controllers-7b8d99456c- calico-system ea34c68a-28f8-4302-9bc8-31267e5610bf 802 0 2024-09-04 17:37:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b8d99456c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-194 calico-kube-controllers-7b8d99456c-4chwr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali58059f63258 [] []}} ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Namespace="calico-system" Pod="calico-kube-controllers-7b8d99456c-4chwr" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.737 [INFO][5066] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Namespace="calico-system" Pod="calico-kube-controllers-7b8d99456c-4chwr" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.804 [INFO][5078] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" HandleID="k8s-pod-network.a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:23.902608 
containerd[1981]: 2024-09-04 17:38:23.818 [INFO][5078] ipam_plugin.go 270: Auto assigning IP ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" HandleID="k8s-pod-network.a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000fc900), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-194", "pod":"calico-kube-controllers-7b8d99456c-4chwr", "timestamp":"2024-09-04 17:38:23.80466512 +0000 UTC"}, Hostname:"ip-172-31-29-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.818 [INFO][5078] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.818 [INFO][5078] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.818 [INFO][5078] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-194' Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.826 [INFO][5078] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" host="ip-172-31-29-194" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.835 [INFO][5078] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-194" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.843 [INFO][5078] ipam.go 489: Trying affinity for 192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.846 [INFO][5078] ipam.go 155: Attempting to load block cidr=192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.851 [INFO][5078] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.7.0/26 host="ip-172-31-29-194" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.851 [INFO][5078] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.7.0/26 handle="k8s-pod-network.a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" host="ip-172-31-29-194" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.853 [INFO][5078] ipam.go 1685: Creating new handle: k8s-pod-network.a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01 Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.858 [INFO][5078] ipam.go 1203: Writing block in order to claim IPs block=192.168.7.0/26 handle="k8s-pod-network.a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" host="ip-172-31-29-194" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.865 [INFO][5078] ipam.go 1216: Successfully claimed IPs: [192.168.7.4/26] block=192.168.7.0/26 
handle="k8s-pod-network.a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" host="ip-172-31-29-194" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.865 [INFO][5078] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.7.4/26] handle="k8s-pod-network.a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" host="ip-172-31-29-194" Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.865 [INFO][5078] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:23.902608 containerd[1981]: 2024-09-04 17:38:23.866 [INFO][5078] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.7.4/26] IPv6=[] ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" HandleID="k8s-pod-network.a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:23.905685 containerd[1981]: 2024-09-04 17:38:23.868 [INFO][5066] k8s.go 386: Populated endpoint ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Namespace="calico-system" Pod="calico-kube-controllers-7b8d99456c-4chwr" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0", GenerateName:"calico-kube-controllers-7b8d99456c-", Namespace:"calico-system", SelfLink:"", UID:"ea34c68a-28f8-4302-9bc8-31267e5610bf", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b8d99456c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"", Pod:"calico-kube-controllers-7b8d99456c-4chwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58059f63258", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:23.905685 containerd[1981]: 2024-09-04 17:38:23.868 [INFO][5066] k8s.go 387: Calico CNI using IPs: [192.168.7.4/32] ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Namespace="calico-system" Pod="calico-kube-controllers-7b8d99456c-4chwr" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:23.905685 containerd[1981]: 2024-09-04 17:38:23.868 [INFO][5066] dataplane_linux.go 68: Setting the host side veth name to cali58059f63258 ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Namespace="calico-system" Pod="calico-kube-controllers-7b8d99456c-4chwr" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:23.905685 containerd[1981]: 2024-09-04 17:38:23.872 [INFO][5066] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Namespace="calico-system" Pod="calico-kube-controllers-7b8d99456c-4chwr" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:23.905685 containerd[1981]: 2024-09-04 17:38:23.873 [INFO][5066] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Namespace="calico-system" Pod="calico-kube-controllers-7b8d99456c-4chwr" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0", GenerateName:"calico-kube-controllers-7b8d99456c-", Namespace:"calico-system", SelfLink:"", UID:"ea34c68a-28f8-4302-9bc8-31267e5610bf", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b8d99456c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01", Pod:"calico-kube-controllers-7b8d99456c-4chwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58059f63258", MAC:"7e:bd:52:f3:f1:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:23.905685 containerd[1981]: 2024-09-04 17:38:23.899 [INFO][5066] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01" Namespace="calico-system" Pod="calico-kube-controllers-7b8d99456c-4chwr" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:23.949102 containerd[1981]: time="2024-09-04T17:38:23.948912179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:23.949274 containerd[1981]: time="2024-09-04T17:38:23.949140642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:23.950228 containerd[1981]: time="2024-09-04T17:38:23.949233310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:23.950670 containerd[1981]: time="2024-09-04T17:38:23.950528790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:24.010585 systemd[1]: Started cri-containerd-a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01.scope - libcontainer container a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01. 
Sep 4 17:38:24.131764 containerd[1981]: time="2024-09-04T17:38:24.131626214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b8d99456c-4chwr,Uid:ea34c68a-28f8-4302-9bc8-31267e5610bf,Namespace:calico-system,Attempt:1,} returns sandbox id \"a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01\"" Sep 4 17:38:24.497749 systemd-networkd[1825]: calib901bdce216: Gained IPv6LL Sep 4 17:38:24.994295 containerd[1981]: time="2024-09-04T17:38:24.993604687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:25.000867 containerd[1981]: time="2024-09-04T17:38:24.997428433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:38:25.005143 containerd[1981]: time="2024-09-04T17:38:25.001818548Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:25.020027 containerd[1981]: time="2024-09-04T17:38:25.019892516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:25.026431 containerd[1981]: time="2024-09-04T17:38:25.026373757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.773695351s" Sep 4 17:38:25.026667 containerd[1981]: time="2024-09-04T17:38:25.026434125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference 
\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:38:25.036866 containerd[1981]: time="2024-09-04T17:38:25.035902975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:38:25.050677 containerd[1981]: time="2024-09-04T17:38:25.050527484Z" level=info msg="CreateContainer within sandbox \"e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:38:25.147925 containerd[1981]: time="2024-09-04T17:38:25.147871879Z" level=info msg="CreateContainer within sandbox \"e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"73a69611099391e6b9c117b08ebeedaadaf15251e692df96059b0b20d579f042\"" Sep 4 17:38:25.150098 containerd[1981]: time="2024-09-04T17:38:25.149125575Z" level=info msg="StartContainer for \"73a69611099391e6b9c117b08ebeedaadaf15251e692df96059b0b20d579f042\"" Sep 4 17:38:25.233481 systemd[1]: Started cri-containerd-73a69611099391e6b9c117b08ebeedaadaf15251e692df96059b0b20d579f042.scope - libcontainer container 73a69611099391e6b9c117b08ebeedaadaf15251e692df96059b0b20d579f042. 
Sep 4 17:38:25.309641 containerd[1981]: time="2024-09-04T17:38:25.309562244Z" level=info msg="StartContainer for \"73a69611099391e6b9c117b08ebeedaadaf15251e692df96059b0b20d579f042\" returns successfully" Sep 4 17:38:25.557617 systemd-networkd[1825]: cali58059f63258: Gained IPv6LL Sep 4 17:38:28.012812 ntpd[1953]: Listen normally on 7 vxlan.calico 192.168.7.0:123 Sep 4 17:38:28.013408 ntpd[1953]: 4 Sep 17:38:28 ntpd[1953]: Listen normally on 7 vxlan.calico 192.168.7.0:123 Sep 4 17:38:28.013408 ntpd[1953]: 4 Sep 17:38:28 ntpd[1953]: Listen normally on 8 vxlan.calico [fe80::64c5:1aff:fe01:e7bd%4]:123 Sep 4 17:38:28.013408 ntpd[1953]: 4 Sep 17:38:28 ntpd[1953]: Listen normally on 9 caliadafa3be521 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:38:28.013408 ntpd[1953]: 4 Sep 17:38:28 ntpd[1953]: Listen normally on 10 cali38bd30fc0d2 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:38:28.013408 ntpd[1953]: 4 Sep 17:38:28 ntpd[1953]: Listen normally on 11 calib901bdce216 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:38:28.013408 ntpd[1953]: 4 Sep 17:38:28 ntpd[1953]: Listen normally on 12 cali58059f63258 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:38:28.012901 ntpd[1953]: Listen normally on 8 vxlan.calico [fe80::64c5:1aff:fe01:e7bd%4]:123 Sep 4 17:38:28.012953 ntpd[1953]: Listen normally on 9 caliadafa3be521 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:38:28.012991 ntpd[1953]: Listen normally on 10 cali38bd30fc0d2 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:38:28.013030 ntpd[1953]: Listen normally on 11 calib901bdce216 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:38:28.013068 ntpd[1953]: Listen normally on 12 cali58059f63258 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:38:28.166136 systemd[1]: Started sshd@8-172.31.29.194:22-139.178.68.195:58672.service - OpenSSH per-connection server daemon (139.178.68.195:58672). 
Sep 4 17:38:28.274687 containerd[1981]: time="2024-09-04T17:38:28.273372531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:28.283758 containerd[1981]: time="2024-09-04T17:38:28.283635262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:38:28.286493 containerd[1981]: time="2024-09-04T17:38:28.285943128Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:28.304679 containerd[1981]: time="2024-09-04T17:38:28.304598580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:28.311753 containerd[1981]: time="2024-09-04T17:38:28.310295703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.274340972s" Sep 4 17:38:28.311753 containerd[1981]: time="2024-09-04T17:38:28.310357957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:38:28.319034 containerd[1981]: time="2024-09-04T17:38:28.318921807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:38:28.373447 containerd[1981]: time="2024-09-04T17:38:28.373400162Z" level=info msg="CreateContainer within sandbox 
\"a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:38:28.403474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460022959.mount: Deactivated successfully. Sep 4 17:38:28.407645 containerd[1981]: time="2024-09-04T17:38:28.407378350Z" level=info msg="CreateContainer within sandbox \"a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c9872396ff21855ab54ec6f3260508fa895d27c87571dd122a9195daf094203e\"" Sep 4 17:38:28.408448 containerd[1981]: time="2024-09-04T17:38:28.408412218Z" level=info msg="StartContainer for \"c9872396ff21855ab54ec6f3260508fa895d27c87571dd122a9195daf094203e\"" Sep 4 17:38:28.512461 systemd[1]: Started cri-containerd-c9872396ff21855ab54ec6f3260508fa895d27c87571dd122a9195daf094203e.scope - libcontainer container c9872396ff21855ab54ec6f3260508fa895d27c87571dd122a9195daf094203e. Sep 4 17:38:28.517297 sshd[5180]: Accepted publickey for core from 139.178.68.195 port 58672 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:38:28.521438 sshd[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:28.532463 systemd-logind[1960]: New session 9 of user core. Sep 4 17:38:28.541737 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:38:28.715089 containerd[1981]: time="2024-09-04T17:38:28.715029091Z" level=info msg="StartContainer for \"c9872396ff21855ab54ec6f3260508fa895d27c87571dd122a9195daf094203e\" returns successfully" Sep 4 17:38:29.146487 sshd[5180]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:29.157951 systemd[1]: sshd@8-172.31.29.194:22-139.178.68.195:58672.service: Deactivated successfully. Sep 4 17:38:29.165686 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:38:29.173263 systemd-logind[1960]: Session 9 logged out. 
Waiting for processes to exit. Sep 4 17:38:29.178202 systemd-logind[1960]: Removed session 9. Sep 4 17:38:29.995341 kubelet[3382]: I0904 17:38:29.995306 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b8d99456c-4chwr" podStartSLOduration=32.813999369 podCreationTimestamp="2024-09-04 17:37:53 +0000 UTC" firstStartedPulling="2024-09-04 17:38:24.134629383 +0000 UTC m=+52.038390370" lastFinishedPulling="2024-09-04 17:38:28.315457743 +0000 UTC m=+56.219218731" observedRunningTime="2024-09-04 17:38:28.871982488 +0000 UTC m=+56.775743494" watchObservedRunningTime="2024-09-04 17:38:29.99482773 +0000 UTC m=+57.898588738" Sep 4 17:38:30.520405 containerd[1981]: time="2024-09-04T17:38:30.520359185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:30.523947 containerd[1981]: time="2024-09-04T17:38:30.523790716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:38:30.526390 containerd[1981]: time="2024-09-04T17:38:30.526342156Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:30.533915 containerd[1981]: time="2024-09-04T17:38:30.533855992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:30.535072 containerd[1981]: time="2024-09-04T17:38:30.535028339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.216059679s" Sep 4 17:38:30.535072 containerd[1981]: time="2024-09-04T17:38:30.535068617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:38:30.545237 containerd[1981]: time="2024-09-04T17:38:30.544658090Z" level=info msg="CreateContainer within sandbox \"e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:38:30.595298 containerd[1981]: time="2024-09-04T17:38:30.595249624Z" level=info msg="CreateContainer within sandbox \"e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5e059e5d15b6a9a0d26206f9d7fc21af683da498189f1e88957e845e31732677\"" Sep 4 17:38:30.597235 containerd[1981]: time="2024-09-04T17:38:30.596283453Z" level=info msg="StartContainer for \"5e059e5d15b6a9a0d26206f9d7fc21af683da498189f1e88957e845e31732677\"" Sep 4 17:38:30.655748 systemd[1]: Started cri-containerd-5e059e5d15b6a9a0d26206f9d7fc21af683da498189f1e88957e845e31732677.scope - libcontainer container 5e059e5d15b6a9a0d26206f9d7fc21af683da498189f1e88957e845e31732677. 
Sep 4 17:38:30.711415 containerd[1981]: time="2024-09-04T17:38:30.711344162Z" level=info msg="StartContainer for \"5e059e5d15b6a9a0d26206f9d7fc21af683da498189f1e88957e845e31732677\" returns successfully" Sep 4 17:38:30.834623 kubelet[3382]: I0904 17:38:30.833790 3382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-qrdz7" podStartSLOduration=30.550222326 podCreationTimestamp="2024-09-04 17:37:53 +0000 UTC" firstStartedPulling="2024-09-04 17:38:23.251883153 +0000 UTC m=+51.155644141" lastFinishedPulling="2024-09-04 17:38:30.535404036 +0000 UTC m=+58.439165021" observedRunningTime="2024-09-04 17:38:30.833631115 +0000 UTC m=+58.737392121" watchObservedRunningTime="2024-09-04 17:38:30.833743206 +0000 UTC m=+58.737504213" Sep 4 17:38:31.787792 kubelet[3382]: I0904 17:38:31.787633 3382 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:38:31.787792 kubelet[3382]: I0904 17:38:31.787703 3382 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:38:32.360331 containerd[1981]: time="2024-09-04T17:38:32.360289708Z" level=info msg="StopPodSandbox for \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\"" Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.530 [WARNING][5340] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69a2d1ad-1774-4773-ab86-418e1662aaff", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030", Pod:"csi-node-driver-qrdz7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.7.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib901bdce216", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.531 [INFO][5340] k8s.go 608: Cleaning up netns ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.531 [INFO][5340] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" iface="eth0" netns="" Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.531 [INFO][5340] k8s.go 615: Releasing IP address(es) ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.531 [INFO][5340] utils.go 188: Calico CNI releasing IP address ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.606 [INFO][5348] ipam_plugin.go 417: Releasing address using handleID ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" HandleID="k8s-pod-network.a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.606 [INFO][5348] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.607 [INFO][5348] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.621 [WARNING][5348] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" HandleID="k8s-pod-network.a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.623 [INFO][5348] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" HandleID="k8s-pod-network.a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.627 [INFO][5348] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:32.643475 containerd[1981]: 2024-09-04 17:38:32.632 [INFO][5340] k8s.go 621: Teardown processing complete. ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Sep 4 17:38:32.643475 containerd[1981]: time="2024-09-04T17:38:32.642240308Z" level=info msg="TearDown network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\" successfully" Sep 4 17:38:32.643475 containerd[1981]: time="2024-09-04T17:38:32.643122291Z" level=info msg="StopPodSandbox for \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\" returns successfully" Sep 4 17:38:32.645016 containerd[1981]: time="2024-09-04T17:38:32.644988316Z" level=info msg="RemovePodSandbox for \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\"" Sep 4 17:38:32.649559 containerd[1981]: time="2024-09-04T17:38:32.649519001Z" level=info msg="Forcibly stopping sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\"" Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.698 [WARNING][5367] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69a2d1ad-1774-4773-ab86-418e1662aaff", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"e9d16f48c9b9fa5ca214f64186e0f5ab9c29a202d0ac00b6a27d0f680d749030", Pod:"csi-node-driver-qrdz7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.7.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib901bdce216", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.698 [INFO][5367] k8s.go 608: Cleaning up netns ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.698 [INFO][5367] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" iface="eth0" netns="" Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.698 [INFO][5367] k8s.go 615: Releasing IP address(es) ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.698 [INFO][5367] utils.go 188: Calico CNI releasing IP address ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.723 [INFO][5374] ipam_plugin.go 417: Releasing address using handleID ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" HandleID="k8s-pod-network.a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.723 [INFO][5374] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.723 [INFO][5374] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.729 [WARNING][5374] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" HandleID="k8s-pod-network.a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.729 [INFO][5374] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" HandleID="k8s-pod-network.a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Workload="ip--172--31--29--194-k8s-csi--node--driver--qrdz7-eth0" Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.732 [INFO][5374] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:32.735915 containerd[1981]: 2024-09-04 17:38:32.734 [INFO][5367] k8s.go 621: Teardown processing complete. ContainerID="a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b" Sep 4 17:38:32.736776 containerd[1981]: time="2024-09-04T17:38:32.736021633Z" level=info msg="TearDown network for sandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\" successfully" Sep 4 17:38:32.758562 containerd[1981]: time="2024-09-04T17:38:32.758506627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:38:32.758700 containerd[1981]: time="2024-09-04T17:38:32.758607067Z" level=info msg="RemovePodSandbox \"a222c39c1009870b03800e286eb625bd67f99b9625505d2cf0ab4f7740275e3b\" returns successfully" Sep 4 17:38:32.759840 containerd[1981]: time="2024-09-04T17:38:32.759742898Z" level=info msg="StopPodSandbox for \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\"" Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.813 [WARNING][5392] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0", GenerateName:"calico-kube-controllers-7b8d99456c-", Namespace:"calico-system", SelfLink:"", UID:"ea34c68a-28f8-4302-9bc8-31267e5610bf", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b8d99456c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01", Pod:"calico-kube-controllers-7b8d99456c-4chwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58059f63258", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.814 [INFO][5392] k8s.go 608: Cleaning up netns ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.814 [INFO][5392] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" iface="eth0" netns="" Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.814 [INFO][5392] k8s.go 615: Releasing IP address(es) ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.814 [INFO][5392] utils.go 188: Calico CNI releasing IP address ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.846 [INFO][5398] ipam_plugin.go 417: Releasing address using handleID ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" HandleID="k8s-pod-network.b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.846 [INFO][5398] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.846 [INFO][5398] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.852 [WARNING][5398] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" HandleID="k8s-pod-network.b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.852 [INFO][5398] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" HandleID="k8s-pod-network.b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.853 [INFO][5398] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:32.857432 containerd[1981]: 2024-09-04 17:38:32.855 [INFO][5392] k8s.go 621: Teardown processing complete. ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Sep 4 17:38:32.858672 containerd[1981]: time="2024-09-04T17:38:32.857487282Z" level=info msg="TearDown network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\" successfully" Sep 4 17:38:32.858672 containerd[1981]: time="2024-09-04T17:38:32.857517833Z" level=info msg="StopPodSandbox for \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\" returns successfully" Sep 4 17:38:32.858672 containerd[1981]: time="2024-09-04T17:38:32.858213011Z" level=info msg="RemovePodSandbox for \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\"" Sep 4 17:38:32.858672 containerd[1981]: time="2024-09-04T17:38:32.858251252Z" level=info msg="Forcibly stopping sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\"" Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.906 [WARNING][5416] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0", GenerateName:"calico-kube-controllers-7b8d99456c-", Namespace:"calico-system", SelfLink:"", UID:"ea34c68a-28f8-4302-9bc8-31267e5610bf", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b8d99456c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"a90696da81ae07bfb9cd37f61491c2c263ec1715890e33588542b6cb26860b01", Pod:"calico-kube-controllers-7b8d99456c-4chwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58059f63258", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.906 [INFO][5416] k8s.go 608: Cleaning up netns ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.906 [INFO][5416] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" iface="eth0" netns="" Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.906 [INFO][5416] k8s.go 615: Releasing IP address(es) ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.906 [INFO][5416] utils.go 188: Calico CNI releasing IP address ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.937 [INFO][5423] ipam_plugin.go 417: Releasing address using handleID ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" HandleID="k8s-pod-network.b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.937 [INFO][5423] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.937 [INFO][5423] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.944 [WARNING][5423] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" HandleID="k8s-pod-network.b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.944 [INFO][5423] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" HandleID="k8s-pod-network.b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Workload="ip--172--31--29--194-k8s-calico--kube--controllers--7b8d99456c--4chwr-eth0" Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.946 [INFO][5423] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:32.950952 containerd[1981]: 2024-09-04 17:38:32.947 [INFO][5416] k8s.go 621: Teardown processing complete. ContainerID="b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824" Sep 4 17:38:32.950952 containerd[1981]: time="2024-09-04T17:38:32.950918651Z" level=info msg="TearDown network for sandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\" successfully" Sep 4 17:38:32.957413 containerd[1981]: time="2024-09-04T17:38:32.957359174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:38:32.957727 containerd[1981]: time="2024-09-04T17:38:32.957444511Z" level=info msg="RemovePodSandbox \"b01eda6ecfd38bf381ce1287ed55123c8b1ac7571d3ccbffb26f40d419ac6824\" returns successfully" Sep 4 17:38:32.958244 containerd[1981]: time="2024-09-04T17:38:32.958214139Z" level=info msg="StopPodSandbox for \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\"" Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.010 [WARNING][5441] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0e1f12cd-e8de-4552-9978-e9886ee78d4e", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9", Pod:"coredns-5dd5756b68-7fc8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38bd30fc0d2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.011 [INFO][5441] k8s.go 608: Cleaning up netns ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.011 [INFO][5441] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" iface="eth0" netns="" Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.011 [INFO][5441] k8s.go 615: Releasing IP address(es) ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.011 [INFO][5441] utils.go 188: Calico CNI releasing IP address ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.038 [INFO][5447] ipam_plugin.go 417: Releasing address using handleID ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" HandleID="k8s-pod-network.966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.038 [INFO][5447] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.038 [INFO][5447] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.046 [WARNING][5447] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" HandleID="k8s-pod-network.966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.046 [INFO][5447] ipam_plugin.go 445: Releasing address using workloadID ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" HandleID="k8s-pod-network.966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.048 [INFO][5447] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:33.051746 containerd[1981]: 2024-09-04 17:38:33.049 [INFO][5441] k8s.go 621: Teardown processing complete. 
ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:33.051746 containerd[1981]: time="2024-09-04T17:38:33.051577069Z" level=info msg="TearDown network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\" successfully" Sep 4 17:38:33.051746 containerd[1981]: time="2024-09-04T17:38:33.051600373Z" level=info msg="StopPodSandbox for \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\" returns successfully" Sep 4 17:38:33.053276 containerd[1981]: time="2024-09-04T17:38:33.052099635Z" level=info msg="RemovePodSandbox for \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\"" Sep 4 17:38:33.053276 containerd[1981]: time="2024-09-04T17:38:33.052132050Z" level=info msg="Forcibly stopping sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\"" Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.096 [WARNING][5466] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0e1f12cd-e8de-4552-9978-e9886ee78d4e", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"31b6049a9489d5ddba460a4609148fe355f4fd1ae731195918a246a2238343a9", Pod:"coredns-5dd5756b68-7fc8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38bd30fc0d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.096 [INFO][5466] k8s.go 608: Cleaning up 
netns ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.096 [INFO][5466] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" iface="eth0" netns="" Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.096 [INFO][5466] k8s.go 615: Releasing IP address(es) ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.096 [INFO][5466] utils.go 188: Calico CNI releasing IP address ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.129 [INFO][5472] ipam_plugin.go 417: Releasing address using handleID ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" HandleID="k8s-pod-network.966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.130 [INFO][5472] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.130 [INFO][5472] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.141 [WARNING][5472] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" HandleID="k8s-pod-network.966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.142 [INFO][5472] ipam_plugin.go 445: Releasing address using workloadID ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" HandleID="k8s-pod-network.966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--7fc8n-eth0" Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.143 [INFO][5472] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:33.149826 containerd[1981]: 2024-09-04 17:38:33.147 [INFO][5466] k8s.go 621: Teardown processing complete. ContainerID="966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07" Sep 4 17:38:33.151043 containerd[1981]: time="2024-09-04T17:38:33.149898841Z" level=info msg="TearDown network for sandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\" successfully" Sep 4 17:38:33.170709 containerd[1981]: time="2024-09-04T17:38:33.170655548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:38:33.170971 containerd[1981]: time="2024-09-04T17:38:33.170729290Z" level=info msg="RemovePodSandbox \"966e730532f621e9346cb5c1e39aabf479fed55f175501252bc193e46a751c07\" returns successfully" Sep 4 17:38:33.175136 containerd[1981]: time="2024-09-04T17:38:33.175094658Z" level=info msg="StopPodSandbox for \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\"" Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.220 [WARNING][5490] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"dfc3a0dc-44f6-439b-9637-624a1c16320c", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149", Pod:"coredns-5dd5756b68-xcl8p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadafa3be521", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.220 [INFO][5490] k8s.go 608: Cleaning up netns ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.220 [INFO][5490] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" iface="eth0" netns="" Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.220 [INFO][5490] k8s.go 615: Releasing IP address(es) ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.220 [INFO][5490] utils.go 188: Calico CNI releasing IP address ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.249 [INFO][5496] ipam_plugin.go 417: Releasing address using handleID ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" HandleID="k8s-pod-network.b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.250 [INFO][5496] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.250 [INFO][5496] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.258 [WARNING][5496] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" HandleID="k8s-pod-network.b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.258 [INFO][5496] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" HandleID="k8s-pod-network.b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.259 [INFO][5496] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:33.264414 containerd[1981]: 2024-09-04 17:38:33.261 [INFO][5490] k8s.go 621: Teardown processing complete. 
ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Sep 4 17:38:33.264414 containerd[1981]: time="2024-09-04T17:38:33.263064854Z" level=info msg="TearDown network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\" successfully" Sep 4 17:38:33.264414 containerd[1981]: time="2024-09-04T17:38:33.264284416Z" level=info msg="StopPodSandbox for \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\" returns successfully" Sep 4 17:38:33.266752 containerd[1981]: time="2024-09-04T17:38:33.265826557Z" level=info msg="RemovePodSandbox for \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\"" Sep 4 17:38:33.266752 containerd[1981]: time="2024-09-04T17:38:33.265863117Z" level=info msg="Forcibly stopping sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\"" Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.314 [WARNING][5514] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"dfc3a0dc-44f6-439b-9637-624a1c16320c", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"2ae8f796364d508587d797e9da16eabe22ddbd441a2c7ce5c100748bbe4e2149", Pod:"coredns-5dd5756b68-xcl8p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadafa3be521", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.314 [INFO][5514] k8s.go 608: Cleaning up 
netns ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.314 [INFO][5514] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" iface="eth0" netns="" Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.314 [INFO][5514] k8s.go 615: Releasing IP address(es) ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.314 [INFO][5514] utils.go 188: Calico CNI releasing IP address ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.340 [INFO][5520] ipam_plugin.go 417: Releasing address using handleID ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" HandleID="k8s-pod-network.b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.340 [INFO][5520] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.340 [INFO][5520] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.346 [WARNING][5520] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" HandleID="k8s-pod-network.b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.346 [INFO][5520] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" HandleID="k8s-pod-network.b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Workload="ip--172--31--29--194-k8s-coredns--5dd5756b68--xcl8p-eth0" Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.348 [INFO][5520] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:33.367641 containerd[1981]: 2024-09-04 17:38:33.350 [INFO][5514] k8s.go 621: Teardown processing complete. ContainerID="b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338" Sep 4 17:38:33.368630 containerd[1981]: time="2024-09-04T17:38:33.367659630Z" level=info msg="TearDown network for sandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\" successfully" Sep 4 17:38:33.382752 containerd[1981]: time="2024-09-04T17:38:33.382697222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:38:33.383084 containerd[1981]: time="2024-09-04T17:38:33.382785736Z" level=info msg="RemovePodSandbox \"b7279b07578ecf220c34d1ff96acc707670cfa93735d699faab764ea8c912338\" returns successfully" Sep 4 17:38:34.189679 systemd[1]: Started sshd@9-172.31.29.194:22-139.178.68.195:58680.service - OpenSSH per-connection server daemon (139.178.68.195:58680). 
Sep 4 17:38:34.423107 sshd[5529]: Accepted publickey for core from 139.178.68.195 port 58680 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:38:34.429363 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:34.447426 systemd-logind[1960]: New session 10 of user core. Sep 4 17:38:34.455402 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:38:34.886420 sshd[5529]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:34.891575 systemd-logind[1960]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:38:34.892867 systemd[1]: sshd@9-172.31.29.194:22-139.178.68.195:58680.service: Deactivated successfully. Sep 4 17:38:34.895635 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:38:34.897053 systemd-logind[1960]: Removed session 10. Sep 4 17:38:36.425944 systemd[1]: run-containerd-runc-k8s.io-c9872396ff21855ab54ec6f3260508fa895d27c87571dd122a9195daf094203e-runc.Nc4q9j.mount: Deactivated successfully. Sep 4 17:38:39.917986 systemd[1]: Started sshd@10-172.31.29.194:22-139.178.68.195:37286.service - OpenSSH per-connection server daemon (139.178.68.195:37286). Sep 4 17:38:40.098124 sshd[5572]: Accepted publickey for core from 139.178.68.195 port 37286 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:38:40.100272 sshd[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:40.119257 systemd-logind[1960]: New session 11 of user core. Sep 4 17:38:40.128434 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:38:40.359162 sshd[5572]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:40.363782 systemd[1]: sshd@10-172.31.29.194:22-139.178.68.195:37286.service: Deactivated successfully. Sep 4 17:38:40.367867 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:38:40.374256 systemd-logind[1960]: Session 11 logged out. Waiting for processes to exit. 
Sep 4 17:38:40.377830 systemd-logind[1960]: Removed session 11. Sep 4 17:38:40.396883 systemd[1]: Started sshd@11-172.31.29.194:22-139.178.68.195:37302.service - OpenSSH per-connection server daemon (139.178.68.195:37302). Sep 4 17:38:40.585094 sshd[5586]: Accepted publickey for core from 139.178.68.195 port 37302 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:38:40.587630 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:40.595067 systemd-logind[1960]: New session 12 of user core. Sep 4 17:38:40.598550 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:38:41.297853 sshd[5586]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:41.316670 systemd[1]: sshd@11-172.31.29.194:22-139.178.68.195:37302.service: Deactivated successfully. Sep 4 17:38:41.324102 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:38:41.352132 systemd-logind[1960]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:38:41.363090 systemd[1]: Started sshd@12-172.31.29.194:22-139.178.68.195:37310.service - OpenSSH per-connection server daemon (139.178.68.195:37310). Sep 4 17:38:41.366902 systemd-logind[1960]: Removed session 12. Sep 4 17:38:41.556137 sshd[5597]: Accepted publickey for core from 139.178.68.195 port 37310 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:38:41.556419 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:41.562552 systemd-logind[1960]: New session 13 of user core. Sep 4 17:38:41.568766 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:38:41.901012 sshd[5597]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:41.907376 systemd-logind[1960]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:38:41.908096 systemd[1]: sshd@12-172.31.29.194:22-139.178.68.195:37310.service: Deactivated successfully. 
Sep 4 17:38:41.911874 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:38:41.914122 systemd-logind[1960]: Removed session 13. Sep 4 17:38:46.950632 systemd[1]: Started sshd@13-172.31.29.194:22-139.178.68.195:50132.service - OpenSSH per-connection server daemon (139.178.68.195:50132). Sep 4 17:38:47.156436 sshd[5660]: Accepted publickey for core from 139.178.68.195 port 50132 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:38:47.160901 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:47.169403 systemd-logind[1960]: New session 14 of user core. Sep 4 17:38:47.177480 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:38:47.437165 sshd[5660]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:47.442343 systemd[1]: sshd@13-172.31.29.194:22-139.178.68.195:50132.service: Deactivated successfully. Sep 4 17:38:47.445018 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:38:47.447427 systemd-logind[1960]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:38:47.450645 systemd-logind[1960]: Removed session 14. Sep 4 17:38:52.475812 systemd[1]: Started sshd@14-172.31.29.194:22-139.178.68.195:50146.service - OpenSSH per-connection server daemon (139.178.68.195:50146). Sep 4 17:38:52.660639 sshd[5679]: Accepted publickey for core from 139.178.68.195 port 50146 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:38:52.662963 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:52.683738 systemd-logind[1960]: New session 15 of user core. Sep 4 17:38:52.689560 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:38:52.942451 sshd[5679]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:52.952750 systemd[1]: sshd@14-172.31.29.194:22-139.178.68.195:50146.service: Deactivated successfully. 
Sep 4 17:38:52.955525 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 17:38:52.960158 systemd-logind[1960]: Session 15 logged out. Waiting for processes to exit.
Sep 4 17:38:52.962252 systemd-logind[1960]: Removed session 15.
Sep 4 17:38:57.141702 kubelet[3382]: I0904 17:38:57.141661 3382 topology_manager.go:215] "Topology Admit Handler" podUID="3db985e4-0cf2-44a9-9eb9-2c0bbde58881" podNamespace="calico-apiserver" podName="calico-apiserver-78bc578676-mbm9b"
Sep 4 17:38:57.181960 systemd[1]: Created slice kubepods-besteffort-pod3db985e4_0cf2_44a9_9eb9_2c0bbde58881.slice - libcontainer container kubepods-besteffort-pod3db985e4_0cf2_44a9_9eb9_2c0bbde58881.slice.
Sep 4 17:38:57.293699 kubelet[3382]: I0904 17:38:57.293654 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3db985e4-0cf2-44a9-9eb9-2c0bbde58881-calico-apiserver-certs\") pod \"calico-apiserver-78bc578676-mbm9b\" (UID: \"3db985e4-0cf2-44a9-9eb9-2c0bbde58881\") " pod="calico-apiserver/calico-apiserver-78bc578676-mbm9b"
Sep 4 17:38:57.293844 kubelet[3382]: I0904 17:38:57.293782 3382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brjwk\" (UniqueName: \"kubernetes.io/projected/3db985e4-0cf2-44a9-9eb9-2c0bbde58881-kube-api-access-brjwk\") pod \"calico-apiserver-78bc578676-mbm9b\" (UID: \"3db985e4-0cf2-44a9-9eb9-2c0bbde58881\") " pod="calico-apiserver/calico-apiserver-78bc578676-mbm9b"
Sep 4 17:38:57.400393 kubelet[3382]: E0904 17:38:57.400181 3382 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Sep 4 17:38:57.423954 kubelet[3382]: E0904 17:38:57.423842 3382 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3db985e4-0cf2-44a9-9eb9-2c0bbde58881-calico-apiserver-certs podName:3db985e4-0cf2-44a9-9eb9-2c0bbde58881 nodeName:}" failed. No retries permitted until 2024-09-04 17:38:57.917114709 +0000 UTC m=+85.820875713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3db985e4-0cf2-44a9-9eb9-2c0bbde58881-calico-apiserver-certs") pod "calico-apiserver-78bc578676-mbm9b" (UID: "3db985e4-0cf2-44a9-9eb9-2c0bbde58881") : secret "calico-apiserver-certs" not found
Sep 4 17:38:57.995996 systemd[1]: Started sshd@15-172.31.29.194:22-139.178.68.195:37520.service - OpenSSH per-connection server daemon (139.178.68.195:37520).
Sep 4 17:38:58.124362 containerd[1981]: time="2024-09-04T17:38:58.124314149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78bc578676-mbm9b,Uid:3db985e4-0cf2-44a9-9eb9-2c0bbde58881,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 17:38:58.402274 sshd[5698]: Accepted publickey for core from 139.178.68.195 port 37520 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:38:58.422616 sshd[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:38:58.451533 systemd-logind[1960]: New session 16 of user core.
Sep 4 17:38:58.476470 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 17:38:58.781729 systemd-networkd[1825]: caliea2a9905fda: Link UP
Sep 4 17:38:58.782094 systemd-networkd[1825]: caliea2a9905fda: Gained carrier
Sep 4 17:38:58.795323 (udev-worker)[5735]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.490 [INFO][5702] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0 calico-apiserver-78bc578676- calico-apiserver 3db985e4-0cf2-44a9-9eb9-2c0bbde58881 1037 0 2024-09-04 17:38:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78bc578676 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-194 calico-apiserver-78bc578676-mbm9b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliea2a9905fda [] []}} ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Namespace="calico-apiserver" Pod="calico-apiserver-78bc578676-mbm9b" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.491 [INFO][5702] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Namespace="calico-apiserver" Pod="calico-apiserver-78bc578676-mbm9b" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.625 [INFO][5717] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" HandleID="k8s-pod-network.ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Workload="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.658 [INFO][5717] ipam_plugin.go 270: Auto assigning IP ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" HandleID="k8s-pod-network.ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Workload="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-194", "pod":"calico-apiserver-78bc578676-mbm9b", "timestamp":"2024-09-04 17:38:58.625764228 +0000 UTC"}, Hostname:"ip-172-31-29-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.658 [INFO][5717] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.658 [INFO][5717] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.659 [INFO][5717] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-194'
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.662 [INFO][5717] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" host="ip-172-31-29-194"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.674 [INFO][5717] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-194"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.695 [INFO][5717] ipam.go 489: Trying affinity for 192.168.7.0/26 host="ip-172-31-29-194"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.702 [INFO][5717] ipam.go 155: Attempting to load block cidr=192.168.7.0/26 host="ip-172-31-29-194"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.718 [INFO][5717] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.7.0/26 host="ip-172-31-29-194"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.718 [INFO][5717] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.7.0/26 handle="k8s-pod-network.ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" host="ip-172-31-29-194"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.731 [INFO][5717] ipam.go 1685: Creating new handle: k8s-pod-network.ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.741 [INFO][5717] ipam.go 1203: Writing block in order to claim IPs block=192.168.7.0/26 handle="k8s-pod-network.ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" host="ip-172-31-29-194"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.756 [INFO][5717] ipam.go 1216: Successfully claimed IPs: [192.168.7.5/26] block=192.168.7.0/26 handle="k8s-pod-network.ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" host="ip-172-31-29-194"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.757 [INFO][5717] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.7.5/26] handle="k8s-pod-network.ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" host="ip-172-31-29-194"
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.757 [INFO][5717] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:38:58.844051 containerd[1981]: 2024-09-04 17:38:58.758 [INFO][5717] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.7.5/26] IPv6=[] ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" HandleID="k8s-pod-network.ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Workload="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0"
Sep 4 17:38:58.852417 containerd[1981]: 2024-09-04 17:38:58.770 [INFO][5702] k8s.go 386: Populated endpoint ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Namespace="calico-apiserver" Pod="calico-apiserver-78bc578676-mbm9b" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0", GenerateName:"calico-apiserver-78bc578676-", Namespace:"calico-apiserver", SelfLink:"", UID:"3db985e4-0cf2-44a9-9eb9-2c0bbde58881", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78bc578676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"", Pod:"calico-apiserver-78bc578676-mbm9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea2a9905fda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:38:58.852417 containerd[1981]: 2024-09-04 17:38:58.773 [INFO][5702] k8s.go 387: Calico CNI using IPs: [192.168.7.5/32] ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Namespace="calico-apiserver" Pod="calico-apiserver-78bc578676-mbm9b" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0"
Sep 4 17:38:58.852417 containerd[1981]: 2024-09-04 17:38:58.773 [INFO][5702] dataplane_linux.go 68: Setting the host side veth name to caliea2a9905fda ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Namespace="calico-apiserver" Pod="calico-apiserver-78bc578676-mbm9b" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0"
Sep 4 17:38:58.852417 containerd[1981]: 2024-09-04 17:38:58.778 [INFO][5702] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Namespace="calico-apiserver" Pod="calico-apiserver-78bc578676-mbm9b" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0"
Sep 4 17:38:58.852417 containerd[1981]: 2024-09-04 17:38:58.779 [INFO][5702] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Namespace="calico-apiserver" Pod="calico-apiserver-78bc578676-mbm9b" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0", GenerateName:"calico-apiserver-78bc578676-", Namespace:"calico-apiserver", SelfLink:"", UID:"3db985e4-0cf2-44a9-9eb9-2c0bbde58881", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78bc578676", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-194", ContainerID:"ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af", Pod:"calico-apiserver-78bc578676-mbm9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea2a9905fda", MAC:"8a:2b:55:5b:fd:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:38:58.852417 containerd[1981]: 2024-09-04 17:38:58.835 [INFO][5702] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af" Namespace="calico-apiserver" Pod="calico-apiserver-78bc578676-mbm9b" WorkloadEndpoint="ip--172--31--29--194-k8s-calico--apiserver--78bc578676--mbm9b-eth0"
Sep 4 17:38:59.037350 containerd[1981]: time="2024-09-04T17:38:59.032947200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:38:59.037350 containerd[1981]: time="2024-09-04T17:38:59.034267411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:38:59.037350 containerd[1981]: time="2024-09-04T17:38:59.034322065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:38:59.037350 containerd[1981]: time="2024-09-04T17:38:59.034744359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:38:59.171434 systemd[1]: Started cri-containerd-ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af.scope - libcontainer container ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af.
Sep 4 17:38:59.250929 sshd[5698]: pam_unix(sshd:session): session closed for user core
Sep 4 17:38:59.264362 systemd-logind[1960]: Session 16 logged out. Waiting for processes to exit.
Sep 4 17:38:59.265711 systemd[1]: sshd@15-172.31.29.194:22-139.178.68.195:37520.service: Deactivated successfully.
Sep 4 17:38:59.275998 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 17:38:59.284657 systemd-logind[1960]: Removed session 16.
Sep 4 17:38:59.397456 containerd[1981]: time="2024-09-04T17:38:59.397089510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78bc578676-mbm9b,Uid:3db985e4-0cf2-44a9-9eb9-2c0bbde58881,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af\""
Sep 4 17:38:59.401694 containerd[1981]: time="2024-09-04T17:38:59.401367798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Sep 4 17:39:00.374644 systemd-networkd[1825]: caliea2a9905fda: Gained IPv6LL
Sep 4 17:39:03.011792 ntpd[1953]: Listen normally on 13 caliea2a9905fda [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:39:03.012530 ntpd[1953]: 4 Sep 17:39:03 ntpd[1953]: Listen normally on 13 caliea2a9905fda [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:39:03.604810 containerd[1981]: time="2024-09-04T17:39:03.604699408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:39:03.611552 containerd[1981]: time="2024-09-04T17:39:03.611374701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Sep 4 17:39:03.613295 containerd[1981]: time="2024-09-04T17:39:03.613229097Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:39:03.622249 containerd[1981]: time="2024-09-04T17:39:03.621841890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:39:03.622851 containerd[1981]: time="2024-09-04T17:39:03.622809815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 4.221393466s"
Sep 4 17:39:03.622945 containerd[1981]: time="2024-09-04T17:39:03.622859288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Sep 4 17:39:03.633093 containerd[1981]: time="2024-09-04T17:39:03.632616544Z" level=info msg="CreateContainer within sandbox \"ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 17:39:03.656973 containerd[1981]: time="2024-09-04T17:39:03.656909506Z" level=info msg="CreateContainer within sandbox \"ffafa2a3eec44309a1629dc92471bc755e5e5fee2dc86e13a2fb68e703b9f9af\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef25dacd36568cf411ebcd9ea52fff7231c6f51b1f54ca8cd7cc48d6b0d1c09c\""
Sep 4 17:39:03.659052 containerd[1981]: time="2024-09-04T17:39:03.657593182Z" level=info msg="StartContainer for \"ef25dacd36568cf411ebcd9ea52fff7231c6f51b1f54ca8cd7cc48d6b0d1c09c\""
Sep 4 17:39:03.716463 systemd[1]: Started cri-containerd-ef25dacd36568cf411ebcd9ea52fff7231c6f51b1f54ca8cd7cc48d6b0d1c09c.scope - libcontainer container ef25dacd36568cf411ebcd9ea52fff7231c6f51b1f54ca8cd7cc48d6b0d1c09c.
Sep 4 17:39:03.850523 containerd[1981]: time="2024-09-04T17:39:03.850480159Z" level=info msg="StartContainer for \"ef25dacd36568cf411ebcd9ea52fff7231c6f51b1f54ca8cd7cc48d6b0d1c09c\" returns successfully"
Sep 4 17:39:04.288648 systemd[1]: Started sshd@16-172.31.29.194:22-139.178.68.195:37524.service - OpenSSH per-connection server daemon (139.178.68.195:37524).
Sep 4 17:39:04.602251 sshd[5851]: Accepted publickey for core from 139.178.68.195 port 37524 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:04.606883 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:04.619743 systemd-logind[1960]: New session 17 of user core.
Sep 4 17:39:04.627419 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 17:39:05.674350 sshd[5851]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:05.680275 systemd-logind[1960]: Session 17 logged out. Waiting for processes to exit.
Sep 4 17:39:05.681404 systemd[1]: sshd@16-172.31.29.194:22-139.178.68.195:37524.service: Deactivated successfully.
Sep 4 17:39:05.689109 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 17:39:05.713308 systemd-logind[1960]: Removed session 17.
Sep 4 17:39:05.721955 systemd[1]: Started sshd@17-172.31.29.194:22-139.178.68.195:37528.service - OpenSSH per-connection server daemon (139.178.68.195:37528).
Sep 4 17:39:05.933393 sshd[5864]: Accepted publickey for core from 139.178.68.195 port 37528 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:05.935427 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:05.946492 systemd-logind[1960]: New session 18 of user core.
Sep 4 17:39:05.952811 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 17:39:06.859072 sshd[5864]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:06.866520 systemd[1]: sshd@17-172.31.29.194:22-139.178.68.195:37528.service: Deactivated successfully.
Sep 4 17:39:06.870413 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 17:39:06.872621 systemd-logind[1960]: Session 18 logged out. Waiting for processes to exit.
Sep 4 17:39:06.893884 systemd[1]: Started sshd@18-172.31.29.194:22-139.178.68.195:41498.service - OpenSSH per-connection server daemon (139.178.68.195:41498).
Sep 4 17:39:06.895930 systemd-logind[1960]: Removed session 18.
Sep 4 17:39:07.137379 sshd[5892]: Accepted publickey for core from 139.178.68.195 port 41498 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:07.142436 sshd[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:07.149968 systemd-logind[1960]: New session 19 of user core.
Sep 4 17:39:07.157659 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 17:39:08.610768 sshd[5892]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:08.621133 systemd-logind[1960]: Session 19 logged out. Waiting for processes to exit.
Sep 4 17:39:08.622089 systemd[1]: sshd@18-172.31.29.194:22-139.178.68.195:41498.service: Deactivated successfully.
Sep 4 17:39:08.626528 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 17:39:08.628750 systemd-logind[1960]: Removed session 19.
Sep 4 17:39:08.653571 systemd[1]: Started sshd@19-172.31.29.194:22-139.178.68.195:41514.service - OpenSSH per-connection server daemon (139.178.68.195:41514).
Sep 4 17:39:08.845674 sshd[5914]: Accepted publickey for core from 139.178.68.195 port 41514 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:08.848142 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:08.854571 systemd-logind[1960]: New session 20 of user core.
Sep 4 17:39:08.860412 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:39:09.999242 sshd[5914]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:10.013852 systemd[1]: sshd@19-172.31.29.194:22-139.178.68.195:41514.service: Deactivated successfully.
Sep 4 17:39:10.035715 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:39:10.053515 systemd-logind[1960]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:39:10.058768 systemd[1]: Started sshd@20-172.31.29.194:22-139.178.68.195:41524.service - OpenSSH per-connection server daemon (139.178.68.195:41524).
Sep 4 17:39:10.062301 systemd-logind[1960]: Removed session 20.
Sep 4 17:39:10.274319 sshd[5928]: Accepted publickey for core from 139.178.68.195 port 41524 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:10.277061 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:10.285210 systemd-logind[1960]: New session 21 of user core.
Sep 4 17:39:10.289444 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:39:10.527755 sshd[5928]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:10.532542 systemd[1]: sshd@20-172.31.29.194:22-139.178.68.195:41524.service: Deactivated successfully.
Sep 4 17:39:10.535582 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:39:10.536657 systemd-logind[1960]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:39:10.538091 systemd-logind[1960]: Removed session 21.
Sep 4 17:39:11.713219 systemd[1]: run-containerd-runc-k8s.io-93e56748b3621468b9c8fd1773c6c50f975b85da555361010eb175ad63a80565-runc.w65Jpj.mount: Deactivated successfully.
Sep 4 17:39:15.566900 systemd[1]: Started sshd@21-172.31.29.194:22-139.178.68.195:41534.service - OpenSSH per-connection server daemon (139.178.68.195:41534).
Sep 4 17:39:15.756774 sshd[5968]: Accepted publickey for core from 139.178.68.195 port 41534 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:15.759651 sshd[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:15.771611 systemd-logind[1960]: New session 22 of user core.
Sep 4 17:39:15.782755 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 17:39:16.025695 sshd[5968]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:16.030478 systemd-logind[1960]: Session 22 logged out. Waiting for processes to exit.
Sep 4 17:39:16.031377 systemd[1]: sshd@21-172.31.29.194:22-139.178.68.195:41534.service: Deactivated successfully.
Sep 4 17:39:16.036250 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 17:39:16.037681 systemd-logind[1960]: Removed session 22.
Sep 4 17:39:21.080022 systemd[1]: Started sshd@22-172.31.29.194:22-139.178.68.195:45834.service - OpenSSH per-connection server daemon (139.178.68.195:45834).
Sep 4 17:39:21.281367 sshd[5988]: Accepted publickey for core from 139.178.68.195 port 45834 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:21.283928 sshd[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:21.291172 systemd-logind[1960]: New session 23 of user core.
Sep 4 17:39:21.298407 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:39:21.538339 sshd[5988]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:21.560757 systemd[1]: sshd@22-172.31.29.194:22-139.178.68.195:45834.service: Deactivated successfully.
Sep 4 17:39:21.574964 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:39:21.578860 systemd-logind[1960]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:39:21.581289 systemd-logind[1960]: Removed session 23.
Sep 4 17:39:26.578635 systemd[1]: Started sshd@23-172.31.29.194:22-139.178.68.195:34958.service - OpenSSH per-connection server daemon (139.178.68.195:34958).
Sep 4 17:39:26.743659 sshd[6006]: Accepted publickey for core from 139.178.68.195 port 34958 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:26.744452 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:26.750255 systemd-logind[1960]: New session 24 of user core.
Sep 4 17:39:26.757424 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:39:26.944008 sshd[6006]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:26.949217 systemd-logind[1960]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:39:26.950238 systemd[1]: sshd@23-172.31.29.194:22-139.178.68.195:34958.service: Deactivated successfully.
Sep 4 17:39:26.952968 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:39:26.954432 systemd-logind[1960]: Removed session 24.
Sep 4 17:39:31.980612 systemd[1]: Started sshd@24-172.31.29.194:22-139.178.68.195:34968.service - OpenSSH per-connection server daemon (139.178.68.195:34968).
Sep 4 17:39:32.154053 sshd[6019]: Accepted publickey for core from 139.178.68.195 port 34968 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:32.160205 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:32.168508 systemd-logind[1960]: New session 25 of user core.
Sep 4 17:39:32.177480 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:39:32.440299 sshd[6019]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:32.472228 systemd-logind[1960]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:39:32.478575 systemd[1]: sshd@24-172.31.29.194:22-139.178.68.195:34968.service: Deactivated successfully.
Sep 4 17:39:32.491498 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:39:32.493883 systemd-logind[1960]: Removed session 25.
Sep 4 17:39:37.477854 systemd[1]: Started sshd@25-172.31.29.194:22-139.178.68.195:34106.service - OpenSSH per-connection server daemon (139.178.68.195:34106).
Sep 4 17:39:37.642236 sshd[6058]: Accepted publickey for core from 139.178.68.195 port 34106 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:37.643719 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:37.678335 systemd-logind[1960]: New session 26 of user core.
Sep 4 17:39:37.681471 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:39:37.876577 sshd[6058]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:37.881484 systemd-logind[1960]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:39:37.882093 systemd[1]: sshd@25-172.31.29.194:22-139.178.68.195:34106.service: Deactivated successfully.
Sep 4 17:39:37.885105 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:39:37.888352 systemd-logind[1960]: Removed session 26.
Sep 4 17:39:42.913547 systemd[1]: Started sshd@26-172.31.29.194:22-139.178.68.195:34122.service - OpenSSH per-connection server daemon (139.178.68.195:34122).
Sep 4 17:39:43.105124 sshd[6101]: Accepted publickey for core from 139.178.68.195 port 34122 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:43.107075 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:43.113760 systemd-logind[1960]: New session 27 of user core.
Sep 4 17:39:43.119449 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 17:39:43.352732 sshd[6101]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:43.379533 systemd[1]: sshd@26-172.31.29.194:22-139.178.68.195:34122.service: Deactivated successfully.
Sep 4 17:39:43.385116 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 17:39:43.388392 systemd-logind[1960]: Session 27 logged out. Waiting for processes to exit.
Sep 4 17:39:43.391784 systemd-logind[1960]: Removed session 27.
Sep 4 17:39:48.384755 systemd[1]: Started sshd@27-172.31.29.194:22-139.178.68.195:40224.service - OpenSSH per-connection server daemon (139.178.68.195:40224).
Sep 4 17:39:48.556131 sshd[6142]: Accepted publickey for core from 139.178.68.195 port 40224 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:39:48.558712 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:39:48.566301 systemd-logind[1960]: New session 28 of user core.
Sep 4 17:39:48.574627 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 17:39:48.833308 sshd[6142]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:48.838451 systemd[1]: sshd@27-172.31.29.194:22-139.178.68.195:40224.service: Deactivated successfully.
Sep 4 17:39:48.842346 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 17:39:48.844841 systemd-logind[1960]: Session 28 logged out. Waiting for processes to exit.
Sep 4 17:39:48.846836 systemd-logind[1960]: Removed session 28.
Sep 4 17:40:03.760320 systemd[1]: cri-containerd-06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca.scope: Deactivated successfully.
Sep 4 17:40:03.760824 systemd[1]: cri-containerd-06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca.scope: Consumed 3.463s CPU time, 26.5M memory peak, 0B memory swap peak.
Sep 4 17:40:03.817953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca-rootfs.mount: Deactivated successfully.
Sep 4 17:40:03.877269 containerd[1981]: time="2024-09-04T17:40:03.813079945Z" level=info msg="shim disconnected" id=06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca namespace=k8s.io
Sep 4 17:40:03.877269 containerd[1981]: time="2024-09-04T17:40:03.875455467Z" level=warning msg="cleaning up after shim disconnected" id=06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca namespace=k8s.io
Sep 4 17:40:03.877269 containerd[1981]: time="2024-09-04T17:40:03.875477184Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:40:04.325633 kubelet[3382]: I0904 17:40:04.325585 3382 scope.go:117] "RemoveContainer" containerID="06fdf576af656d90ec23faf275a1cd7cd58fab2b56a8519b18f8a71e069c82ca"
Sep 4 17:40:04.337436 containerd[1981]: time="2024-09-04T17:40:04.337341012Z" level=info msg="CreateContainer within sandbox \"b6568f5c5f2dd1e01374b5b4ce1091fcac940eb5b86c05438134cf8a49345cdc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:40:04.353327 systemd[1]: cri-containerd-0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783.scope: Deactivated successfully.
Sep 4 17:40:04.354068 systemd[1]: cri-containerd-0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783.scope: Consumed 5.932s CPU time.
Sep 4 17:40:04.403537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474611231.mount: Deactivated successfully.
Sep 4 17:40:04.427083 containerd[1981]: time="2024-09-04T17:40:04.417911687Z" level=info msg="shim disconnected" id=0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783 namespace=k8s.io
Sep 4 17:40:04.427083 containerd[1981]: time="2024-09-04T17:40:04.417975054Z" level=warning msg="cleaning up after shim disconnected" id=0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783 namespace=k8s.io
Sep 4 17:40:04.427083 containerd[1981]: time="2024-09-04T17:40:04.417987573Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:40:04.418879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783-rootfs.mount: Deactivated successfully.
Sep 4 17:40:04.465737 containerd[1981]: time="2024-09-04T17:40:04.461346905Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:40:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:40:04.498120 containerd[1981]: time="2024-09-04T17:40:04.498073755Z" level=info msg="CreateContainer within sandbox \"b6568f5c5f2dd1e01374b5b4ce1091fcac940eb5b86c05438134cf8a49345cdc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"26b7618ef7f2736c0f209e6fc1c262252a9a7ebf22120feddd869c4ea6389469\""
Sep 4 17:40:04.499208 containerd[1981]: time="2024-09-04T17:40:04.499155076Z" level=info msg="StartContainer for \"26b7618ef7f2736c0f209e6fc1c262252a9a7ebf22120feddd869c4ea6389469\""
Sep 4 17:40:04.595349 systemd[1]: Started cri-containerd-26b7618ef7f2736c0f209e6fc1c262252a9a7ebf22120feddd869c4ea6389469.scope - libcontainer container 26b7618ef7f2736c0f209e6fc1c262252a9a7ebf22120feddd869c4ea6389469.
Sep 4 17:40:04.670222 containerd[1981]: time="2024-09-04T17:40:04.670151321Z" level=info msg="StartContainer for \"26b7618ef7f2736c0f209e6fc1c262252a9a7ebf22120feddd869c4ea6389469\" returns successfully"
Sep 4 17:40:05.307363 kubelet[3382]: I0904 17:40:05.307040 3382 scope.go:117] "RemoveContainer" containerID="0e5a2d26543a7742e7428ddf20fb053eddca5a51927579e89b0e6ce21a4b5783"
Sep 4 17:40:05.338886 containerd[1981]: time="2024-09-04T17:40:05.338533730Z" level=info msg="CreateContainer within sandbox \"123135c607475e8adb8ec0d1a7254c66b0bec83291857e0b11baf39aa78debc9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 4 17:40:05.394015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666664220.mount: Deactivated successfully.
Sep 4 17:40:05.398243 containerd[1981]: time="2024-09-04T17:40:05.398105939Z" level=info msg="CreateContainer within sandbox \"123135c607475e8adb8ec0d1a7254c66b0bec83291857e0b11baf39aa78debc9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"521a944e4ed7711afe0669231ee8cca9efc5a72631ae937aa58c2ba8afd1c8f4\""
Sep 4 17:40:05.404709 containerd[1981]: time="2024-09-04T17:40:05.404662059Z" level=info msg="StartContainer for \"521a944e4ed7711afe0669231ee8cca9efc5a72631ae937aa58c2ba8afd1c8f4\""
Sep 4 17:40:05.425268 kubelet[3382]: E0904 17:40:05.422171 3382 controller.go:193] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-29-194)"
Sep 4 17:40:05.509787 systemd[1]: Started cri-containerd-521a944e4ed7711afe0669231ee8cca9efc5a72631ae937aa58c2ba8afd1c8f4.scope - libcontainer container 521a944e4ed7711afe0669231ee8cca9efc5a72631ae937aa58c2ba8afd1c8f4.
Sep 4 17:40:05.589462 containerd[1981]: time="2024-09-04T17:40:05.589350058Z" level=info msg="StartContainer for \"521a944e4ed7711afe0669231ee8cca9efc5a72631ae937aa58c2ba8afd1c8f4\" returns successfully"
Sep 4 17:40:08.306916 systemd[1]: cri-containerd-c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d.scope: Deactivated successfully.
Sep 4 17:40:08.307959 systemd[1]: cri-containerd-c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d.scope: Consumed 2.119s CPU time, 15.5M memory peak, 0B memory swap peak.
Sep 4 17:40:08.359481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d-rootfs.mount: Deactivated successfully.
Sep 4 17:40:08.362417 containerd[1981]: time="2024-09-04T17:40:08.362352400Z" level=info msg="shim disconnected" id=c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d namespace=k8s.io
Sep 4 17:40:08.362417 containerd[1981]: time="2024-09-04T17:40:08.362411958Z" level=warning msg="cleaning up after shim disconnected" id=c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d namespace=k8s.io
Sep 4 17:40:08.363597 containerd[1981]: time="2024-09-04T17:40:08.362424731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:40:09.329528 kubelet[3382]: I0904 17:40:09.328689 3382 scope.go:117] "RemoveContainer" containerID="c00f25bd56e2043e29e8f4ad206f32910cb1621cb1eb934211df89087d885d8d"
Sep 4 17:40:09.341373 containerd[1981]: time="2024-09-04T17:40:09.341273429Z" level=info msg="CreateContainer within sandbox \"e3c3d38f86caf721b1cfc5b660fb94becc97236984f9126081334c3124d23251\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:40:09.390836 containerd[1981]: time="2024-09-04T17:40:09.390617463Z" level=info msg="CreateContainer within sandbox \"e3c3d38f86caf721b1cfc5b660fb94becc97236984f9126081334c3124d23251\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4812cbc067985733ac825cd51554604f2b4b878233f6c727f4847cea9cf1d87a\""
Sep 4 17:40:09.393249 containerd[1981]: time="2024-09-04T17:40:09.391772834Z" level=info msg="StartContainer for \"4812cbc067985733ac825cd51554604f2b4b878233f6c727f4847cea9cf1d87a\""
Sep 4 17:40:09.499402 systemd[1]: Started cri-containerd-4812cbc067985733ac825cd51554604f2b4b878233f6c727f4847cea9cf1d87a.scope - libcontainer container 4812cbc067985733ac825cd51554604f2b4b878233f6c727f4847cea9cf1d87a.
Sep 4 17:40:09.562602 containerd[1981]: time="2024-09-04T17:40:09.562541517Z" level=info msg="StartContainer for \"4812cbc067985733ac825cd51554604f2b4b878233f6c727f4847cea9cf1d87a\" returns successfully"
Sep 4 17:40:11.699062 systemd[1]: run-containerd-runc-k8s.io-93e56748b3621468b9c8fd1773c6c50f975b85da555361010eb175ad63a80565-runc.9OXaG3.mount: Deactivated successfully.
Sep 4 17:40:15.467001 kubelet[3382]: E0904 17:40:15.466060 3382 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-194?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"