Sep 4 17:33:40.521483 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 4 17:33:40.521527 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:33:40.521542 kernel: BIOS-provided physical RAM map: Sep 4 17:33:40.521552 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 4 17:33:40.521561 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 4 17:33:40.521572 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 4 17:33:40.521587 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Sep 4 17:33:40.521597 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Sep 4 17:33:40.521608 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Sep 4 17:33:40.521618 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 4 17:33:40.521629 kernel: NX (Execute Disable) protection: active Sep 4 17:33:40.521639 kernel: APIC: Static calls initialized Sep 4 17:33:40.521650 kernel: SMBIOS 2.7 present. 
Sep 4 17:33:40.521662 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 4 17:33:40.521677 kernel: Hypervisor detected: KVM Sep 4 17:33:40.521689 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 17:33:40.521701 kernel: kvm-clock: using sched offset of 7848104995 cycles Sep 4 17:33:40.521713 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 17:33:40.521725 kernel: tsc: Detected 2499.996 MHz processor Sep 4 17:33:40.521737 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:33:40.521749 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:33:40.521764 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Sep 4 17:33:40.521776 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 4 17:33:40.521788 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:33:40.522011 kernel: Using GB pages for direct mapping Sep 4 17:33:40.522026 kernel: ACPI: Early table checksum verification disabled Sep 4 17:33:40.522038 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Sep 4 17:33:40.522050 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Sep 4 17:33:40.522062 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 4 17:33:40.522074 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 4 17:33:40.523914 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Sep 4 17:33:40.523940 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 4 17:33:40.523956 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 4 17:33:40.524023 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 4 17:33:40.524039 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 
00000001) Sep 4 17:33:40.524054 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 4 17:33:40.524069 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 4 17:33:40.524084 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 4 17:33:40.524098 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Sep 4 17:33:40.524119 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Sep 4 17:33:40.524141 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Sep 4 17:33:40.524334 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Sep 4 17:33:40.524353 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Sep 4 17:33:40.524369 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Sep 4 17:33:40.524388 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Sep 4 17:33:40.524403 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Sep 4 17:33:40.524419 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Sep 4 17:33:40.524435 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Sep 4 17:33:40.524451 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:33:40.524467 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:33:40.524483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 4 17:33:40.524499 kernel: NUMA: Initialized distance table, cnt=1 Sep 4 17:33:40.524515 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Sep 4 17:33:40.524534 kernel: Zone ranges: Sep 4 17:33:40.524549 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:33:40.524565 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Sep 4 17:33:40.524626 kernel: Normal empty Sep 4 17:33:40.524645 kernel: Movable zone start for each node Sep 4 17:33:40.524662 kernel: Early memory 
node ranges Sep 4 17:33:40.524728 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 4 17:33:40.524749 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Sep 4 17:33:40.524765 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Sep 4 17:33:40.524785 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:33:40.524800 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 4 17:33:40.524815 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Sep 4 17:33:40.524831 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 17:33:40.524847 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 17:33:40.524922 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 4 17:33:40.524938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 17:33:40.524953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:33:40.524969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 17:33:40.524984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 17:33:40.525072 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:33:40.525092 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 17:33:40.525109 kernel: TSC deadline timer available Sep 4 17:33:40.525123 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:33:40.525139 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 17:33:40.525155 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Sep 4 17:33:40.525171 kernel: Booting paravirtualized kernel on KVM Sep 4 17:33:40.525187 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:33:40.525203 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:33:40.525224 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 
17:33:40.525239 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:33:40.525255 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:33:40.525270 kernel: kvm-guest: PV spinlocks enabled Sep 4 17:33:40.525286 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:33:40.525303 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:33:40.525320 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:33:40.525468 kernel: random: crng init done Sep 4 17:33:40.525492 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:33:40.525508 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:33:40.525524 kernel: Fallback order for Node 0: 0 Sep 4 17:33:40.525539 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 506242 Sep 4 17:33:40.525555 kernel: Policy zone: DMA32 Sep 4 17:33:40.525570 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:33:40.525644 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 125152K reserved, 0K cma-reserved) Sep 4 17:33:40.525662 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:33:40.525683 kernel: Kernel/User page tables isolation: enabled Sep 4 17:33:40.525699 kernel: ftrace: allocating 37748 entries in 148 pages Sep 4 17:33:40.525714 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:33:40.525730 kernel: Dynamic Preempt: voluntary Sep 4 17:33:40.525745 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:33:40.525763 kernel: rcu: RCU event tracing is enabled. Sep 4 17:33:40.525779 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:33:40.525794 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:33:40.525810 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:33:40.525825 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:33:40.525844 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 17:33:40.529927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:33:40.529957 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 4 17:33:40.529974 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 4 17:33:40.529989 kernel: Console: colour VGA+ 80x25 Sep 4 17:33:40.530003 kernel: printk: console [ttyS0] enabled Sep 4 17:33:40.530014 kernel: ACPI: Core revision 20230628 Sep 4 17:33:40.530027 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 4 17:33:40.530045 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:33:40.530074 kernel: x2apic enabled Sep 4 17:33:40.530093 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 17:33:40.530131 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 4 17:33:40.530158 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Sep 4 17:33:40.530177 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 17:33:40.530196 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 4 17:33:40.530216 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:33:40.530298 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:33:40.530313 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:33:40.530329 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:33:40.530346 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 4 17:33:40.530362 kernel: RETBleed: Vulnerable Sep 4 17:33:40.530383 kernel: Speculative Store Bypass: Vulnerable Sep 4 17:33:40.530399 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:33:40.530415 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:33:40.530431 kernel: GDS: Unknown: Dependent on hypervisor status Sep 4 17:33:40.530447 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:33:40.530501 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:33:40.530518 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:33:40.530538 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 4 17:33:40.530555 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 4 17:33:40.530571 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 4 17:33:40.530587 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 4 17:33:40.530603 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 4 17:33:40.530619 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 4 17:33:40.530635 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:33:40.530651 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 4 17:33:40.530667 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 4 17:33:40.530683 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 4 17:33:40.530702 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 4 17:33:40.530718 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 4 17:33:40.530733 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 4 17:33:40.530749 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Sep 4 17:33:40.533009 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:33:40.533052 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:33:40.533069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:33:40.533086 kernel: landlock: Up and running. Sep 4 17:33:40.533103 kernel: SELinux: Initializing. Sep 4 17:33:40.533119 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 4 17:33:40.533135 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 4 17:33:40.533151 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 4 17:33:40.533175 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:33:40.533192 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:33:40.533208 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:33:40.533225 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 4 17:33:40.533242 kernel: signal: max sigframe size: 3632 Sep 4 17:33:40.533511 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:33:40.533535 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:33:40.533552 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:33:40.533568 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:33:40.533590 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:33:40.533606 kernel: .... node #0, CPUs: #1 Sep 4 17:33:40.533625 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 4 17:33:40.533643 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 4 17:33:40.533659 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:33:40.533675 kernel: smpboot: Max logical packages: 1 Sep 4 17:33:40.533691 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Sep 4 17:33:40.535943 kernel: devtmpfs: initialized Sep 4 17:33:40.535986 kernel: x86/mm: Memory block size: 128MB Sep 4 17:33:40.536010 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:33:40.536027 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:33:40.536044 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:33:40.536060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:33:40.536076 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:33:40.536093 kernel: audit: type=2000 audit(1725471218.341:1): state=initialized audit_enabled=0 res=1 Sep 4 17:33:40.536109 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:33:40.536125 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:33:40.536141 kernel: cpuidle: using governor menu Sep 4 17:33:40.536161 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:33:40.536177 kernel: dca service started, version 1.12.1 Sep 4 17:33:40.536193 kernel: PCI: Using configuration type 1 for base access Sep 4 17:33:40.536209 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 4 17:33:40.536225 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:33:40.536242 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:33:40.536257 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:33:40.536274 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:33:40.536293 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:33:40.536344 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:33:40.536361 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:33:40.536377 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:33:40.536393 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 4 17:33:40.536410 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:33:40.536426 kernel: ACPI: Interpreter enabled Sep 4 17:33:40.536442 kernel: ACPI: PM: (supports S0 S5) Sep 4 17:33:40.536458 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:33:40.536474 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:33:40.536494 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 17:33:40.536510 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 4 17:33:40.536526 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:33:40.540240 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:33:40.540405 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 4 17:33:40.540533 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 4 17:33:40.540552 kernel: acpiphp: Slot [3] registered Sep 4 17:33:40.540575 kernel: acpiphp: Slot [4] registered Sep 4 17:33:40.540590 kernel: acpiphp: Slot [5] registered Sep 4 17:33:40.540606 kernel: acpiphp: Slot [6] registered Sep 4 17:33:40.540622 kernel: acpiphp: Slot [7] 
registered Sep 4 17:33:40.540637 kernel: acpiphp: Slot [8] registered Sep 4 17:33:40.540653 kernel: acpiphp: Slot [9] registered Sep 4 17:33:40.540669 kernel: acpiphp: Slot [10] registered Sep 4 17:33:40.540684 kernel: acpiphp: Slot [11] registered Sep 4 17:33:40.540700 kernel: acpiphp: Slot [12] registered Sep 4 17:33:40.540718 kernel: acpiphp: Slot [13] registered Sep 4 17:33:40.540734 kernel: acpiphp: Slot [14] registered Sep 4 17:33:40.540750 kernel: acpiphp: Slot [15] registered Sep 4 17:33:40.540765 kernel: acpiphp: Slot [16] registered Sep 4 17:33:40.540780 kernel: acpiphp: Slot [17] registered Sep 4 17:33:40.540796 kernel: acpiphp: Slot [18] registered Sep 4 17:33:40.540812 kernel: acpiphp: Slot [19] registered Sep 4 17:33:40.540828 kernel: acpiphp: Slot [20] registered Sep 4 17:33:40.540843 kernel: acpiphp: Slot [21] registered Sep 4 17:33:40.540871 kernel: acpiphp: Slot [22] registered Sep 4 17:33:40.540890 kernel: acpiphp: Slot [23] registered Sep 4 17:33:40.540906 kernel: acpiphp: Slot [24] registered Sep 4 17:33:40.540923 kernel: acpiphp: Slot [25] registered Sep 4 17:33:40.540939 kernel: acpiphp: Slot [26] registered Sep 4 17:33:40.540955 kernel: acpiphp: Slot [27] registered Sep 4 17:33:40.540971 kernel: acpiphp: Slot [28] registered Sep 4 17:33:40.540987 kernel: acpiphp: Slot [29] registered Sep 4 17:33:40.541003 kernel: acpiphp: Slot [30] registered Sep 4 17:33:40.541020 kernel: acpiphp: Slot [31] registered Sep 4 17:33:40.541040 kernel: PCI host bridge to bus 0000:00 Sep 4 17:33:40.541299 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 17:33:40.541432 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 17:33:40.541553 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 17:33:40.541669 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 4 17:33:40.545917 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:33:40.546208 kernel: 
pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 17:33:40.546574 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 4 17:33:40.546747 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Sep 4 17:33:40.547600 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 17:33:40.547758 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Sep 4 17:33:40.547904 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 4 17:33:40.548032 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 4 17:33:40.548161 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 4 17:33:40.548297 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 4 17:33:40.548426 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 4 17:33:40.548558 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 4 17:33:40.548686 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 13671 usecs Sep 4 17:33:40.548835 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Sep 4 17:33:40.553232 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Sep 4 17:33:40.553390 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Sep 4 17:33:40.553532 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 17:33:40.553816 kernel: pci 0000:00:03.0: pci_fixup_video+0x0/0x110 took 12695 usecs Sep 4 17:33:40.556154 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 4 17:33:40.556323 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Sep 4 17:33:40.556470 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 4 17:33:40.556607 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Sep 4 17:33:40.556635 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 17:33:40.556653 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 17:33:40.556669 kernel: 
ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 17:33:40.556686 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 17:33:40.556703 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 17:33:40.556720 kernel: iommu: Default domain type: Translated Sep 4 17:33:40.556737 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:33:40.556753 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:33:40.556770 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 17:33:40.560902 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 4 17:33:40.560948 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Sep 4 17:33:40.561161 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 4 17:33:40.561360 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 4 17:33:40.561512 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 17:33:40.561535 kernel: vgaarb: loaded Sep 4 17:33:40.561553 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 4 17:33:40.561570 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 4 17:33:40.561587 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 17:33:40.561612 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:33:40.561629 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:33:40.561646 kernel: pnp: PnP ACPI init Sep 4 17:33:40.561663 kernel: pnp: PnP ACPI: found 5 devices Sep 4 17:33:40.561680 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:33:40.561697 kernel: NET: Registered PF_INET protocol family Sep 4 17:33:40.561714 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:33:40.561731 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 4 17:33:40.561748 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 
17:33:40.561768 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:33:40.561785 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 4 17:33:40.561802 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 4 17:33:40.561819 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 4 17:33:40.561835 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 4 17:33:40.561852 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:33:40.563941 kernel: NET: Registered PF_XDP protocol family Sep 4 17:33:40.564180 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 17:33:40.564318 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 17:33:40.564433 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 17:33:40.564566 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 4 17:33:40.564775 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 17:33:40.564796 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:33:40.564813 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:33:40.564828 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 4 17:33:40.564843 kernel: clocksource: Switched to clocksource tsc Sep 4 17:33:40.565916 kernel: Initialise system trusted keyrings Sep 4 17:33:40.565933 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 4 17:33:40.565947 kernel: Key type asymmetric registered Sep 4 17:33:40.565960 kernel: Asymmetric key parser 'x509' registered Sep 4 17:33:40.565974 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:33:40.565991 kernel: io scheduler mq-deadline registered Sep 4 17:33:40.566007 kernel: io scheduler kyber registered Sep 4 17:33:40.566023 kernel: io scheduler bfq registered Sep 4 
17:33:40.566038 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:33:40.566060 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:33:40.566077 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:33:40.566092 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 17:33:40.566108 kernel: i8042: Warning: Keylock active Sep 4 17:33:40.566121 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 17:33:40.566133 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 17:33:40.566426 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 4 17:33:40.566596 kernel: rtc_cmos 00:00: registered as rtc0 Sep 4 17:33:40.566754 kernel: rtc_cmos 00:00: setting system clock to 2024-09-04T17:33:39 UTC (1725471219) Sep 4 17:33:40.566924 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 4 17:33:40.566950 kernel: intel_pstate: CPU model not supported Sep 4 17:33:40.566970 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:33:40.566990 kernel: Segment Routing with IPv6 Sep 4 17:33:40.567010 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:33:40.567031 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:33:40.567052 kernel: Key type dns_resolver registered Sep 4 17:33:40.567071 kernel: IPI shorthand broadcast: enabled Sep 4 17:33:40.567097 kernel: sched_clock: Marking stable (1048257584, 391540859)->(1660209511, -220411068) Sep 4 17:33:40.567116 kernel: registered taskstats version 1 Sep 4 17:33:40.567136 kernel: Loading compiled-in X.509 certificates Sep 4 17:33:40.567156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 4 17:33:40.567174 kernel: Key type .fscrypt registered Sep 4 17:33:40.567193 kernel: Key type fscrypt-provisioning registered Sep 4 17:33:40.567221 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 4 17:33:40.567242 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:33:40.567262 kernel: ima: No architecture policies found Sep 4 17:33:40.567285 kernel: clk: Disabling unused clocks Sep 4 17:33:40.567306 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 4 17:33:40.567325 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:33:40.567345 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 4 17:33:40.567366 kernel: Run /init as init process Sep 4 17:33:40.567385 kernel: with arguments: Sep 4 17:33:40.567405 kernel: /init Sep 4 17:33:40.567423 kernel: with environment: Sep 4 17:33:40.567502 kernel: HOME=/ Sep 4 17:33:40.567532 kernel: TERM=linux Sep 4 17:33:40.567578 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:33:40.567608 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:33:40.567635 systemd[1]: Detected virtualization amazon. Sep 4 17:33:40.567652 systemd[1]: Detected architecture x86-64. Sep 4 17:33:40.567672 systemd[1]: Running in initrd. Sep 4 17:33:40.567688 systemd[1]: No hostname configured, using default hostname. Sep 4 17:33:40.567706 systemd[1]: Hostname set to <localhost>. Sep 4 17:33:40.567720 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:33:40.567736 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 4 17:33:40.567753 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:33:40.567812 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:33:40.567829 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 4 17:33:40.567847 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:33:40.567919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:33:40.567941 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:33:40.567957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:33:40.567976 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:33:40.567991 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:33:40.568006 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:33:40.568023 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:33:40.568046 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:33:40.568060 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:33:40.568075 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:33:40.568091 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:33:40.568110 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:33:40.568127 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:33:40.568142 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:33:40.568158 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:33:40.568173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:33:40.568192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:33:40.568207 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 4 17:33:40.568222 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:33:40.568236 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:33:40.568250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:33:40.568272 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:33:40.568287 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:33:40.568303 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:33:40.568318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:33:40.568430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:33:40.568446 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:33:40.568462 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:33:40.568513 systemd-journald[178]: Collecting audit messages is disabled.
Sep 4 17:33:40.568552 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:33:40.568569 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:33:40.568589 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:33:40.568605 kernel: Bridge firewalling registered
Sep 4 17:33:40.568621 systemd-journald[178]: Journal started
Sep 4 17:33:40.568651 systemd-journald[178]: Runtime Journal (/run/log/journal/ec23d113116fdbc80c307ef24dcd177b) is 4.8M, max 38.6M, 33.7M free.
Sep 4 17:33:40.570957 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:33:40.447155 systemd-modules-load[179]: Inserted module 'overlay'
Sep 4 17:33:40.736986 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:33:40.565211 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 4 17:33:40.737304 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:33:40.749574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:33:40.771204 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:33:40.799362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:33:40.802316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:33:40.806079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:33:40.829078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:33:40.840442 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:33:40.850350 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:33:40.856715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:33:40.875793 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:33:40.904255 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:33:40.937747 dracut-cmdline[211]: dracut-dracut-053
Sep 4 17:33:40.942313 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:33:41.029201 systemd-resolved[213]: Positive Trust Anchors:
Sep 4 17:33:41.029219 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:33:41.029277 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:33:41.060757 systemd-resolved[213]: Defaulting to hostname 'linux'.
Sep 4 17:33:41.064999 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:33:41.069007 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:33:41.128883 kernel: SCSI subsystem initialized
Sep 4 17:33:41.146414 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:33:41.158884 kernel: iscsi: registered transport (tcp)
Sep 4 17:33:41.255165 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:33:41.255288 kernel: QLogic iSCSI HBA Driver
Sep 4 17:33:41.386753 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:33:41.399296 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:33:41.449593 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:33:41.449689 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:33:41.449712 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:33:41.549913 kernel: raid6: avx512x4 gen() 13896 MB/s
Sep 4 17:33:41.566917 kernel: raid6: avx512x2 gen() 14550 MB/s
Sep 4 17:33:41.583913 kernel: raid6: avx512x1 gen() 15541 MB/s
Sep 4 17:33:41.601043 kernel: raid6: avx2x4 gen() 15681 MB/s
Sep 4 17:33:41.624059 kernel: raid6: avx2x2 gen() 13990 MB/s
Sep 4 17:33:41.642345 kernel: raid6: avx2x1 gen() 5859 MB/s
Sep 4 17:33:41.642435 kernel: raid6: using algorithm avx2x4 gen() 15681 MB/s
Sep 4 17:33:41.666736 kernel: raid6: .... xor() 3203 MB/s, rmw enabled
Sep 4 17:33:41.666834 kernel: raid6: using avx512x2 recovery algorithm
Sep 4 17:33:41.693890 kernel: xor: automatically using best checksumming function avx
Sep 4 17:33:42.061047 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:33:42.075025 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:33:42.082759 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:33:42.123399 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Sep 4 17:33:42.132161 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:33:42.165137 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:33:42.225646 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Sep 4 17:33:42.359634 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:33:42.367155 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:33:42.601844 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:33:42.615773 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:33:42.671779 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:33:42.677112 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:33:42.680639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:33:42.683397 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:33:42.700301 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:33:42.714541 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:33:42.714822 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:33:42.745150 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 4 17:33:42.803085 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:33:42.813279 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:33:42.826923 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:70:8a:93:c1:07
Sep 4 17:33:42.854150 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:33:42.854230 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:33:42.885283 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:33:42.933625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:33:42.933927 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:33:42.939259 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:33:42.940904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:33:42.943354 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:33:42.949758 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:33:42.964993 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:33:42.968326 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 17:33:42.965970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:33:42.991890 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:33:43.023179 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:33:43.023258 kernel: GPT:9289727 != 16777215
Sep 4 17:33:43.023280 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:33:43.023299 kernel: GPT:9289727 != 16777215
Sep 4 17:33:43.023316 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:33:43.023337 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:33:43.280529 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (450)
Sep 4 17:33:43.301917 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (452)
Sep 4 17:33:43.319569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:33:43.335193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:33:43.429120 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:33:43.491009 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:33:43.500731 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:33:43.515042 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 17:33:43.515421 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:33:43.534600 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:33:43.543275 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:33:43.556586 disk-uuid[629]: Primary Header is updated.
Sep 4 17:33:43.556586 disk-uuid[629]: Secondary Entries is updated.
Sep 4 17:33:43.556586 disk-uuid[629]: Secondary Header is updated.
Sep 4 17:33:43.565998 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:33:43.577986 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:33:44.584885 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:33:44.591409 disk-uuid[630]: The operation has completed successfully.
Sep 4 17:33:44.941159 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:33:44.941398 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:33:44.991159 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:33:45.025815 sh[888]: Success
Sep 4 17:33:45.062001 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 4 17:33:45.206571 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:33:45.231143 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:33:45.259405 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:33:45.289963 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772
Sep 4 17:33:45.290052 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:33:45.291328 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:33:45.291372 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:33:45.293275 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:33:45.382156 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:33:45.416649 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:33:45.417750 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:33:45.434163 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:33:45.437563 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:33:45.474507 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:33:45.474652 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:33:45.474676 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:33:45.485894 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:33:45.503729 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:33:45.505372 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:33:45.525593 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:33:45.534463 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:33:45.593755 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:33:45.604701 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:33:45.634261 systemd-networkd[1081]: lo: Link UP
Sep 4 17:33:45.634276 systemd-networkd[1081]: lo: Gained carrier
Sep 4 17:33:45.636012 systemd-networkd[1081]: Enumeration completed
Sep 4 17:33:45.636502 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:33:45.636620 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:33:45.637168 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:33:45.638876 systemd[1]: Reached target network.target - Network.
Sep 4 17:33:45.707544 systemd-networkd[1081]: eth0: Link UP
Sep 4 17:33:45.707723 systemd-networkd[1081]: eth0: Gained carrier
Sep 4 17:33:45.707744 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:33:45.730992 systemd-networkd[1081]: eth0: DHCPv4 address 172.31.21.246/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:33:46.067420 ignition[1017]: Ignition 2.19.0
Sep 4 17:33:46.067437 ignition[1017]: Stage: fetch-offline
Sep 4 17:33:46.067802 ignition[1017]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:33:46.067816 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:33:46.074111 ignition[1017]: Ignition finished successfully
Sep 4 17:33:46.076485 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:33:46.089286 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:33:46.129724 ignition[1091]: Ignition 2.19.0
Sep 4 17:33:46.129740 ignition[1091]: Stage: fetch
Sep 4 17:33:46.131696 ignition[1091]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:33:46.131710 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:33:46.131806 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:33:46.145662 ignition[1091]: PUT result: OK
Sep 4 17:33:46.148897 ignition[1091]: parsed url from cmdline: ""
Sep 4 17:33:46.149156 ignition[1091]: no config URL provided
Sep 4 17:33:46.149170 ignition[1091]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:33:46.149191 ignition[1091]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:33:46.149212 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:33:46.150398 ignition[1091]: PUT result: OK
Sep 4 17:33:46.150503 ignition[1091]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:33:46.157673 ignition[1091]: GET result: OK
Sep 4 17:33:46.161315 ignition[1091]: parsing config with SHA512: 60cc7b3ae8ee64565d6507019dd720dae1f13700615413fe99bdaced53dbfd9cdf516e786af41b83f5b3882808cc945f4bb6d7c03c44135ee9c6dbb523fa9b4d
Sep 4 17:33:46.176791 unknown[1091]: fetched base config from "system"
Sep 4 17:33:46.177660 ignition[1091]: fetch: fetch complete
Sep 4 17:33:46.176814 unknown[1091]: fetched base config from "system"
Sep 4 17:33:46.177952 ignition[1091]: fetch: fetch passed
Sep 4 17:33:46.176823 unknown[1091]: fetched user config from "aws"
Sep 4 17:33:46.178032 ignition[1091]: Ignition finished successfully
Sep 4 17:33:46.183712 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:33:46.197133 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:33:46.227125 ignition[1097]: Ignition 2.19.0
Sep 4 17:33:46.228320 ignition[1097]: Stage: kargs
Sep 4 17:33:46.229914 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:33:46.229932 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:33:46.230099 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:33:46.232908 ignition[1097]: PUT result: OK
Sep 4 17:33:46.261378 ignition[1097]: kargs: kargs passed
Sep 4 17:33:46.261595 ignition[1097]: Ignition finished successfully
Sep 4 17:33:46.277923 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:33:46.297140 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:33:46.392960 ignition[1104]: Ignition 2.19.0
Sep 4 17:33:46.393660 ignition[1104]: Stage: disks
Sep 4 17:33:46.394222 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:33:46.394248 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:33:46.394907 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:33:46.409333 ignition[1104]: PUT result: OK
Sep 4 17:33:46.433555 ignition[1104]: disks: disks passed
Sep 4 17:33:46.437019 ignition[1104]: Ignition finished successfully
Sep 4 17:33:46.451683 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:33:46.452094 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:33:46.471924 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:33:46.472064 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:33:46.491338 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:33:46.501250 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:33:46.531358 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:33:46.621972 systemd-fsck[1113]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:33:46.634451 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:33:46.655375 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:33:46.962900 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none.
Sep 4 17:33:46.965390 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:33:46.967168 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:33:46.985708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:33:47.003410 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:33:47.009357 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:33:47.010520 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:33:47.010562 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:33:47.044954 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1132)
Sep 4 17:33:47.053521 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:33:47.053604 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:33:47.053625 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:33:47.067452 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:33:47.069250 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:33:47.079846 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:33:47.098035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:33:47.378034 systemd-networkd[1081]: eth0: Gained IPv6LL
Sep 4 17:33:47.412324 initrd-setup-root[1156]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:33:47.442540 initrd-setup-root[1163]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:33:47.462744 initrd-setup-root[1170]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:33:47.484528 initrd-setup-root[1177]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:33:47.828756 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:33:47.840040 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:33:47.846330 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:33:47.923831 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:33:47.926668 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:33:47.945101 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:33:47.979681 ignition[1245]: INFO : Ignition 2.19.0
Sep 4 17:33:47.979681 ignition[1245]: INFO : Stage: mount
Sep 4 17:33:47.986318 ignition[1245]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:33:47.986318 ignition[1245]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:33:47.986318 ignition[1245]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:33:48.000499 ignition[1245]: INFO : PUT result: OK
Sep 4 17:33:48.018797 ignition[1245]: INFO : mount: mount passed
Sep 4 17:33:48.019817 ignition[1245]: INFO : Ignition finished successfully
Sep 4 17:33:48.025887 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:33:48.037117 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:33:48.096176 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:33:48.132990 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1256)
Sep 4 17:33:48.135507 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:33:48.135575 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:33:48.135594 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:33:48.144885 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:33:48.148370 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:33:48.208894 ignition[1273]: INFO : Ignition 2.19.0
Sep 4 17:33:48.208894 ignition[1273]: INFO : Stage: files
Sep 4 17:33:48.212172 ignition[1273]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:33:48.212172 ignition[1273]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:33:48.212172 ignition[1273]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:33:48.218180 ignition[1273]: INFO : PUT result: OK
Sep 4 17:33:48.229390 ignition[1273]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:33:48.231580 ignition[1273]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:33:48.231580 ignition[1273]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:33:48.252029 ignition[1273]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:33:48.254254 ignition[1273]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:33:48.254254 ignition[1273]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:33:48.253348 unknown[1273]: wrote ssh authorized keys file for user: core
Sep 4 17:33:48.279287 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:33:48.285746 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:33:48.346439 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:33:48.484668 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:33:48.484668 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:33:48.500121 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Sep 4 17:33:48.944245 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:33:49.604213 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:33:49.604213 ignition[1273]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:33:49.614077 ignition[1273]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:33:49.622267 ignition[1273]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:33:49.622267 ignition[1273]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:33:49.622267 ignition[1273]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:33:49.622267 ignition[1273]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:33:49.622267 ignition[1273]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:33:49.622267 ignition[1273]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:33:49.622267 ignition[1273]: INFO : files: files passed
Sep 4 17:33:49.622267 ignition[1273]: INFO : Ignition finished successfully
Sep 4 17:33:49.644186 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:33:49.671360 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:33:49.689133 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:33:49.724167 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:33:49.724381 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:33:49.751444 initrd-setup-root-after-ignition[1301]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:33:49.753647 initrd-setup-root-after-ignition[1301]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:33:49.755994 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:33:49.763869 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:33:49.768505 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:33:49.793029 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:33:49.905612 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:33:49.905974 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:33:49.912653 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:33:49.915436 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:33:49.918598 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:33:49.930226 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:33:49.998417 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:33:50.011191 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:33:50.103447 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:33:50.109799 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:33:50.112279 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:33:50.117187 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:33:50.119670 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:33:50.128473 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:33:50.129158 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:33:50.147210 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:33:50.160158 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:33:50.164419 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:33:50.178163 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:33:50.189613 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:33:50.197763 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:33:50.206749 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:33:50.213610 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:33:50.215391 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:33:50.215577 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:33:50.222925 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:33:50.225367 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:33:50.225496 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:33:50.225650 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:33:50.230875 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:33:50.232475 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:33:50.235950 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:33:50.237217 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:33:50.244818 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:33:50.246064 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:33:50.270246 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:33:50.306539 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:33:50.314573 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:33:50.315001 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:33:50.330452 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:33:50.337523 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:33:50.368716 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:33:50.369004 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:33:50.389341 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:33:50.393073 ignition[1325]: INFO : Ignition 2.19.0
Sep 4 17:33:50.393073 ignition[1325]: INFO : Stage: umount
Sep 4 17:33:50.393073 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:33:50.393073 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:33:50.393073 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:33:50.401248 ignition[1325]: INFO : PUT result: OK
Sep 4 17:33:50.401157 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:33:50.405720 ignition[1325]: INFO : umount: umount passed
Sep 4 17:33:50.405720 ignition[1325]: INFO : Ignition finished successfully
Sep 4 17:33:50.401402 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:33:50.406818 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:33:50.407008 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:33:50.421990 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:33:50.427748 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:33:50.428758 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:33:50.430352 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:33:50.433926 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:33:50.434006 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:33:50.436573 systemd[1]: Stopped target network.target - Network.
Sep 4 17:33:50.439959 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:33:50.440119 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:33:50.443814 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:33:50.445398 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
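The `PUT http://169.254.169.254/latest/api/token` line above is Ignition performing the IMDSv2 session-token handshake before reading EC2 instance metadata. A minimal sketch of that handshake in shell follows; the endpoint and header names are the documented AWS ones, while the 21600-second TTL is just an example value (the actual network calls are left commented out, since they only work on an EC2 instance):

```shell
# IMDSv2: first PUT for a short-lived session token, then authenticated GETs.
imds=http://169.254.169.254
token_url="$imds/latest/api/token"
ttl_header="X-aws-ec2-metadata-token-ttl-seconds: 21600"
# On an actual EC2 instance:
#   token=$(curl -fsS -X PUT "$token_url" -H "$ttl_header")
#   curl -fsS -H "X-aws-ec2-metadata-token: $token" \
#        "$imds/latest/meta-data/instance-id"
echo "$token_url"
```

A plain GET without the token header is rejected when the instance enforces IMDSv2, which is why the log shows the PUT attempt first.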
Sep 4 17:33:50.449205 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:33:50.453769 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:33:50.455379 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:33:50.457493 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:33:50.457548 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:33:50.460139 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:33:50.460197 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:33:50.461809 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:33:50.461924 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:33:50.467384 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:33:50.467456 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:33:50.469282 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:33:50.469376 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:33:50.472054 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:33:50.474731 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:33:50.482123 systemd-networkd[1081]: eth0: DHCPv6 lease lost
Sep 4 17:33:50.488187 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:33:50.488342 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:33:50.492410 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:33:50.492464 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:33:50.503303 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:33:50.511082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:33:50.511193 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:33:50.527055 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:33:50.558346 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:33:50.559120 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:33:50.592580 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:33:50.592705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:33:50.594176 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:33:50.594255 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:33:50.595988 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:33:50.596060 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:33:50.611491 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:33:50.613721 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:33:50.623346 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:33:50.623447 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:33:50.623641 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:33:50.623684 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:33:50.629417 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:33:50.629597 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:33:50.634851 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:33:50.634982 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:33:50.641270 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:33:50.641357 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:33:50.669498 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:33:50.671289 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:33:50.675950 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:33:50.681749 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:33:50.681822 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:33:50.687982 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:33:50.688803 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:33:50.694836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:33:50.694963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:33:50.728178 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:33:50.728367 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:33:50.754717 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:33:50.754880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:33:50.783292 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:33:50.794749 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:33:50.838246 systemd[1]: Switching root.
Sep 4 17:33:50.877633 systemd-journald[178]: Journal stopped
Sep 4 17:33:54.450009 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:33:54.450130 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:33:54.450158 kernel: SELinux: policy capability open_perms=1
Sep 4 17:33:54.450243 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:33:54.450263 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:33:54.450283 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:33:54.450309 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:33:54.450326 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:33:54.450343 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:33:54.450359 kernel: audit: type=1403 audit(1725471232.094:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:33:54.450382 systemd[1]: Successfully loaded SELinux policy in 84.412ms.
Sep 4 17:33:54.450406 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.559ms.
Sep 4 17:33:54.450427 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:33:54.450446 systemd[1]: Detected virtualization amazon.
Sep 4 17:33:54.450465 systemd[1]: Detected architecture x86-64.
Sep 4 17:33:54.450486 systemd[1]: Detected first boot.
Sep 4 17:33:54.450504 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:33:54.450523 zram_generator::config[1367]: No configuration found.
Sep 4 17:33:54.450543 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:33:54.450561 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:33:54.450581 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:33:54.450602 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:33:54.450625 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:33:54.450868 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:33:54.450960 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:33:54.450979 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:33:54.451097 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:33:54.451124 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:33:54.451187 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:33:54.451209 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:33:54.451238 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:33:54.451258 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:33:54.451287 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:33:54.451305 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:33:54.451327 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:33:54.451465 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:33:54.451496 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:33:54.451519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:33:54.451540 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:33:54.451562 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:33:54.451597 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:33:54.451620 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:33:54.451640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:33:54.451665 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:33:54.451687 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:33:54.451710 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:33:54.451730 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:33:54.451751 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:33:54.451777 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:33:54.451798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:33:54.451825 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:33:54.451842 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:33:54.451898 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:33:54.451917 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:33:54.451936 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:33:54.451955 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:33:54.451974 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:33:54.451998 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:33:54.452016 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:33:54.452037 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:33:54.452056 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:33:54.452075 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:33:54.452095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:33:54.452114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:33:54.452132 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:33:54.452151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:33:54.452172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:33:54.452191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:33:54.452213 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:33:54.452233 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:33:54.452256 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:33:54.452280 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:33:54.452298 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:33:54.452375 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:33:54.452403 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:33:54.452641 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:33:54.452667 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:33:54.452690 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:33:54.452713 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:33:54.452735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:33:54.452757 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:33:54.452780 systemd[1]: Stopped verity-setup.service.
Sep 4 17:33:54.452802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:33:54.452824 kernel: loop: module loaded
Sep 4 17:33:54.452845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:33:54.471429 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:33:54.471459 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:33:54.471479 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:33:54.471512 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:33:54.471531 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:33:54.471550 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:33:54.471569 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:33:54.471588 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:33:54.471606 kernel: fuse: init (API version 7.39)
Sep 4 17:33:54.471625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:33:54.471645 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:33:54.471663 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:33:54.471685 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:33:54.471703 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:33:54.471723 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:33:54.471745 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:33:54.471764 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:33:54.471787 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:33:54.471806 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:33:54.471829 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:33:54.471847 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:33:54.472200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:33:54.472226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:33:54.472245 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:33:54.472263 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:33:54.472325 systemd-journald[1441]: Collecting audit messages is disabled.
Sep 4 17:33:54.472365 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:33:54.472386 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:33:54.472406 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:33:54.472425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:33:54.472445 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:33:54.472463 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:33:54.472481 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:33:54.472504 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:33:54.472523 systemd-journald[1441]: Journal started
Sep 4 17:33:54.472558 systemd-journald[1441]: Runtime Journal (/run/log/journal/ec23d113116fdbc80c307ef24dcd177b) is 4.8M, max 38.6M, 33.7M free.
Sep 4 17:33:53.577748 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:33:54.478492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:33:53.627368 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 17:33:53.627961 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:33:54.528459 kernel: ACPI: bus type drm_connector registered
Sep 4 17:33:54.542306 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:33:54.542419 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:33:54.554382 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:33:54.576353 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:33:54.580504 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:33:54.587740 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:33:54.590095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:33:54.597530 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:33:54.667144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:33:54.722742 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:33:54.735098 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:33:54.749204 kernel: loop0: detected capacity change from 0 to 210664
Sep 4 17:33:54.744303 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:33:54.748240 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:33:54.762656 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:33:54.764604 systemd-tmpfiles[1456]: ACLs are not supported, ignoring.
Sep 4 17:33:54.764630 systemd-tmpfiles[1456]: ACLs are not supported, ignoring.
Sep 4 17:33:54.785113 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:33:54.795710 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:33:54.797827 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:33:54.802480 systemd-journald[1441]: Time spent on flushing to /var/log/journal/ec23d113116fdbc80c307ef24dcd177b is 184.371ms for 972 entries.
Sep 4 17:33:54.802480 systemd-journald[1441]: System Journal (/var/log/journal/ec23d113116fdbc80c307ef24dcd177b) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:33:55.037148 systemd-journald[1441]: Received client request to flush runtime journal.
Sep 4 17:33:55.037248 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:33:55.037290 kernel: loop1: detected capacity change from 0 to 140728
Sep 4 17:33:54.822797 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:33:54.898289 udevadm[1500]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:33:55.063167 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:33:55.068809 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:33:55.074924 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:33:55.088434 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:33:55.101346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:33:55.171479 kernel: loop2: detected capacity change from 0 to 89336
Sep 4 17:33:55.177416 systemd-tmpfiles[1514]: ACLs are not supported, ignoring.
Sep 4 17:33:55.178119 systemd-tmpfiles[1514]: ACLs are not supported, ignoring.
Sep 4 17:33:55.190555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:33:55.359786 kernel: loop3: detected capacity change from 0 to 61336
Sep 4 17:33:55.508899 kernel: loop4: detected capacity change from 0 to 210664
Sep 4 17:33:55.542959 kernel: loop5: detected capacity change from 0 to 140728
Sep 4 17:33:55.574889 kernel: loop6: detected capacity change from 0 to 89336
Sep 4 17:33:55.598959 kernel: loop7: detected capacity change from 0 to 61336
Sep 4 17:33:55.635559 (sd-merge)[1521]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 17:33:55.636952 (sd-merge)[1521]: Merged extensions into '/usr'.
Sep 4 17:33:55.644989 systemd[1]: Reloading requested from client PID 1471 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:33:55.645116 systemd[1]: Reloading...
Sep 4 17:33:55.905651 zram_generator::config[1542]: No configuration found.
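The `(sd-merge)` lines above show systemd-sysext overlaying extension images ('containerd-flatcar', 'docker-flatcar', etc.) onto `/usr`. Each extension is a `/usr` tree carrying an `extension-release` file whose `ID=` must match the host's os-release for the merge to be accepted. A minimal sketch of that on-disk layout follows; the extension name `my-ext` and its contents are purely illustrative, and only the `usr/lib/extension-release.d/` convention is the documented systemd-sysext contract:

```shell
# Build a throwaway sysext-style tree (illustrative; normally placed
# under /var/lib/extensions/ and merged with `systemd-sysext merge`).
ext=$(mktemp -d)/my-ext
mkdir -p "$ext/usr/bin" "$ext/usr/lib/extension-release.d"
# The release file name must match the extension name ("my-ext"),
# and ID= must match the host os-release (ID=flatcar here, as an example).
printf 'ID=flatcar\nSYSEXT_LEVEL=1.0\n' \
  > "$ext/usr/lib/extension-release.d/extension-release.my-ext"
printf '#!/bin/sh\necho hello\n' > "$ext/usr/bin/hello"
chmod +x "$ext/usr/bin/hello"
# systemd-sysext would overlay this tree's usr/ onto the host /usr.
grep '^ID=' "$ext/usr/lib/extension-release.d/extension-release.my-ext"
```

After a merge, the `Reloading requested from client PID … ('systemd-sysext')` entries are systemd re-reading unit files that the newly merged `/usr` trees may have added.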
Sep 4 17:33:56.524913 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:33:56.758185 systemd[1]: Reloading finished in 1111 ms.
Sep 4 17:33:56.849124 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:33:56.863666 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:33:56.876235 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:33:56.910849 systemd[1]: Reloading requested from client PID 1593 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:33:56.910892 systemd[1]: Reloading...
Sep 4 17:33:56.975506 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:33:56.978750 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:33:56.994129 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:33:56.997798 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Sep 4 17:33:56.997916 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Sep 4 17:33:57.040570 systemd-tmpfiles[1594]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:33:57.040590 systemd-tmpfiles[1594]: Skipping /boot
Sep 4 17:33:57.181994 ldconfig[1462]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:33:57.183102 systemd-tmpfiles[1594]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:33:57.183125 systemd-tmpfiles[1594]: Skipping /boot
Sep 4 17:33:57.343179 zram_generator::config[1623]: No configuration found.
Sep 4 17:33:57.768239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:33:57.993784 systemd[1]: Reloading finished in 1081 ms.
Sep 4 17:33:58.026572 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:33:58.028755 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:33:58.037029 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:33:58.079347 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:33:58.115226 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:33:58.168153 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:33:58.187234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:33:58.207125 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:33:58.228216 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:33:58.253245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:33:58.253596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:33:58.256492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:33:58.276262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:33:58.287413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:33:58.292150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:33:58.292394 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:33:58.308292 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:33:58.312615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:33:58.315314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:33:58.315585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:33:58.315735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:33:58.335148 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:33:58.335381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:33:58.338126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:33:58.338333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:33:58.361587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:33:58.363442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:33:58.380171 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:33:58.382282 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:33:58.382417 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:33:58.382479 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:33:58.384543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:33:58.386717 systemd[1]: Finished ensure-sysext.service. Sep 4 17:33:58.389937 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:33:58.395147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:33:58.397256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:33:58.410424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:33:58.417436 systemd-udevd[1677]: Using default interface naming scheme 'v255'. Sep 4 17:33:58.417688 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:33:58.441098 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:33:58.445785 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:33:58.446983 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:33:58.482364 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:33:58.484462 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:33:58.492489 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:33:58.496546 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:33:58.499198 augenrules[1708]: No rules Sep 4 17:33:58.502760 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 4 17:33:58.514145 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:33:58.524137 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:33:58.689892 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1727) Sep 4 17:33:58.723553 (udev-worker)[1729]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:33:58.725629 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:33:58.744333 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1727) Sep 4 17:33:58.746849 systemd-resolved[1676]: Positive Trust Anchors: Sep 4 17:33:58.747953 systemd-resolved[1676]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:33:58.748018 systemd-resolved[1676]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:33:58.763705 systemd-resolved[1676]: Defaulting to hostname 'linux'. Sep 4 17:33:58.768574 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:33:58.768772 systemd-networkd[1721]: lo: Link UP Sep 4 17:33:58.768777 systemd-networkd[1721]: lo: Gained carrier Sep 4 17:33:58.770075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 4 17:33:58.773426 systemd-networkd[1721]: Enumeration completed Sep 4 17:33:58.773543 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:33:58.775742 systemd[1]: Reached target network.target - Network. Sep 4 17:33:58.776531 systemd-networkd[1721]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:33:58.776540 systemd-networkd[1721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:33:58.786218 systemd-networkd[1721]: eth0: Link UP Sep 4 17:33:58.786475 systemd-networkd[1721]: eth0: Gained carrier Sep 4 17:33:58.786516 systemd-networkd[1721]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:33:58.788198 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:33:58.797959 systemd-networkd[1721]: eth0: DHCPv4 address 172.31.21.246/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 17:33:58.800430 systemd-networkd[1721]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 4 17:33:58.847905 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Sep 4 17:33:58.876929 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 4 17:33:58.879908 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 4 17:33:58.890262 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1727) Sep 4 17:33:58.893885 kernel: ACPI: button: Power Button [PWRF] Sep 4 17:33:58.902961 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 4 17:33:58.917161 kernel: ACPI: button: Sleep Button [SLPF] Sep 4 17:33:58.994381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:33:59.000882 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:33:59.128629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 17:33:59.130727 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:33:59.158423 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:33:59.409115 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:33:59.415609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:33:59.473926 lvm[1836]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:33:59.565721 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:33:59.591662 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:33:59.598702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:33:59.601564 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 4 17:33:59.605321 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:33:59.607088 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:33:59.609803 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:33:59.612132 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:33:59.615979 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:33:59.621046 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:33:59.621429 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:33:59.624989 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:33:59.632394 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:33:59.641687 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:33:59.654225 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:33:59.668349 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:33:59.673629 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:33:59.680914 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:33:59.682434 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:33:59.688419 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:33:59.688673 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:33:59.703015 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:33:59.729084 lvm[1844]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 4 17:33:59.729492 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:33:59.754758 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:33:59.777124 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:33:59.799325 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:33:59.808955 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:33:59.833350 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:33:59.869201 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:33:59.876407 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:33:59.896074 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:33:59.933138 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:33:59.958235 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:33:59.980062 jq[1848]: false Sep 4 17:34:00.055057 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:34:00.057060 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:34:00.058975 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:34:00.109797 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:34:00.116102 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:34:00.155945 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:34:00.158448 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 4 17:34:00.158690 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:34:00.174265 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:34:00.174538 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:34:00.245705 systemd-networkd[1721]: eth0: Gained IPv6LL Sep 4 17:34:00.316002 extend-filesystems[1849]: Found loop4 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found loop5 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found loop6 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found loop7 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found nvme0n1 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found nvme0n1p1 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found nvme0n1p2 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found nvme0n1p3 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found usr Sep 4 17:34:00.316002 extend-filesystems[1849]: Found nvme0n1p4 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found nvme0n1p6 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found nvme0n1p7 Sep 4 17:34:00.316002 extend-filesystems[1849]: Found nvme0n1p9 Sep 4 17:34:00.316002 extend-filesystems[1849]: Checking size of /dev/nvme0n1p9 Sep 4 17:34:00.466847 jq[1864]: true Sep 4 17:34:00.289728 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:17:38 UTC 2024 (1): Starting Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: ---------------------------------------------------- Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: corporation. Support and training for ntp-4 are Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: available at https://www.nwtime.org/support Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: ---------------------------------------------------- Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: proto: precision = 0.073 usec (-24) Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: basedate set to 2024-08-23 Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: gps base set to 2024-08-25 (week 2329) Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: Listen normally on 3 eth0 172.31.21.246:123 Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: Listen normally on 4 lo [::1]:123 Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: Listen normally on 5 eth0 [fe80::470:8aff:fe93:c107%2]:123 Sep 4 17:34:00.468136 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: Listening on routing socket on fd #22 for interface updates Sep 4 17:34:00.266623 ntpd[1851]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:17:38 UTC 2024 (1): Starting Sep 4 17:34:00.553717 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 17:34:00.312758 systemd[1]: Reached target network-online.target - Network is Online. 
Sep 4 17:34:00.557316 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:34:00.557316 ntpd[1851]: 4 Sep 17:34:00 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:34:00.557395 extend-filesystems[1849]: Resized partition /dev/nvme0n1p9 Sep 4 17:34:00.605003 tar[1874]: linux-amd64/helm Sep 4 17:34:00.266654 ntpd[1851]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:34:00.340064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:34:00.605615 extend-filesystems[1905]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:34:00.643700 jq[1880]: true Sep 4 17:34:00.266666 ntpd[1851]: ---------------------------------------------------- Sep 4 17:34:00.370171 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:34:00.646400 update_engine[1857]: I0904 17:34:00.644785 1857 main.cc:92] Flatcar Update Engine starting Sep 4 17:34:00.266676 ntpd[1851]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:34:00.496594 (ntainerd)[1879]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:34:00.266686 ntpd[1851]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:34:00.556451 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:34:00.266697 ntpd[1851]: corporation. Support and training for ntp-4 are Sep 4 17:34:00.556776 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:34:00.266710 ntpd[1851]: available at https://www.nwtime.org/support Sep 4 17:34:00.601670 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 4 17:34:00.266720 ntpd[1851]: ---------------------------------------------------- Sep 4 17:34:00.635117 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:34:00.322749 ntpd[1851]: proto: precision = 0.073 usec (-24) Sep 4 17:34:00.635165 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:34:00.351228 ntpd[1851]: basedate set to 2024-08-23 Sep 4 17:34:00.659581 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:34:00.351261 ntpd[1851]: gps base set to 2024-08-25 (week 2329) Sep 4 17:34:00.659610 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:34:00.406401 ntpd[1851]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:34:00.406471 ntpd[1851]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:34:00.406690 ntpd[1851]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:34:00.406728 ntpd[1851]: Listen normally on 3 eth0 172.31.21.246:123 Sep 4 17:34:00.406781 ntpd[1851]: Listen normally on 4 lo [::1]:123 Sep 4 17:34:00.406823 ntpd[1851]: Listen normally on 5 eth0 [fe80::470:8aff:fe93:c107%2]:123 Sep 4 17:34:00.406877 ntpd[1851]: Listening on routing socket on fd #22 for interface updates Sep 4 17:34:00.516388 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:34:00.516432 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:34:00.584562 dbus-daemon[1847]: [system] SELinux support is enabled Sep 4 17:34:00.672781 dbus-daemon[1847]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1721 
comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:34:00.728289 dbus-daemon[1847]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:34:00.811315 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 17:34:00.876662 update_engine[1857]: I0904 17:34:00.827010 1857 update_check_scheduler.cc:74] Next update check in 7m24s Sep 4 17:34:00.813768 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:34:00.876871 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1724) Sep 4 17:34:00.819439 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:34:00.826946 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 17:34:00.855405 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 17:34:00.876141 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:34:00.945983 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 17:34:01.032923 extend-filesystems[1905]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 17:34:01.032923 extend-filesystems[1905]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:34:01.032923 extend-filesystems[1905]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 17:34:01.025231 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:34:01.055134 extend-filesystems[1849]: Resized filesystem in /dev/nvme0n1p9 Sep 4 17:34:01.025523 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 4 17:34:01.115446 systemd-logind[1856]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 17:34:01.115483 systemd-logind[1856]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 4 17:34:01.115508 systemd-logind[1856]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:34:01.130137 systemd-logind[1856]: New seat seat0. Sep 4 17:34:01.144101 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:34:01.150238 coreos-metadata[1846]: Sep 04 17:34:01.149 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:34:01.156066 coreos-metadata[1846]: Sep 04 17:34:01.152 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 17:34:01.156066 coreos-metadata[1846]: Sep 04 17:34:01.153 INFO Fetch successful Sep 4 17:34:01.156066 coreos-metadata[1846]: Sep 04 17:34:01.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 17:34:01.165621 bash[1942]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:34:01.158304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:34:01.173086 coreos-metadata[1846]: Sep 04 17:34:01.166 INFO Fetch successful Sep 4 17:34:01.173086 coreos-metadata[1846]: Sep 04 17:34:01.168 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 17:34:01.171972 systemd[1]: Starting sshkeys.service... 
Sep 4 17:34:01.177155 coreos-metadata[1846]: Sep 04 17:34:01.175 INFO Fetch successful Sep 4 17:34:01.177155 coreos-metadata[1846]: Sep 04 17:34:01.175 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 17:34:01.180290 coreos-metadata[1846]: Sep 04 17:34:01.177 INFO Fetch successful Sep 4 17:34:01.180290 coreos-metadata[1846]: Sep 04 17:34:01.177 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 17:34:01.187889 coreos-metadata[1846]: Sep 04 17:34:01.182 INFO Fetch failed with 404: resource not found Sep 4 17:34:01.187889 coreos-metadata[1846]: Sep 04 17:34:01.182 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 17:34:01.187889 coreos-metadata[1846]: Sep 04 17:34:01.186 INFO Fetch successful Sep 4 17:34:01.187889 coreos-metadata[1846]: Sep 04 17:34:01.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 17:34:01.191341 coreos-metadata[1846]: Sep 04 17:34:01.191 INFO Fetch successful Sep 4 17:34:01.191341 coreos-metadata[1846]: Sep 04 17:34:01.191 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 17:34:01.196286 coreos-metadata[1846]: Sep 04 17:34:01.196 INFO Fetch successful Sep 4 17:34:01.196286 coreos-metadata[1846]: Sep 04 17:34:01.196 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 17:34:01.197155 coreos-metadata[1846]: Sep 04 17:34:01.197 INFO Fetch successful Sep 4 17:34:01.197155 coreos-metadata[1846]: Sep 04 17:34:01.197 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 17:34:01.197770 coreos-metadata[1846]: Sep 04 17:34:01.197 INFO Fetch successful Sep 4 17:34:01.273953 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Sep 4 17:34:01.283405 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 17:34:01.303514 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:34:01.308702 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:34:01.378612 dbus-daemon[1847]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:34:01.392801 dbus-daemon[1847]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1917 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:34:01.391093 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 17:34:01.409578 amazon-ssm-agent[1921]: Initializing new seelog logger Sep 4 17:34:01.410335 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 17:34:01.426893 amazon-ssm-agent[1921]: New Seelog Logger Creation Complete Sep 4 17:34:01.426893 amazon-ssm-agent[1921]: 2024/09/04 17:34:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:34:01.426893 amazon-ssm-agent[1921]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:34:01.426893 amazon-ssm-agent[1921]: 2024/09/04 17:34:01 processing appconfig overrides Sep 4 17:34:01.443476 amazon-ssm-agent[1921]: 2024/09/04 17:34:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:34:01.470038 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO Proxy environment variables: Sep 4 17:34:01.470038 amazon-ssm-agent[1921]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:34:01.473526 amazon-ssm-agent[1921]: 2024/09/04 17:34:01 processing appconfig overrides Sep 4 17:34:01.494913 amazon-ssm-agent[1921]: 2024/09/04 17:34:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 4 17:34:01.494913 amazon-ssm-agent[1921]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:34:01.494913 amazon-ssm-agent[1921]: 2024/09/04 17:34:01 processing appconfig overrides Sep 4 17:34:01.490354 polkitd[2017]: Started polkitd version 121 Sep 4 17:34:01.516077 amazon-ssm-agent[1921]: 2024/09/04 17:34:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:34:01.516077 amazon-ssm-agent[1921]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:34:01.516077 amazon-ssm-agent[1921]: 2024/09/04 17:34:01 processing appconfig overrides Sep 4 17:34:01.593065 coreos-metadata[1976]: Sep 04 17:34:01.584 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:34:01.593065 coreos-metadata[1976]: Sep 04 17:34:01.586 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 17:34:01.593065 coreos-metadata[1976]: Sep 04 17:34:01.587 INFO Fetch successful Sep 4 17:34:01.593065 coreos-metadata[1976]: Sep 04 17:34:01.587 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 17:34:01.593065 coreos-metadata[1976]: Sep 04 17:34:01.590 INFO Fetch successful Sep 4 17:34:01.594412 unknown[1976]: wrote ssh authorized keys file for user: core Sep 4 17:34:01.595194 polkitd[2017]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:34:01.595320 polkitd[2017]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:34:01.608649 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO https_proxy: Sep 4 17:34:01.610866 polkitd[2017]: Finished loading, compiling and executing 2 rules Sep 4 17:34:01.614788 dbus-daemon[1847]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:34:01.623771 polkitd[2017]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:34:01.625672 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 4 17:34:01.688431 update-ssh-keys[2058]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:34:01.688006 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 17:34:01.694375 systemd[1]: Finished sshkeys.service. Sep 4 17:34:01.715956 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO http_proxy: Sep 4 17:34:01.735395 systemd-hostnamed[1917]: Hostname set to (transient) Sep 4 17:34:01.735524 systemd-resolved[1676]: System hostname changed to 'ip-172-31-21-246'. Sep 4 17:34:01.761247 locksmithd[1922]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:34:01.779013 sshd_keygen[1861]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:34:01.817391 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO no_proxy: Sep 4 17:34:01.867638 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:34:01.883450 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:34:01.914342 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:34:01.915294 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO Checking if agent identity type OnPrem can be assumed Sep 4 17:34:01.915012 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:34:01.950586 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:34:02.013294 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO Checking if agent identity type EC2 can be assumed Sep 4 17:34:02.078123 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:34:02.092349 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:34:02.106990 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:34:02.108528 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 4 17:34:02.111969 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO Agent will take identity from EC2 Sep 4 17:34:02.174035 containerd[1879]: time="2024-09-04T17:34:02.173285468Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:34:02.212051 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:34:02.240427 containerd[1879]: time="2024-09-04T17:34:02.239687227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:34:02.242322 containerd[1879]: time="2024-09-04T17:34:02.242109894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:34:02.242322 containerd[1879]: time="2024-09-04T17:34:02.242173696Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:34:02.242322 containerd[1879]: time="2024-09-04T17:34:02.242200540Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:34:02.242512 containerd[1879]: time="2024-09-04T17:34:02.242409510Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:34:02.242512 containerd[1879]: time="2024-09-04T17:34:02.242434417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:34:02.242605 containerd[1879]: time="2024-09-04T17:34:02.242543230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:34:02.242605 containerd[1879]: time="2024-09-04T17:34:02.242561552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:34:02.243160 containerd[1879]: time="2024-09-04T17:34:02.242814509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:34:02.243160 containerd[1879]: time="2024-09-04T17:34:02.242840834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:34:02.243160 containerd[1879]: time="2024-09-04T17:34:02.242873377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:34:02.243160 containerd[1879]: time="2024-09-04T17:34:02.242887669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:34:02.243160 containerd[1879]: time="2024-09-04T17:34:02.242988912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:34:02.243390 containerd[1879]: time="2024-09-04T17:34:02.243259689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:34:02.243572 containerd[1879]: time="2024-09-04T17:34:02.243447804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:34:02.243572 containerd[1879]: time="2024-09-04T17:34:02.243473034Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:34:02.243661 containerd[1879]: time="2024-09-04T17:34:02.243618601Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:34:02.243698 containerd[1879]: time="2024-09-04T17:34:02.243683372Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO [amazon-ssm-agent] Starting Core Agent
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO [Registrar] Starting registrar module
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:01 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:02 INFO [EC2Identity] EC2 registration was successful.
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:02 INFO [CredentialRefresher] credentialRefresher has started
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:02 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 4 17:34:02.277141 amazon-ssm-agent[1921]: 2024-09-04 17:34:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 4 17:34:02.290981 containerd[1879]: time="2024-09-04T17:34:02.290921981Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:34:02.291549 containerd[1879]: time="2024-09-04T17:34:02.291264838Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:34:02.291549 containerd[1879]: time="2024-09-04T17:34:02.291317964Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:34:02.291549 containerd[1879]: time="2024-09-04T17:34:02.291348757Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:34:02.291549 containerd[1879]: time="2024-09-04T17:34:02.291415107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292003186Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292430107Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292589449Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292611589Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..."
type=io.containerd.sandbox.store.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292631845Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292656492Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292676809Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292695797Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292716954Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292740625Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292761151Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292783145Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292801565Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:34:02.292884 containerd[1879]: time="2024-09-04T17:34:02.292828719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..."
type=io.containerd.grpc.v1
Sep 4 17:34:02.298953 containerd[1879]: time="2024-09-04T17:34:02.292850802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.299775 containerd[1879]: time="2024-09-04T17:34:02.299511253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.306573 containerd[1879]: time="2024-09-04T17:34:02.306504146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.306806 containerd[1879]: time="2024-09-04T17:34:02.306653072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.307258 containerd[1879]: time="2024-09-04T17:34:02.306877593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.307258 containerd[1879]: time="2024-09-04T17:34:02.306904071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.307258 containerd[1879]: time="2024-09-04T17:34:02.306924928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.307607 containerd[1879]: time="2024-09-04T17:34:02.307229734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.307607 containerd[1879]: time="2024-09-04T17:34:02.307482947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.307607 containerd[1879]: time="2024-09-04T17:34:02.307506744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.307607 containerd[1879]: time="2024-09-04T17:34:02.307549002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..."
type=io.containerd.grpc.v1
Sep 4 17:34:02.307607 containerd[1879]: time="2024-09-04T17:34:02.307571733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.308030 containerd[1879]: time="2024-09-04T17:34:02.307820846Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:34:02.308030 containerd[1879]: time="2024-09-04T17:34:02.307890800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.308030 containerd[1879]: time="2024-09-04T17:34:02.307912329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.308030 containerd[1879]: time="2024-09-04T17:34:02.307930350Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:34:02.308467 containerd[1879]: time="2024-09-04T17:34:02.308326643Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:34:02.308467 containerd[1879]: time="2024-09-04T17:34:02.308363854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:34:02.308467 containerd[1879]: time="2024-09-04T17:34:02.308402361Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:34:02.308467 containerd[1879]: time="2024-09-04T17:34:02.308423607Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:34:02.308467 containerd[1879]: time="2024-09-04T17:34:02.308439082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..."
type=io.containerd.grpc.v1
Sep 4 17:34:02.308842 containerd[1879]: time="2024-09-04T17:34:02.308692052Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:34:02.308842 containerd[1879]: time="2024-09-04T17:34:02.308716147Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:34:02.308842 containerd[1879]: time="2024-09-04T17:34:02.308734938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:34:02.309732 containerd[1879]: time="2024-09-04T17:34:02.309515984Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[]
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:34:02.309732 containerd[1879]: time="2024-09-04T17:34:02.309649261Z" level=info msg="Connect containerd service"
Sep 4 17:34:02.309732 containerd[1879]: time="2024-09-04T17:34:02.309698223Z" level=info msg="using legacy CRI server"
Sep 4 17:34:02.323870 containerd[1879]: time="2024-09-04T17:34:02.323278337Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:34:02.323930 amazon-ssm-agent[1921]: 2024-09-04 17:34:02 INFO [CredentialRefresher] Next credential rotation will be in 31.358239693416667 minutes
Sep 4 17:34:02.324613 containerd[1879]: time="2024-09-04T17:34:02.324193738Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:34:02.333459 containerd[1879]: time="2024-09-04T17:34:02.333224445Z" level=error msg="failed to load cni during init, please check CRI plugin
status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:34:02.333886 containerd[1879]: time="2024-09-04T17:34:02.333665935Z" level=info msg="Start subscribing containerd event"
Sep 4 17:34:02.333886 containerd[1879]: time="2024-09-04T17:34:02.333783146Z" level=info msg="Start recovering state"
Sep 4 17:34:02.334760 containerd[1879]: time="2024-09-04T17:34:02.334458290Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:34:02.334760 containerd[1879]: time="2024-09-04T17:34:02.334521122Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:34:02.335353 containerd[1879]: time="2024-09-04T17:34:02.334933346Z" level=info msg="Start event monitor"
Sep 4 17:34:02.335353 containerd[1879]: time="2024-09-04T17:34:02.334973629Z" level=info msg="Start snapshots syncer"
Sep 4 17:34:02.335353 containerd[1879]: time="2024-09-04T17:34:02.334987874Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:34:02.335353 containerd[1879]: time="2024-09-04T17:34:02.334999263Z" level=info msg="Start streaming server"
Sep 4 17:34:02.335230 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:34:02.350449 containerd[1879]: time="2024-09-04T17:34:02.335658553Z" level=info msg="containerd successfully booted in 0.166000s"
Sep 4 17:34:03.348426 tar[1874]: linux-amd64/LICENSE
Sep 4 17:34:03.350530 tar[1874]: linux-amd64/README.md
Sep 4 17:34:03.392820 amazon-ssm-agent[1921]: 2024-09-04 17:34:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 17:34:03.418099 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:34:03.494252 amazon-ssm-agent[1921]: 2024-09-04 17:34:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2090) started
Sep 4 17:34:03.596904 amazon-ssm-agent[1921]: 2024-09-04 17:34:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 17:34:04.222237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:34:04.226711 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:34:04.227497 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:34:04.231630 systemd[1]: Startup finished in 1.287s (kernel) + 12.210s (initrd) + 12.218s (userspace) = 25.716s.
Sep 4 17:34:05.659371 kubelet[2106]: E0904 17:34:05.659299 2106 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:34:05.664306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:34:05.664534 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:34:05.666396 systemd[1]: kubelet.service: Consumed 1.154s CPU time.
Sep 4 17:34:07.608898 systemd-resolved[1676]: Clock change detected. Flushing caches.
Sep 4 17:34:09.364339 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:34:09.374821 systemd[1]: Started sshd@0-172.31.21.246:22-139.178.68.195:53922.service - OpenSSH per-connection server daemon (139.178.68.195:53922).
Sep 4 17:34:09.598351 sshd[2119]: Accepted publickey for core from 139.178.68.195 port 53922 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:34:09.604031 sshd[2119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:34:09.632812 systemd-logind[1856]: New session 1 of user core.
Sep 4 17:34:09.634552 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:34:09.640829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:34:09.683992 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:34:09.700909 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:34:09.734408 (systemd)[2123]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:10.050368 systemd[2123]: Queued start job for default target default.target.
Sep 4 17:34:10.063819 systemd[2123]: Created slice app.slice - User Application Slice.
Sep 4 17:34:10.063862 systemd[2123]: Reached target paths.target - Paths.
Sep 4 17:34:10.063884 systemd[2123]: Reached target timers.target - Timers.
Sep 4 17:34:10.067464 systemd[2123]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:34:10.109200 systemd[2123]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:34:10.109452 systemd[2123]: Reached target sockets.target - Sockets.
Sep 4 17:34:10.109474 systemd[2123]: Reached target basic.target - Basic System.
Sep 4 17:34:10.111223 systemd[2123]: Reached target default.target - Main User Target.
Sep 4 17:34:10.111304 systemd[2123]: Startup finished in 351ms.
Sep 4 17:34:10.111393 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:34:10.126851 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:34:10.294771 systemd[1]: Started sshd@1-172.31.21.246:22-139.178.68.195:53934.service - OpenSSH per-connection server daemon (139.178.68.195:53934).
Sep 4 17:34:10.518562 sshd[2134]: Accepted publickey for core from 139.178.68.195 port 53934 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:34:10.520702 sshd[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:34:10.531746 systemd-logind[1856]: New session 2 of user core.
Sep 4 17:34:10.542925 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:34:10.690988 sshd[2134]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:10.697151 systemd[1]: sshd@1-172.31.21.246:22-139.178.68.195:53934.service: Deactivated successfully.
Sep 4 17:34:10.702517 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:34:10.705038 systemd-logind[1856]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:34:10.707659 systemd-logind[1856]: Removed session 2.
Sep 4 17:34:10.734791 systemd[1]: Started sshd@2-172.31.21.246:22-139.178.68.195:53946.service - OpenSSH per-connection server daemon (139.178.68.195:53946).
Sep 4 17:34:10.913142 sshd[2141]: Accepted publickey for core from 139.178.68.195 port 53946 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:34:10.915342 sshd[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:34:10.934286 systemd-logind[1856]: New session 3 of user core.
Sep 4 17:34:10.947615 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:34:11.080078 sshd[2141]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:11.087528 systemd[1]: sshd@2-172.31.21.246:22-139.178.68.195:53946.service: Deactivated successfully.
Sep 4 17:34:11.092590 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:34:11.100423 systemd-logind[1856]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:34:11.120467 systemd-logind[1856]: Removed session 3.
Sep 4 17:34:11.135820 systemd[1]: Started sshd@3-172.31.21.246:22-139.178.68.195:53950.service - OpenSSH per-connection server daemon (139.178.68.195:53950).
Sep 4 17:34:11.357054 sshd[2148]: Accepted publickey for core from 139.178.68.195 port 53950 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:34:11.359087 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:34:11.379652 systemd-logind[1856]: New session 4 of user core.
Sep 4 17:34:11.391588 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:34:11.534235 sshd[2148]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:11.540608 systemd[1]: sshd@3-172.31.21.246:22-139.178.68.195:53950.service: Deactivated successfully.
Sep 4 17:34:11.545738 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:34:11.548368 systemd-logind[1856]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:34:11.551889 systemd-logind[1856]: Removed session 4.
Sep 4 17:34:11.580930 systemd[1]: Started sshd@4-172.31.21.246:22-139.178.68.195:53960.service - OpenSSH per-connection server daemon (139.178.68.195:53960).
Sep 4 17:34:11.770475 sshd[2155]: Accepted publickey for core from 139.178.68.195 port 53960 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:34:11.772473 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:34:11.784363 systemd-logind[1856]: New session 5 of user core.
Sep 4 17:34:11.788652 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:34:11.937999 sudo[2158]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:34:11.938440 sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:34:11.962945 sudo[2158]: pam_unix(sudo:session): session closed for user root
Sep 4 17:34:11.987949 sshd[2155]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:11.996777 systemd[1]: sshd@4-172.31.21.246:22-139.178.68.195:53960.service: Deactivated successfully.
Sep 4 17:34:12.010713 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:34:12.012406 systemd-logind[1856]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:34:12.040982 systemd[1]: Started sshd@5-172.31.21.246:22-139.178.68.195:53974.service - OpenSSH per-connection server daemon (139.178.68.195:53974).
Sep 4 17:34:12.049843 systemd-logind[1856]: Removed session 5.
Sep 4 17:34:12.235949 sshd[2163]: Accepted publickey for core from 139.178.68.195 port 53974 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:34:12.238442 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:34:12.248518 systemd-logind[1856]: New session 6 of user core.
Sep 4 17:34:12.260635 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:34:12.368281 sudo[2167]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:34:12.368731 sudo[2167]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:34:12.378640 sudo[2167]: pam_unix(sudo:session): session closed for user root
Sep 4 17:34:12.401308 sudo[2166]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:34:12.403975 sudo[2166]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:34:12.473753 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:34:12.500532 auditctl[2170]: No rules
Sep 4 17:34:12.501105 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:34:12.501522 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:34:12.509016 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:34:12.553889 augenrules[2188]: No rules
Sep 4 17:34:12.556275 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:34:12.558306 sudo[2166]: pam_unix(sudo:session): session closed for user root
Sep 4 17:34:12.581992 sshd[2163]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:12.588442 systemd[1]: sshd@5-172.31.21.246:22-139.178.68.195:53974.service: Deactivated successfully.
Sep 4 17:34:12.597082 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:34:12.598725 systemd-logind[1856]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:34:12.600774 systemd-logind[1856]: Removed session 6.
Sep 4 17:34:12.620885 systemd[1]: Started sshd@6-172.31.21.246:22-139.178.68.195:53978.service - OpenSSH per-connection server daemon (139.178.68.195:53978).
Sep 4 17:34:12.801941 sshd[2196]: Accepted publickey for core from 139.178.68.195 port 53978 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:34:12.803912 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:34:12.824201 systemd-logind[1856]: New session 7 of user core.
Sep 4 17:34:12.832629 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:34:12.959777 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:34:12.960614 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:34:13.498034 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:34:13.519560 (dockerd)[2208]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:34:14.427268 dockerd[2208]: time="2024-09-04T17:34:14.427195924Z" level=info msg="Starting up"
Sep 4 17:34:14.799153 dockerd[2208]: time="2024-09-04T17:34:14.798833875Z" level=info msg="Loading containers: start."
Sep 4 17:34:15.242365 kernel: Initializing XFRM netlink socket
Sep 4 17:34:15.383891 (udev-worker)[2230]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:34:15.549505 systemd-networkd[1721]: docker0: Link UP
Sep 4 17:34:15.610081 dockerd[2208]: time="2024-09-04T17:34:15.607079205Z" level=info msg="Loading containers: done."
Sep 4 17:34:15.661555 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck859898928-merged.mount: Deactivated successfully.
Sep 4 17:34:15.689053 dockerd[2208]: time="2024-09-04T17:34:15.688987734Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:34:15.689291 dockerd[2208]: time="2024-09-04T17:34:15.689140924Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 4 17:34:15.689378 dockerd[2208]: time="2024-09-04T17:34:15.689293681Z" level=info msg="Daemon has completed initialization"
Sep 4 17:34:15.749852 dockerd[2208]: time="2024-09-04T17:34:15.749379447Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:34:15.750147 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:34:16.228248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:34:16.241220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:34:17.336096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:34:17.350180 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:34:17.504184 containerd[1879]: time="2024-09-04T17:34:17.504128558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\""
Sep 4 17:34:17.540262 kubelet[2356]: E0904 17:34:17.540191 2356 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:34:17.550441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:34:17.550589 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:34:18.248823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365326291.mount: Deactivated successfully.
Sep 4 17:34:21.910881 containerd[1879]: time="2024-09-04T17:34:21.909573763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:34:21.915255 containerd[1879]: time="2024-09-04T17:34:21.915202909Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=32772416"
Sep 4 17:34:21.919374 containerd[1879]: time="2024-09-04T17:34:21.917049569Z" level=info msg="ImageCreate event name:\"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:34:21.933493 containerd[1879]: time="2024-09-04T17:34:21.933412365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:34:21.938338 containerd[1879]: time="2024-09-04T17:34:21.938257554Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"32769216\" in 4.434071412s"
Sep 4 17:34:21.939486 containerd[1879]: time="2024-09-04T17:34:21.938347699Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\""
Sep 4 17:34:21.971581 containerd[1879]: time="2024-09-04T17:34:21.971531399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\""
Sep 4 17:34:25.918644 containerd[1879]: time="2024-09-04T17:34:25.918577746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\"
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:34:25.920461 containerd[1879]: time="2024-09-04T17:34:25.920402889Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=29594065"
Sep 4 17:34:25.929108 containerd[1879]: time="2024-09-04T17:34:25.922050486Z" level=info msg="ImageCreate event name:\"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:34:25.942664 containerd[1879]: time="2024-09-04T17:34:25.942570300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:34:25.950431 containerd[1879]: time="2024-09-04T17:34:25.950369406Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"31144011\" in 3.978793106s"
Sep 4 17:34:25.950431 containerd[1879]: time="2024-09-04T17:34:25.950418870Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\""
Sep 4 17:34:25.987724 containerd[1879]: time="2024-09-04T17:34:25.987673304Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\""
Sep 4 17:34:27.726540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:34:27.736683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:34:28.664952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:34:28.683283 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:34:28.820893 kubelet[2448]: E0904 17:34:28.820508 2448 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:34:28.828225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:34:28.828510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:34:29.390651 containerd[1879]: time="2024-09-04T17:34:29.390586357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:29.403455 containerd[1879]: time="2024-09-04T17:34:29.403373766Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=17780233" Sep 4 17:34:29.416706 containerd[1879]: time="2024-09-04T17:34:29.416621943Z" level=info msg="ImageCreate event name:\"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:29.422063 containerd[1879]: time="2024-09-04T17:34:29.421964226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:29.424547 containerd[1879]: time="2024-09-04T17:34:29.423767881Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"19330197\" in 3.436041903s" Sep 4 17:34:29.424547 containerd[1879]: time="2024-09-04T17:34:29.423822450Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\"" Sep 4 17:34:29.466710 containerd[1879]: time="2024-09-04T17:34:29.466657776Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\"" Sep 4 17:34:30.854487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103313412.mount: Deactivated successfully. Sep 4 17:34:31.475736 containerd[1879]: time="2024-09-04T17:34:31.475666944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:31.478194 containerd[1879]: time="2024-09-04T17:34:31.477698539Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=29037161" Sep 4 17:34:31.482362 containerd[1879]: time="2024-09-04T17:34:31.481649661Z" level=info msg="ImageCreate event name:\"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:31.487579 containerd[1879]: time="2024-09-04T17:34:31.487437713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:31.491096 containerd[1879]: time="2024-09-04T17:34:31.490894903Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", 
size \"29036180\" in 2.02418007s" Sep 4 17:34:31.491096 containerd[1879]: time="2024-09-04T17:34:31.490958294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\"" Sep 4 17:34:31.547743 containerd[1879]: time="2024-09-04T17:34:31.547703910Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:34:32.115719 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 17:34:32.168513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3976382591.mount: Deactivated successfully. Sep 4 17:34:33.839096 containerd[1879]: time="2024-09-04T17:34:33.839035431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:33.843353 containerd[1879]: time="2024-09-04T17:34:33.842749577Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 4 17:34:33.846931 containerd[1879]: time="2024-09-04T17:34:33.846791692Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:33.855519 containerd[1879]: time="2024-09-04T17:34:33.855457753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:33.859400 containerd[1879]: time="2024-09-04T17:34:33.857890849Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.310135306s" Sep 4 17:34:33.859400 containerd[1879]: time="2024-09-04T17:34:33.857949302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:34:33.907481 containerd[1879]: time="2024-09-04T17:34:33.907435356Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:34:34.496252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212070433.mount: Deactivated successfully. Sep 4 17:34:34.505753 containerd[1879]: time="2024-09-04T17:34:34.505686253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:34.507640 containerd[1879]: time="2024-09-04T17:34:34.507407542Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 17:34:34.509369 containerd[1879]: time="2024-09-04T17:34:34.509308599Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:34.515752 containerd[1879]: time="2024-09-04T17:34:34.515361825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:34.516580 containerd[1879]: time="2024-09-04T17:34:34.516536173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 609.048289ms" Sep 4 
17:34:34.516693 containerd[1879]: time="2024-09-04T17:34:34.516585314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:34:34.556579 containerd[1879]: time="2024-09-04T17:34:34.556527442Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Sep 4 17:34:35.244258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194629553.mount: Deactivated successfully. Sep 4 17:34:38.977156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:34:38.985121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:34:40.192343 containerd[1879]: time="2024-09-04T17:34:40.192235386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:40.198158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:34:40.202334 containerd[1879]: time="2024-09-04T17:34:40.201358474Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Sep 4 17:34:40.211916 containerd[1879]: time="2024-09-04T17:34:40.211849517Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:40.220339 (kubelet)[2590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:34:40.222687 containerd[1879]: time="2024-09-04T17:34:40.222609886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:40.226709 containerd[1879]: time="2024-09-04T17:34:40.225893064Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.669319349s" Sep 4 17:34:40.226709 containerd[1879]: time="2024-09-04T17:34:40.226670342Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Sep 4 17:34:40.523991 kubelet[2590]: E0904 17:34:40.523937 2590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:34:40.527370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:34:40.527572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:34:44.922108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:34:44.929743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:34:44.983375 systemd[1]: Reloading requested from client PID 2659 ('systemctl') (unit session-7.scope)... Sep 4 17:34:44.983766 systemd[1]: Reloading... Sep 4 17:34:45.157537 zram_generator::config[2694]: No configuration found. Sep 4 17:34:45.403236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:34:45.561199 systemd[1]: Reloading finished in 576 ms. 
Sep 4 17:34:45.654379 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:34:45.654697 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:34:45.656425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:34:45.664008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:34:46.308925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:34:46.312667 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:34:46.391451 kubelet[2756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:34:46.391451 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:34:46.391451 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:34:46.429708 kubelet[2756]: I0904 17:34:46.391511 2756 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:34:46.859337 update_engine[1857]: I0904 17:34:46.858120 1857 update_attempter.cc:509] Updating boot flags... 
Sep 4 17:34:46.954368 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2777) Sep 4 17:34:47.368683 kubelet[2756]: I0904 17:34:47.368637 2756 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:34:47.368683 kubelet[2756]: I0904 17:34:47.368671 2756 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:34:47.368971 kubelet[2756]: I0904 17:34:47.368950 2756 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:34:47.396161 kubelet[2756]: I0904 17:34:47.396122 2756 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:34:47.397957 kubelet[2756]: E0904 17:34:47.397925 2756 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:47.411330 kubelet[2756]: I0904 17:34:47.411284 2756 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:34:47.415245 kubelet[2756]: I0904 17:34:47.415189 2756 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:34:47.415508 kubelet[2756]: I0904 17:34:47.415242 2756 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-246","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:34:47.416247 kubelet[2756]: I0904 17:34:47.416222 2756 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 
17:34:47.416247 kubelet[2756]: I0904 17:34:47.416248 2756 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:34:47.416449 kubelet[2756]: I0904 17:34:47.416430 2756 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:34:47.417996 kubelet[2756]: W0904 17:34:47.417944 2756 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-246&limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:47.418085 kubelet[2756]: E0904 17:34:47.418006 2756 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-246&limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:47.419628 kubelet[2756]: I0904 17:34:47.419604 2756 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:34:47.419704 kubelet[2756]: I0904 17:34:47.419638 2756 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:34:47.419704 kubelet[2756]: I0904 17:34:47.419684 2756 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:34:47.419704 kubelet[2756]: I0904 17:34:47.419703 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:34:47.426707 kubelet[2756]: W0904 17:34:47.425607 2756 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.246:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:47.426707 kubelet[2756]: E0904 17:34:47.425689 2756 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.246:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.21.246:6443: connect: connection refused Sep 4 17:34:47.428601 kubelet[2756]: I0904 17:34:47.428568 2756 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:34:47.430549 kubelet[2756]: I0904 17:34:47.430525 2756 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:34:47.430758 kubelet[2756]: W0904 17:34:47.430746 2756 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:34:47.433016 kubelet[2756]: I0904 17:34:47.432992 2756 server.go:1264] "Started kubelet" Sep 4 17:34:47.439774 kubelet[2756]: I0904 17:34:47.439692 2756 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:34:47.450535 kubelet[2756]: I0904 17:34:47.450159 2756 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:34:47.454872 kubelet[2756]: I0904 17:34:47.454800 2756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:34:47.456337 kubelet[2756]: I0904 17:34:47.455938 2756 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:34:47.464591 kubelet[2756]: I0904 17:34:47.458254 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:34:47.470297 kubelet[2756]: I0904 17:34:47.470255 2756 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:34:47.477336 kubelet[2756]: E0904 17:34:47.464595 2756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.246:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.246:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-246.17f21afe7593bf3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-246,UID:ip-172-31-21-246,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-246,},FirstTimestamp:2024-09-04 17:34:47.432953662 +0000 UTC m=+1.112923730,LastTimestamp:2024-09-04 17:34:47.432953662 +0000 UTC m=+1.112923730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-246,}" Sep 4 17:34:47.481004 kubelet[2756]: I0904 17:34:47.479261 2756 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:34:47.481004 kubelet[2756]: I0904 17:34:47.479368 2756 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:34:47.481004 kubelet[2756]: W0904 17:34:47.480217 2756 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:47.481004 kubelet[2756]: E0904 17:34:47.480290 2756 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:47.481004 kubelet[2756]: E0904 17:34:47.480394 2756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-246?timeout=10s\": dial tcp 172.31.21.246:6443: connect: connection refused" interval="200ms" Sep 4 17:34:47.481981 kubelet[2756]: I0904 17:34:47.481944 2756 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:34:47.482078 kubelet[2756]: I0904 17:34:47.482063 2756 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:34:47.488411 kubelet[2756]: I0904 17:34:47.488100 2756 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:34:47.497748 kubelet[2756]: I0904 17:34:47.497569 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:34:47.500247 kubelet[2756]: I0904 17:34:47.500189 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:34:47.500461 kubelet[2756]: I0904 17:34:47.500391 2756 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:34:47.500461 kubelet[2756]: I0904 17:34:47.500420 2756 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:34:47.500551 kubelet[2756]: E0904 17:34:47.500473 2756 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:34:47.507456 kubelet[2756]: E0904 17:34:47.507403 2756 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:34:47.507976 kubelet[2756]: W0904 17:34:47.507668 2756 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:47.507976 kubelet[2756]: E0904 17:34:47.507728 2756 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:47.519553 kubelet[2756]: I0904 17:34:47.519530 2756 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:34:47.519968 kubelet[2756]: I0904 17:34:47.519739 2756 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:34:47.519968 kubelet[2756]: I0904 17:34:47.519766 2756 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:34:47.526083 kubelet[2756]: I0904 17:34:47.526055 2756 policy_none.go:49] "None policy: Start" Sep 4 17:34:47.528419 kubelet[2756]: I0904 17:34:47.528381 2756 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:34:47.528419 kubelet[2756]: I0904 17:34:47.528426 2756 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:34:47.542490 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:34:47.553906 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:34:47.557645 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 17:34:47.567498 kubelet[2756]: I0904 17:34:47.567462 2756 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:34:47.568117 kubelet[2756]: I0904 17:34:47.567727 2756 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:34:47.568117 kubelet[2756]: I0904 17:34:47.567873 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:34:47.572019 kubelet[2756]: E0904 17:34:47.571980 2756 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-246\" not found" Sep 4 17:34:47.577866 kubelet[2756]: I0904 17:34:47.577833 2756 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-246" Sep 4 17:34:47.578239 kubelet[2756]: E0904 17:34:47.578207 2756 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.246:6443/api/v1/nodes\": dial tcp 172.31.21.246:6443: connect: connection refused" node="ip-172-31-21-246" Sep 4 17:34:47.600742 kubelet[2756]: I0904 17:34:47.600661 2756 topology_manager.go:215] "Topology Admit Handler" podUID="41de4750edcc953d52b631746d5c4035" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-246" Sep 4 17:34:47.602707 kubelet[2756]: I0904 17:34:47.602395 2756 topology_manager.go:215] "Topology Admit Handler" podUID="0b42d74bbc2b34ce838b953830ea9344" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-246" Sep 4 17:34:47.604010 kubelet[2756]: I0904 17:34:47.603980 2756 topology_manager.go:215] "Topology Admit Handler" podUID="3472507ac09e1673115c1422df081d02" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-246" Sep 4 17:34:47.612384 systemd[1]: Created slice kubepods-burstable-pod41de4750edcc953d52b631746d5c4035.slice - libcontainer container kubepods-burstable-pod41de4750edcc953d52b631746d5c4035.slice. 
Sep 4 17:34:47.636080 systemd[1]: Created slice kubepods-burstable-pod0b42d74bbc2b34ce838b953830ea9344.slice - libcontainer container kubepods-burstable-pod0b42d74bbc2b34ce838b953830ea9344.slice. Sep 4 17:34:47.643298 systemd[1]: Created slice kubepods-burstable-pod3472507ac09e1673115c1422df081d02.slice - libcontainer container kubepods-burstable-pod3472507ac09e1673115c1422df081d02.slice. Sep 4 17:34:47.681365 kubelet[2756]: I0904 17:34:47.681015 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246" Sep 4 17:34:47.681365 kubelet[2756]: I0904 17:34:47.681066 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246" Sep 4 17:34:47.681365 kubelet[2756]: I0904 17:34:47.681091 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246" Sep 4 17:34:47.681365 kubelet[2756]: I0904 17:34:47.681114 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41de4750edcc953d52b631746d5c4035-ca-certs\") pod \"kube-apiserver-ip-172-31-21-246\" (UID: 
\"41de4750edcc953d52b631746d5c4035\") " pod="kube-system/kube-apiserver-ip-172-31-21-246" Sep 4 17:34:47.681365 kubelet[2756]: I0904 17:34:47.681135 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41de4750edcc953d52b631746d5c4035-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-246\" (UID: \"41de4750edcc953d52b631746d5c4035\") " pod="kube-system/kube-apiserver-ip-172-31-21-246" Sep 4 17:34:47.681652 kubelet[2756]: I0904 17:34:47.681157 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41de4750edcc953d52b631746d5c4035-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-246\" (UID: \"41de4750edcc953d52b631746d5c4035\") " pod="kube-system/kube-apiserver-ip-172-31-21-246" Sep 4 17:34:47.681652 kubelet[2756]: I0904 17:34:47.681183 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246" Sep 4 17:34:47.681652 kubelet[2756]: I0904 17:34:47.681224 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246" Sep 4 17:34:47.681652 kubelet[2756]: I0904 17:34:47.681258 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3472507ac09e1673115c1422df081d02-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-21-246\" (UID: \"3472507ac09e1673115c1422df081d02\") " pod="kube-system/kube-scheduler-ip-172-31-21-246" Sep 4 17:34:47.681652 kubelet[2756]: E0904 17:34:47.681273 2756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-246?timeout=10s\": dial tcp 172.31.21.246:6443: connect: connection refused" interval="400ms" Sep 4 17:34:47.780705 kubelet[2756]: I0904 17:34:47.780673 2756 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-246" Sep 4 17:34:47.781172 kubelet[2756]: E0904 17:34:47.781129 2756 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.246:6443/api/v1/nodes\": dial tcp 172.31.21.246:6443: connect: connection refused" node="ip-172-31-21-246" Sep 4 17:34:47.933101 containerd[1879]: time="2024-09-04T17:34:47.932959273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-246,Uid:41de4750edcc953d52b631746d5c4035,Namespace:kube-system,Attempt:0,}" Sep 4 17:34:47.948032 containerd[1879]: time="2024-09-04T17:34:47.947740792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-246,Uid:0b42d74bbc2b34ce838b953830ea9344,Namespace:kube-system,Attempt:0,}" Sep 4 17:34:47.956367 containerd[1879]: time="2024-09-04T17:34:47.947767594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-246,Uid:3472507ac09e1673115c1422df081d02,Namespace:kube-system,Attempt:0,}" Sep 4 17:34:48.082261 kubelet[2756]: E0904 17:34:48.082134 2756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-246?timeout=10s\": dial tcp 172.31.21.246:6443: connect: connection refused" interval="800ms" Sep 4 17:34:48.183677 kubelet[2756]: I0904 
17:34:48.183246 2756 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-246" Sep 4 17:34:48.183677 kubelet[2756]: E0904 17:34:48.183665 2756 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.246:6443/api/v1/nodes\": dial tcp 172.31.21.246:6443: connect: connection refused" node="ip-172-31-21-246" Sep 4 17:34:48.248968 kubelet[2756]: W0904 17:34:48.248923 2756 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.246:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:48.248968 kubelet[2756]: E0904 17:34:48.248968 2756 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.246:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:48.409816 kubelet[2756]: W0904 17:34:48.409770 2756 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:48.410241 kubelet[2756]: E0904 17:34:48.409852 2756 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:48.477832 kubelet[2756]: W0904 17:34:48.477767 2756 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-246&limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:48.478004 
kubelet[2756]: E0904 17:34:48.477844 2756 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-246&limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:48.489230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048590195.mount: Deactivated successfully. Sep 4 17:34:48.514940 containerd[1879]: time="2024-09-04T17:34:48.514878337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:34:48.517273 containerd[1879]: time="2024-09-04T17:34:48.516456448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:34:48.521816 containerd[1879]: time="2024-09-04T17:34:48.521754802Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:34:48.527428 containerd[1879]: time="2024-09-04T17:34:48.527208997Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:34:48.546877 containerd[1879]: time="2024-09-04T17:34:48.546277357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:34:48.548995 containerd[1879]: time="2024-09-04T17:34:48.548779884Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:34:48.553437 containerd[1879]: time="2024-09-04T17:34:48.553212863Z" 
level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:34:48.553437 containerd[1879]: time="2024-09-04T17:34:48.553376123Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:34:48.562349 containerd[1879]: time="2024-09-04T17:34:48.561933552Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.345502ms" Sep 4 17:34:48.564090 containerd[1879]: time="2024-09-04T17:34:48.563875595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.81674ms" Sep 4 17:34:48.564090 containerd[1879]: time="2024-09-04T17:34:48.564030017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.186038ms" Sep 4 17:34:48.567583 kubelet[2756]: W0904 17:34:48.567541 2756 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:48.567583 
kubelet[2756]: E0904 17:34:48.567588 2756 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:48.795151 containerd[1879]: time="2024-09-04T17:34:48.794694021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:34:48.795151 containerd[1879]: time="2024-09-04T17:34:48.794773949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:34:48.795151 containerd[1879]: time="2024-09-04T17:34:48.794795341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:48.795151 containerd[1879]: time="2024-09-04T17:34:48.794910081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:48.797102 containerd[1879]: time="2024-09-04T17:34:48.795704963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:34:48.797102 containerd[1879]: time="2024-09-04T17:34:48.795796344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:34:48.797102 containerd[1879]: time="2024-09-04T17:34:48.795815722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:48.797102 containerd[1879]: time="2024-09-04T17:34:48.795938111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:48.800579 containerd[1879]: time="2024-09-04T17:34:48.798478699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:34:48.800579 containerd[1879]: time="2024-09-04T17:34:48.798570981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:34:48.800579 containerd[1879]: time="2024-09-04T17:34:48.798590839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:48.800579 containerd[1879]: time="2024-09-04T17:34:48.798711601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:48.837571 systemd[1]: Started cri-containerd-b02eec641a563b016a97b107e24ed5810809cc9587b4b74e137bb9267f88c097.scope - libcontainer container b02eec641a563b016a97b107e24ed5810809cc9587b4b74e137bb9267f88c097. Sep 4 17:34:48.859575 systemd[1]: Started cri-containerd-2c7350915685b983b03d17ea456252f2a379b44b2e830757dcd33d11952661c3.scope - libcontainer container 2c7350915685b983b03d17ea456252f2a379b44b2e830757dcd33d11952661c3. Sep 4 17:34:48.883026 systemd[1]: Started cri-containerd-8784e52476935f54b1cdc091f71a3f43ef6e2b3d538b6cd8f9d766977b1c0252.scope - libcontainer container 8784e52476935f54b1cdc091f71a3f43ef6e2b3d538b6cd8f9d766977b1c0252. 
Sep 4 17:34:48.886552 kubelet[2756]: E0904 17:34:48.886487 2756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-246?timeout=10s\": dial tcp 172.31.21.246:6443: connect: connection refused" interval="1.6s" Sep 4 17:34:48.972813 containerd[1879]: time="2024-09-04T17:34:48.972296177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-246,Uid:41de4750edcc953d52b631746d5c4035,Namespace:kube-system,Attempt:0,} returns sandbox id \"b02eec641a563b016a97b107e24ed5810809cc9587b4b74e137bb9267f88c097\"" Sep 4 17:34:48.974662 containerd[1879]: time="2024-09-04T17:34:48.974100765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-246,Uid:3472507ac09e1673115c1422df081d02,Namespace:kube-system,Attempt:0,} returns sandbox id \"8784e52476935f54b1cdc091f71a3f43ef6e2b3d538b6cd8f9d766977b1c0252\"" Sep 4 17:34:48.995337 kubelet[2756]: I0904 17:34:48.993625 2756 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-246" Sep 4 17:34:48.995337 kubelet[2756]: E0904 17:34:48.993988 2756 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.246:6443/api/v1/nodes\": dial tcp 172.31.21.246:6443: connect: connection refused" node="ip-172-31-21-246" Sep 4 17:34:48.996341 containerd[1879]: time="2024-09-04T17:34:48.996276125Z" level=info msg="CreateContainer within sandbox \"8784e52476935f54b1cdc091f71a3f43ef6e2b3d538b6cd8f9d766977b1c0252\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:34:48.996606 containerd[1879]: time="2024-09-04T17:34:48.996576477Z" level=info msg="CreateContainer within sandbox \"b02eec641a563b016a97b107e24ed5810809cc9587b4b74e137bb9267f88c097\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:34:49.005175 containerd[1879]: 
time="2024-09-04T17:34:49.005132537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-246,Uid:0b42d74bbc2b34ce838b953830ea9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c7350915685b983b03d17ea456252f2a379b44b2e830757dcd33d11952661c3\"" Sep 4 17:34:49.014180 containerd[1879]: time="2024-09-04T17:34:49.014133646Z" level=info msg="CreateContainer within sandbox \"2c7350915685b983b03d17ea456252f2a379b44b2e830757dcd33d11952661c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:34:49.051801 containerd[1879]: time="2024-09-04T17:34:49.050731251Z" level=info msg="CreateContainer within sandbox \"b02eec641a563b016a97b107e24ed5810809cc9587b4b74e137bb9267f88c097\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"718206944e67a07161c7ee5e1118617da7a01416ed66cdae2e977a3ea7dcd334\"" Sep 4 17:34:49.052745 containerd[1879]: time="2024-09-04T17:34:49.052696099Z" level=info msg="CreateContainer within sandbox \"8784e52476935f54b1cdc091f71a3f43ef6e2b3d538b6cd8f9d766977b1c0252\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5\"" Sep 4 17:34:49.054773 containerd[1879]: time="2024-09-04T17:34:49.053360388Z" level=info msg="StartContainer for \"c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5\"" Sep 4 17:34:49.060075 containerd[1879]: time="2024-09-04T17:34:49.060016065Z" level=info msg="CreateContainer within sandbox \"2c7350915685b983b03d17ea456252f2a379b44b2e830757dcd33d11952661c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90\"" Sep 4 17:34:49.060827 containerd[1879]: time="2024-09-04T17:34:49.060792595Z" level=info msg="StartContainer for \"718206944e67a07161c7ee5e1118617da7a01416ed66cdae2e977a3ea7dcd334\"" Sep 4 17:34:49.065757 
containerd[1879]: time="2024-09-04T17:34:49.065709562Z" level=info msg="StartContainer for \"115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90\"" Sep 4 17:34:49.104572 systemd[1]: Started cri-containerd-c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5.scope - libcontainer container c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5. Sep 4 17:34:49.120659 systemd[1]: Started cri-containerd-718206944e67a07161c7ee5e1118617da7a01416ed66cdae2e977a3ea7dcd334.scope - libcontainer container 718206944e67a07161c7ee5e1118617da7a01416ed66cdae2e977a3ea7dcd334. Sep 4 17:34:49.149690 systemd[1]: Started cri-containerd-115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90.scope - libcontainer container 115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90. Sep 4 17:34:49.231914 containerd[1879]: time="2024-09-04T17:34:49.231852826Z" level=info msg="StartContainer for \"c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5\" returns successfully" Sep 4 17:34:49.234234 containerd[1879]: time="2024-09-04T17:34:49.232010322Z" level=info msg="StartContainer for \"718206944e67a07161c7ee5e1118617da7a01416ed66cdae2e977a3ea7dcd334\" returns successfully" Sep 4 17:34:49.266606 containerd[1879]: time="2024-09-04T17:34:49.265636449Z" level=info msg="StartContainer for \"115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90\" returns successfully" Sep 4 17:34:49.568925 kubelet[2756]: E0904 17:34:49.568888 2756 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.246:6443: connect: connection refused Sep 4 17:34:50.596607 kubelet[2756]: I0904 17:34:50.596569 2756 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-246" Sep 4 17:34:52.766291 
kubelet[2756]: E0904 17:34:52.766232 2756 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-246\" not found" node="ip-172-31-21-246" Sep 4 17:34:52.905342 kubelet[2756]: I0904 17:34:52.905261 2756 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-246" Sep 4 17:34:53.426286 kubelet[2756]: I0904 17:34:53.426243 2756 apiserver.go:52] "Watching apiserver" Sep 4 17:34:53.479626 kubelet[2756]: I0904 17:34:53.479567 2756 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:34:54.834144 systemd[1]: Reloading requested from client PID 3125 ('systemctl') (unit session-7.scope)... Sep 4 17:34:54.834169 systemd[1]: Reloading... Sep 4 17:34:55.005939 zram_generator::config[3160]: No configuration found. Sep 4 17:34:55.299072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:34:55.461619 systemd[1]: Reloading finished in 624 ms. Sep 4 17:34:55.527427 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:34:55.529457 kubelet[2756]: I0904 17:34:55.527442 2756 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:34:55.542842 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:34:55.543089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:34:55.543170 systemd[1]: kubelet.service: Consumed 1.224s CPU time, 114.1M memory peak, 0B memory swap peak. Sep 4 17:34:55.549836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:34:55.925642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:34:55.930367 (kubelet)[3220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:34:56.059354 kubelet[3220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:34:56.059354 kubelet[3220]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:34:56.059354 kubelet[3220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:34:56.060118 kubelet[3220]: I0904 17:34:56.059543 3220 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:34:56.067350 kubelet[3220]: I0904 17:34:56.067287 3220 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:34:56.067566 kubelet[3220]: I0904 17:34:56.067460 3220 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:34:56.075959 kubelet[3220]: I0904 17:34:56.067886 3220 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:34:56.078636 kubelet[3220]: I0904 17:34:56.078591 3220 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:34:56.086149 kubelet[3220]: I0904 17:34:56.085394 3220 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:34:56.117028 kubelet[3220]: I0904 17:34:56.108756 3220 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:34:56.117028 kubelet[3220]: I0904 17:34:56.115137 3220 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:34:56.117028 kubelet[3220]: I0904 17:34:56.115221 3220 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-246","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:34:56.117028 kubelet[3220]: I0904 17:34:56.116534 3220 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 
17:34:56.118781 kubelet[3220]: I0904 17:34:56.116570 3220 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:34:56.118781 kubelet[3220]: I0904 17:34:56.116649 3220 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:34:56.118781 kubelet[3220]: I0904 17:34:56.116821 3220 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:34:56.118781 kubelet[3220]: I0904 17:34:56.116838 3220 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:34:56.118781 kubelet[3220]: I0904 17:34:56.116871 3220 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:34:56.118781 kubelet[3220]: I0904 17:34:56.116894 3220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:34:56.138461 kubelet[3220]: I0904 17:34:56.138424 3220 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:34:56.142123 kubelet[3220]: I0904 17:34:56.139153 3220 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:34:56.145080 kubelet[3220]: I0904 17:34:56.144948 3220 server.go:1264] "Started kubelet" Sep 4 17:34:56.179563 kubelet[3220]: I0904 17:34:56.177456 3220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:34:56.185636 kubelet[3220]: I0904 17:34:56.185565 3220 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:34:56.187492 kubelet[3220]: I0904 17:34:56.187457 3220 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:34:56.190472 kubelet[3220]: I0904 17:34:56.190322 3220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:34:56.190609 kubelet[3220]: I0904 17:34:56.190585 3220 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:34:56.196069 kubelet[3220]: I0904 17:34:56.194918 3220 
volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:34:56.196069 kubelet[3220]: I0904 17:34:56.195811 3220 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:34:56.196069 kubelet[3220]: I0904 17:34:56.195994 3220 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:34:56.210757 kubelet[3220]: I0904 17:34:56.209880 3220 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:34:56.210757 kubelet[3220]: I0904 17:34:56.210005 3220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:34:56.218135 kubelet[3220]: E0904 17:34:56.218096 3220 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:34:56.230392 kubelet[3220]: I0904 17:34:56.230151 3220 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:34:56.236104 kubelet[3220]: I0904 17:34:56.235963 3220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:34:56.239277 kubelet[3220]: I0904 17:34:56.239236 3220 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:34:56.239277 kubelet[3220]: I0904 17:34:56.239279 3220 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:34:56.239478 kubelet[3220]: I0904 17:34:56.239305 3220 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:34:56.239478 kubelet[3220]: E0904 17:34:56.239377 3220 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:34:56.299091 kubelet[3220]: I0904 17:34:56.298583 3220 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-246" Sep 4 17:34:56.313545 kubelet[3220]: I0904 17:34:56.313517 3220 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-21-246" Sep 4 17:34:56.314430 kubelet[3220]: I0904 17:34:56.314408 3220 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-246" Sep 4 17:34:56.342925 kubelet[3220]: E0904 17:34:56.342251 3220 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:34:56.353545 kubelet[3220]: I0904 17:34:56.352298 3220 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:34:56.353545 kubelet[3220]: I0904 17:34:56.352393 3220 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:34:56.353545 kubelet[3220]: I0904 17:34:56.352419 3220 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:34:56.353545 kubelet[3220]: I0904 17:34:56.352619 3220 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:34:56.353545 kubelet[3220]: I0904 17:34:56.352633 3220 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:34:56.353545 kubelet[3220]: I0904 17:34:56.352659 3220 policy_none.go:49] "None policy: Start" Sep 4 17:34:56.355247 kubelet[3220]: I0904 17:34:56.354073 3220 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:34:56.355247 kubelet[3220]: I0904 
17:34:56.354099 3220 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:34:56.355790 kubelet[3220]: I0904 17:34:56.355741 3220 state_mem.go:75] "Updated machine memory state" Sep 4 17:34:56.379416 kubelet[3220]: I0904 17:34:56.379391 3220 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:34:56.386064 kubelet[3220]: I0904 17:34:56.383466 3220 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:34:56.389985 kubelet[3220]: I0904 17:34:56.386855 3220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:34:56.542713 kubelet[3220]: I0904 17:34:56.542650 3220 topology_manager.go:215] "Topology Admit Handler" podUID="41de4750edcc953d52b631746d5c4035" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-246" Sep 4 17:34:56.542899 kubelet[3220]: I0904 17:34:56.542790 3220 topology_manager.go:215] "Topology Admit Handler" podUID="0b42d74bbc2b34ce838b953830ea9344" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-246" Sep 4 17:34:56.542899 kubelet[3220]: I0904 17:34:56.542863 3220 topology_manager.go:215] "Topology Admit Handler" podUID="3472507ac09e1673115c1422df081d02" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-246" Sep 4 17:34:56.602474 kubelet[3220]: I0904 17:34:56.601625 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41de4750edcc953d52b631746d5c4035-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-246\" (UID: \"41de4750edcc953d52b631746d5c4035\") " pod="kube-system/kube-apiserver-ip-172-31-21-246" Sep 4 17:34:56.602474 kubelet[3220]: I0904 17:34:56.601710 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246"
Sep 4 17:34:56.602474 kubelet[3220]: I0904 17:34:56.601765 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3472507ac09e1673115c1422df081d02-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-246\" (UID: \"3472507ac09e1673115c1422df081d02\") " pod="kube-system/kube-scheduler-ip-172-31-21-246"
Sep 4 17:34:56.602474 kubelet[3220]: I0904 17:34:56.601864 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246"
Sep 4 17:34:56.602474 kubelet[3220]: I0904 17:34:56.601997 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246"
Sep 4 17:34:56.603068 kubelet[3220]: I0904 17:34:56.602117 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41de4750edcc953d52b631746d5c4035-ca-certs\") pod \"kube-apiserver-ip-172-31-21-246\" (UID: \"41de4750edcc953d52b631746d5c4035\") " pod="kube-system/kube-apiserver-ip-172-31-21-246"
Sep 4 17:34:56.603068 kubelet[3220]: I0904 17:34:56.602144 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41de4750edcc953d52b631746d5c4035-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-246\" (UID: \"41de4750edcc953d52b631746d5c4035\") " pod="kube-system/kube-apiserver-ip-172-31-21-246"
Sep 4 17:34:56.603068 kubelet[3220]: I0904 17:34:56.602167 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246"
Sep 4 17:34:56.603068 kubelet[3220]: I0904 17:34:56.602192 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b42d74bbc2b34ce838b953830ea9344-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-246\" (UID: \"0b42d74bbc2b34ce838b953830ea9344\") " pod="kube-system/kube-controller-manager-ip-172-31-21-246"
Sep 4 17:34:57.122412 kubelet[3220]: I0904 17:34:57.122068 3220 apiserver.go:52] "Watching apiserver"
Sep 4 17:34:57.197149 kubelet[3220]: I0904 17:34:57.197096 3220 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Sep 4 17:34:57.336755 kubelet[3220]: I0904 17:34:57.336678 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-246" podStartSLOduration=1.336655721 podStartE2EDuration="1.336655721s" podCreationTimestamp="2024-09-04 17:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:34:57.333672776 +0000 UTC m=+1.377774837" watchObservedRunningTime="2024-09-04 17:34:57.336655721 +0000 UTC m=+1.380757782"
Sep 4 17:34:57.358937 kubelet[3220]: I0904 17:34:57.358858 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-246" podStartSLOduration=1.358834088 podStartE2EDuration="1.358834088s" podCreationTimestamp="2024-09-04 17:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:34:57.348350093 +0000 UTC m=+1.392452156" watchObservedRunningTime="2024-09-04 17:34:57.358834088 +0000 UTC m=+1.402936148"
Sep 4 17:34:57.382565 kubelet[3220]: I0904 17:34:57.382391 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-246" podStartSLOduration=1.382362433 podStartE2EDuration="1.382362433s" podCreationTimestamp="2024-09-04 17:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:34:57.359234206 +0000 UTC m=+1.403336267" watchObservedRunningTime="2024-09-04 17:34:57.382362433 +0000 UTC m=+1.426464494"
Sep 4 17:35:03.265226 sudo[2199]: pam_unix(sudo:session): session closed for user root
Sep 4 17:35:03.291092 sshd[2196]: pam_unix(sshd:session): session closed for user core
Sep 4 17:35:03.300646 systemd[1]: sshd@6-172.31.21.246:22-139.178.68.195:53978.service: Deactivated successfully.
Sep 4 17:35:03.310927 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 17:35:03.311175 systemd[1]: session-7.scope: Consumed 5.352s CPU time, 136.3M memory peak, 0B memory swap peak.
Sep 4 17:35:03.321107 systemd-logind[1856]: Session 7 logged out. Waiting for processes to exit.
Sep 4 17:35:03.324288 systemd-logind[1856]: Removed session 7.
Sep 4 17:35:09.871007 kubelet[3220]: I0904 17:35:09.870951 3220 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 17:35:09.872502 kubelet[3220]: I0904 17:35:09.872393 3220 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 17:35:09.872671 containerd[1879]: time="2024-09-04T17:35:09.872026330Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 17:35:10.406066 kubelet[3220]: I0904 17:35:10.401553 3220 topology_manager.go:215] "Topology Admit Handler" podUID="48350bac-001f-4bf4-b757-0a209c7c35e7" podNamespace="kube-system" podName="kube-proxy-8kwqh"
Sep 4 17:35:10.427625 systemd[1]: Created slice kubepods-besteffort-pod48350bac_001f_4bf4_b757_0a209c7c35e7.slice - libcontainer container kubepods-besteffort-pod48350bac_001f_4bf4_b757_0a209c7c35e7.slice.
Sep 4 17:35:10.520266 kubelet[3220]: I0904 17:35:10.520141 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48350bac-001f-4bf4-b757-0a209c7c35e7-xtables-lock\") pod \"kube-proxy-8kwqh\" (UID: \"48350bac-001f-4bf4-b757-0a209c7c35e7\") " pod="kube-system/kube-proxy-8kwqh"
Sep 4 17:35:10.520266 kubelet[3220]: I0904 17:35:10.520245 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/48350bac-001f-4bf4-b757-0a209c7c35e7-kube-proxy\") pod \"kube-proxy-8kwqh\" (UID: \"48350bac-001f-4bf4-b757-0a209c7c35e7\") " pod="kube-system/kube-proxy-8kwqh"
Sep 4 17:35:10.520266 kubelet[3220]: I0904 17:35:10.520277 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48350bac-001f-4bf4-b757-0a209c7c35e7-lib-modules\") pod \"kube-proxy-8kwqh\" (UID: \"48350bac-001f-4bf4-b757-0a209c7c35e7\") " pod="kube-system/kube-proxy-8kwqh"
Sep 4 17:35:10.520555 kubelet[3220]: I0904 17:35:10.520304 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxc9f\" (UniqueName: \"kubernetes.io/projected/48350bac-001f-4bf4-b757-0a209c7c35e7-kube-api-access-bxc9f\") pod \"kube-proxy-8kwqh\" (UID: \"48350bac-001f-4bf4-b757-0a209c7c35e7\") " pod="kube-system/kube-proxy-8kwqh"
Sep 4 17:35:10.755130 containerd[1879]: time="2024-09-04T17:35:10.755069915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8kwqh,Uid:48350bac-001f-4bf4-b757-0a209c7c35e7,Namespace:kube-system,Attempt:0,}"
Sep 4 17:35:10.798282 containerd[1879]: time="2024-09-04T17:35:10.798152635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:35:10.799221 containerd[1879]: time="2024-09-04T17:35:10.799105672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:35:10.799221 containerd[1879]: time="2024-09-04T17:35:10.799157537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:35:10.799529 containerd[1879]: time="2024-09-04T17:35:10.799411183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:35:10.837437 systemd[1]: Started cri-containerd-5280eb4dbfa1b33efc20c3424e93ac6bfc93a46831dedf7f517ae01abcf638f8.scope - libcontainer container 5280eb4dbfa1b33efc20c3424e93ac6bfc93a46831dedf7f517ae01abcf638f8.
Sep 4 17:35:10.972135 kubelet[3220]: I0904 17:35:10.967984 3220 topology_manager.go:215] "Topology Admit Handler" podUID="ead17ba1-10b1-4895-9365-59983b63ecfb" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-kpwg7"
Sep 4 17:35:11.013142 systemd[1]: Created slice kubepods-besteffort-podead17ba1_10b1_4895_9365_59983b63ecfb.slice - libcontainer container kubepods-besteffort-podead17ba1_10b1_4895_9365_59983b63ecfb.slice.
Sep 4 17:35:11.025385 containerd[1879]: time="2024-09-04T17:35:11.023965790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8kwqh,Uid:48350bac-001f-4bf4-b757-0a209c7c35e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5280eb4dbfa1b33efc20c3424e93ac6bfc93a46831dedf7f517ae01abcf638f8\""
Sep 4 17:35:11.039178 containerd[1879]: time="2024-09-04T17:35:11.039108163Z" level=info msg="CreateContainer within sandbox \"5280eb4dbfa1b33efc20c3424e93ac6bfc93a46831dedf7f517ae01abcf638f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 17:35:11.077667 containerd[1879]: time="2024-09-04T17:35:11.077609029Z" level=info msg="CreateContainer within sandbox \"5280eb4dbfa1b33efc20c3424e93ac6bfc93a46831dedf7f517ae01abcf638f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ba67849cbc25f099c8625a7cd18ca9e3f7c3382be2688435a9061a1b37de4e56\""
Sep 4 17:35:11.079401 containerd[1879]: time="2024-09-04T17:35:11.079012323Z" level=info msg="StartContainer for \"ba67849cbc25f099c8625a7cd18ca9e3f7c3382be2688435a9061a1b37de4e56\""
Sep 4 17:35:11.125432 kubelet[3220]: I0904 17:35:11.124989 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pzbw\" (UniqueName: \"kubernetes.io/projected/ead17ba1-10b1-4895-9365-59983b63ecfb-kube-api-access-7pzbw\") pod \"tigera-operator-77f994b5bb-kpwg7\" (UID: \"ead17ba1-10b1-4895-9365-59983b63ecfb\") " pod="tigera-operator/tigera-operator-77f994b5bb-kpwg7"
Sep 4 17:35:11.125432 kubelet[3220]: I0904 17:35:11.125054 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ead17ba1-10b1-4895-9365-59983b63ecfb-var-lib-calico\") pod \"tigera-operator-77f994b5bb-kpwg7\" (UID: \"ead17ba1-10b1-4895-9365-59983b63ecfb\") " pod="tigera-operator/tigera-operator-77f994b5bb-kpwg7"
Sep 4 17:35:11.129582 systemd[1]: Started cri-containerd-ba67849cbc25f099c8625a7cd18ca9e3f7c3382be2688435a9061a1b37de4e56.scope - libcontainer container ba67849cbc25f099c8625a7cd18ca9e3f7c3382be2688435a9061a1b37de4e56.
Sep 4 17:35:11.200800 containerd[1879]: time="2024-09-04T17:35:11.200744358Z" level=info msg="StartContainer for \"ba67849cbc25f099c8625a7cd18ca9e3f7c3382be2688435a9061a1b37de4e56\" returns successfully"
Sep 4 17:35:11.335824 containerd[1879]: time="2024-09-04T17:35:11.334097318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-kpwg7,Uid:ead17ba1-10b1-4895-9365-59983b63ecfb,Namespace:tigera-operator,Attempt:0,}"
Sep 4 17:35:11.408117 containerd[1879]: time="2024-09-04T17:35:11.407816717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:35:11.408656 containerd[1879]: time="2024-09-04T17:35:11.408109502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:35:11.408656 containerd[1879]: time="2024-09-04T17:35:11.408137833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:35:11.408992 containerd[1879]: time="2024-09-04T17:35:11.408809428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:35:11.459636 systemd[1]: Started cri-containerd-b3fcddf0f58c009b767690d6cbefec220ef11e31853d6c5f939d324013808e4c.scope - libcontainer container b3fcddf0f58c009b767690d6cbefec220ef11e31853d6c5f939d324013808e4c.
Sep 4 17:35:11.600473 containerd[1879]: time="2024-09-04T17:35:11.600247751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-kpwg7,Uid:ead17ba1-10b1-4895-9365-59983b63ecfb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b3fcddf0f58c009b767690d6cbefec220ef11e31853d6c5f939d324013808e4c\""
Sep 4 17:35:11.607013 containerd[1879]: time="2024-09-04T17:35:11.606476219Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 17:35:11.675828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498514389.mount: Deactivated successfully.
Sep 4 17:35:12.888925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370191356.mount: Deactivated successfully.
Sep 4 17:35:14.519154 containerd[1879]: time="2024-09-04T17:35:14.517835862Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:14.523276 containerd[1879]: time="2024-09-04T17:35:14.520990118Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136533"
Sep 4 17:35:14.525254 containerd[1879]: time="2024-09-04T17:35:14.523778419Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:14.531616 containerd[1879]: time="2024-09-04T17:35:14.531555493Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:14.533072 containerd[1879]: time="2024-09-04T17:35:14.533021328Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.926494361s"
Sep 4 17:35:14.533467 containerd[1879]: time="2024-09-04T17:35:14.533331927Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Sep 4 17:35:14.550305 containerd[1879]: time="2024-09-04T17:35:14.549923942Z" level=info msg="CreateContainer within sandbox \"b3fcddf0f58c009b767690d6cbefec220ef11e31853d6c5f939d324013808e4c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 17:35:14.628765 containerd[1879]: time="2024-09-04T17:35:14.628716357Z" level=info msg="CreateContainer within sandbox \"b3fcddf0f58c009b767690d6cbefec220ef11e31853d6c5f939d324013808e4c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3\""
Sep 4 17:35:14.639506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304599444.mount: Deactivated successfully.
Sep 4 17:35:14.640756 containerd[1879]: time="2024-09-04T17:35:14.640709370Z" level=info msg="StartContainer for \"f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3\""
Sep 4 17:35:14.771294 systemd[1]: run-containerd-runc-k8s.io-f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3-runc.n2me10.mount: Deactivated successfully.
Sep 4 17:35:14.794609 systemd[1]: Started cri-containerd-f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3.scope - libcontainer container f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3.
Sep 4 17:35:14.857875 containerd[1879]: time="2024-09-04T17:35:14.857818208Z" level=info msg="StartContainer for \"f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3\" returns successfully"
Sep 4 17:35:15.432069 kubelet[3220]: I0904 17:35:15.430980 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8kwqh" podStartSLOduration=5.43095382 podStartE2EDuration="5.43095382s" podCreationTimestamp="2024-09-04 17:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:35:11.373562015 +0000 UTC m=+15.417664074" watchObservedRunningTime="2024-09-04 17:35:15.43095382 +0000 UTC m=+19.475055884"
Sep 4 17:35:15.432069 kubelet[3220]: I0904 17:35:15.431277 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-kpwg7" podStartSLOduration=2.497584724 podStartE2EDuration="5.431205348s" podCreationTimestamp="2024-09-04 17:35:10 +0000 UTC" firstStartedPulling="2024-09-04 17:35:11.604407221 +0000 UTC m=+15.648509268" lastFinishedPulling="2024-09-04 17:35:14.538027842 +0000 UTC m=+18.582129892" observedRunningTime="2024-09-04 17:35:15.430744544 +0000 UTC m=+19.474846602" watchObservedRunningTime="2024-09-04 17:35:15.431205348 +0000 UTC m=+19.475307414"
Sep 4 17:35:18.785557 kubelet[3220]: I0904 17:35:18.784061 3220 topology_manager.go:215] "Topology Admit Handler" podUID="2a2a5c82-d754-4b67-bfe8-d6ae7734e019" podNamespace="calico-system" podName="calico-typha-5c457fbdd-2tkrh"
Sep 4 17:35:18.814914 systemd[1]: Created slice kubepods-besteffort-pod2a2a5c82_d754_4b67_bfe8_d6ae7734e019.slice - libcontainer container kubepods-besteffort-pod2a2a5c82_d754_4b67_bfe8_d6ae7734e019.slice.
Sep 4 17:35:18.865648 kubelet[3220]: I0904 17:35:18.865593 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a2a5c82-d754-4b67-bfe8-d6ae7734e019-tigera-ca-bundle\") pod \"calico-typha-5c457fbdd-2tkrh\" (UID: \"2a2a5c82-d754-4b67-bfe8-d6ae7734e019\") " pod="calico-system/calico-typha-5c457fbdd-2tkrh"
Sep 4 17:35:18.865648 kubelet[3220]: I0904 17:35:18.865653 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a2a5c82-d754-4b67-bfe8-d6ae7734e019-typha-certs\") pod \"calico-typha-5c457fbdd-2tkrh\" (UID: \"2a2a5c82-d754-4b67-bfe8-d6ae7734e019\") " pod="calico-system/calico-typha-5c457fbdd-2tkrh"
Sep 4 17:35:18.865883 kubelet[3220]: I0904 17:35:18.865682 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vrgw\" (UniqueName: \"kubernetes.io/projected/2a2a5c82-d754-4b67-bfe8-d6ae7734e019-kube-api-access-4vrgw\") pod \"calico-typha-5c457fbdd-2tkrh\" (UID: \"2a2a5c82-d754-4b67-bfe8-d6ae7734e019\") " pod="calico-system/calico-typha-5c457fbdd-2tkrh"
Sep 4 17:35:18.982414 kubelet[3220]: I0904 17:35:18.981259 3220 topology_manager.go:215] "Topology Admit Handler" podUID="8e845aa6-c231-45f5-9e15-3c1544c4f5ef" podNamespace="calico-system" podName="calico-node-7dfd8"
Sep 4 17:35:19.005485 kubelet[3220]: W0904 17:35:19.005446 3220 reflector.go:547] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-21-246" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-21-246' and this object
Sep 4 17:35:19.009510 kubelet[3220]: E0904 17:35:19.007248 3220 reflector.go:150] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-21-246" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-21-246' and this object
Sep 4 17:35:19.009510 kubelet[3220]: W0904 17:35:19.008146 3220 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-21-246" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-21-246' and this object
Sep 4 17:35:19.009510 kubelet[3220]: E0904 17:35:19.008176 3220 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-21-246" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-21-246' and this object
Sep 4 17:35:19.013951 systemd[1]: Created slice kubepods-besteffort-pod8e845aa6_c231_45f5_9e15_3c1544c4f5ef.slice - libcontainer container kubepods-besteffort-pod8e845aa6_c231_45f5_9e15_3c1544c4f5ef.slice.
Sep 4 17:35:19.066983 kubelet[3220]: I0904 17:35:19.066814 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-cni-bin-dir\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.066983 kubelet[3220]: I0904 17:35:19.066870 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-cni-log-dir\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.066983 kubelet[3220]: I0904 17:35:19.066894 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t5n5\" (UniqueName: \"kubernetes.io/projected/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-kube-api-access-2t5n5\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.066983 kubelet[3220]: I0904 17:35:19.066918 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-cni-net-dir\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.066983 kubelet[3220]: I0904 17:35:19.066951 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-flexvol-driver-host\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.067301 kubelet[3220]: I0904 17:35:19.066977 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-xtables-lock\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.067301 kubelet[3220]: I0904 17:35:19.067002 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-lib-modules\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.067301 kubelet[3220]: I0904 17:35:19.067025 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-tigera-ca-bundle\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.067301 kubelet[3220]: I0904 17:35:19.067048 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-node-certs\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.067301 kubelet[3220]: I0904 17:35:19.067072 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-var-run-calico\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.067703 kubelet[3220]: I0904 17:35:19.067096 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-policysync\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.067703 kubelet[3220]: I0904 17:35:19.067127 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-var-lib-calico\") pod \"calico-node-7dfd8\" (UID: \"8e845aa6-c231-45f5-9e15-3c1544c4f5ef\") " pod="calico-system/calico-node-7dfd8"
Sep 4 17:35:19.113273 kubelet[3220]: I0904 17:35:19.111323 3220 topology_manager.go:215] "Topology Admit Handler" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31" podNamespace="calico-system" podName="csi-node-driver-6kpnd"
Sep 4 17:35:19.113273 kubelet[3220]: E0904 17:35:19.111751 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31"
Sep 4 17:35:19.133138 containerd[1879]: time="2024-09-04T17:35:19.133072556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c457fbdd-2tkrh,Uid:2a2a5c82-d754-4b67-bfe8-d6ae7734e019,Namespace:calico-system,Attempt:0,}"
Sep 4 17:35:19.181469 kubelet[3220]: E0904 17:35:19.181190 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.181469 kubelet[3220]: W0904 17:35:19.181222 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.183364 kubelet[3220]: E0904 17:35:19.182074 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.210119 kubelet[3220]: E0904 17:35:19.210082 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.211331 kubelet[3220]: W0904 17:35:19.210281 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.211331 kubelet[3220]: E0904 17:35:19.210358 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.241339 containerd[1879]: time="2024-09-04T17:35:19.240218200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:35:19.241339 containerd[1879]: time="2024-09-04T17:35:19.241038447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:35:19.241339 containerd[1879]: time="2024-09-04T17:35:19.241126694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:35:19.244723 containerd[1879]: time="2024-09-04T17:35:19.244589878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:35:19.269883 kubelet[3220]: E0904 17:35:19.269646 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.269883 kubelet[3220]: W0904 17:35:19.269678 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.269883 kubelet[3220]: E0904 17:35:19.269728 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.270579 kubelet[3220]: E0904 17:35:19.270414 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.270579 kubelet[3220]: W0904 17:35:19.270500 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.270579 kubelet[3220]: I0904 17:35:19.270466 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4cc4158-2c05-4c12-9faa-641987eb3d31-kubelet-dir\") pod \"csi-node-driver-6kpnd\" (UID: \"b4cc4158-2c05-4c12-9faa-641987eb3d31\") " pod="calico-system/csi-node-driver-6kpnd"
Sep 4 17:35:19.271212 kubelet[3220]: E0904 17:35:19.270765 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.272873 kubelet[3220]: E0904 17:35:19.271709 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.272873 kubelet[3220]: W0904 17:35:19.271768 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.272873 kubelet[3220]: E0904 17:35:19.271792 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.273305 kubelet[3220]: E0904 17:35:19.273222 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.273305 kubelet[3220]: W0904 17:35:19.273236 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.273305 kubelet[3220]: E0904 17:35:19.273281 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.276109 kubelet[3220]: I0904 17:35:19.273536 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4cc4158-2c05-4c12-9faa-641987eb3d31-socket-dir\") pod \"csi-node-driver-6kpnd\" (UID: \"b4cc4158-2c05-4c12-9faa-641987eb3d31\") " pod="calico-system/csi-node-driver-6kpnd"
Sep 4 17:35:19.277489 kubelet[3220]: E0904 17:35:19.277274 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.277489 kubelet[3220]: W0904 17:35:19.277304 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.277489 kubelet[3220]: E0904 17:35:19.277387 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.278076 kubelet[3220]: E0904 17:35:19.278055 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.278200 kubelet[3220]: W0904 17:35:19.278166 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.288955 kubelet[3220]: E0904 17:35:19.280299 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.288955 kubelet[3220]: W0904 17:35:19.280368 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.288955 kubelet[3220]: E0904 17:35:19.280390 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.288955 kubelet[3220]: I0904 17:35:19.280544 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4cc4158-2c05-4c12-9faa-641987eb3d31-registration-dir\") pod \"csi-node-driver-6kpnd\" (UID: \"b4cc4158-2c05-4c12-9faa-641987eb3d31\") " pod="calico-system/csi-node-driver-6kpnd"
Sep 4 17:35:19.288955 kubelet[3220]: E0904 17:35:19.279807 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.288955 kubelet[3220]: E0904 17:35:19.288306 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.288955 kubelet[3220]: W0904 17:35:19.288361 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.288955 kubelet[3220]: E0904 17:35:19.288392 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.291370 kubelet[3220]: E0904 17:35:19.290676 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.291370 kubelet[3220]: W0904 17:35:19.290700 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.291370 kubelet[3220]: E0904 17:35:19.290728 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.291370 kubelet[3220]: I0904 17:35:19.290823 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b4cc4158-2c05-4c12-9faa-641987eb3d31-varrun\") pod \"csi-node-driver-6kpnd\" (UID: \"b4cc4158-2c05-4c12-9faa-641987eb3d31\") " pod="calico-system/csi-node-driver-6kpnd"
Sep 4 17:35:19.292293 kubelet[3220]: E0904 17:35:19.291960 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.292293 kubelet[3220]: W0904 17:35:19.292044 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.292293 kubelet[3220]: E0904 17:35:19.292204 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.292293 kubelet[3220]: I0904 17:35:19.292244 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpzwp\" (UniqueName: \"kubernetes.io/projected/b4cc4158-2c05-4c12-9faa-641987eb3d31-kube-api-access-cpzwp\") pod \"csi-node-driver-6kpnd\" (UID: \"b4cc4158-2c05-4c12-9faa-641987eb3d31\") " pod="calico-system/csi-node-driver-6kpnd"
Sep 4 17:35:19.294365 kubelet[3220]: E0904 17:35:19.293884 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.294365 kubelet[3220]: W0904 17:35:19.293902 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.294365 kubelet[3220]: E0904 17:35:19.294008 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:19.294966 kubelet[3220]: E0904 17:35:19.294779 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:19.294966 kubelet[3220]: W0904 17:35:19.294796 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:19.294966 kubelet[3220]: E0904 17:35:19.294882 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 4 17:35:19.295475 kubelet[3220]: E0904 17:35:19.295279 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.295475 kubelet[3220]: W0904 17:35:19.295292 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.295475 kubelet[3220]: E0904 17:35:19.295446 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.295880 kubelet[3220]: E0904 17:35:19.295776 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.295880 kubelet[3220]: W0904 17:35:19.295791 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.295880 kubelet[3220]: E0904 17:35:19.295807 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.296386 kubelet[3220]: E0904 17:35:19.296268 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.296386 kubelet[3220]: W0904 17:35:19.296282 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.296386 kubelet[3220]: E0904 17:35:19.296296 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.297062 kubelet[3220]: E0904 17:35:19.297006 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.297062 kubelet[3220]: W0904 17:35:19.297019 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.297062 kubelet[3220]: E0904 17:35:19.297034 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.343074 systemd[1]: Started cri-containerd-92b7291c51071a84eb442e56307d8fc4ef7aaddb15a0406c9305a399d5bc35b4.scope - libcontainer container 92b7291c51071a84eb442e56307d8fc4ef7aaddb15a0406c9305a399d5bc35b4. 
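The flood of kubelet messages above all trace back to one missing binary: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds. At plugin-probe time the kubelet execs each FlexVolume driver with the `init` argument and parses its stdout as a JSON status object; since the executable is absent, stdout is empty and JSON decoding fails with "unexpected end of JSON input". A minimal sketch of the call contract the kubelet expects is below — an illustrative stub only, modeled on the FlexVolume driver interface; the real `uds` driver shipped by the node agent is a different program that was simply not installed on this node.

```shell
# Sketch of the FlexVolume driver call contract (assumption: the function name
# and messages here are illustrative, not the real nodeagent~uds driver).
# The kubelet invokes the driver as:  <driver> init|mount|unmount ...
# and parses stdout as JSON. Empty stdout is exactly what produces the
# "Failed to unmarshal output for command: init" errors in this log.
flexvolume_driver() {
  case "$1" in
    init)
      # Probe call made during plugin discovery; must emit a JSON status.
      echo '{"status":"Success","capabilities":{"attach":false}}'
      ;;
    *)
      # Even unimplemented calls should return JSON rather than empty output.
      echo '{"status":"Not supported","message":"call '"$1"' not implemented"}'
      return 1
      ;;
  esac
}

flexvolume_driver init
# -> {"status":"Success","capabilities":{"attach":false}}
```

Dropping such an executable into the `nodeagent~uds` plugin directory (or removing the stale directory) is what stops the kubelet from re-logging this probe failure on every volume reconcile pass.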
Sep 4 17:35:19.394912 kubelet[3220]: E0904 17:35:19.394212 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.394912 kubelet[3220]: W0904 17:35:19.394240 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.394912 kubelet[3220]: E0904 17:35:19.394272 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.395701 kubelet[3220]: E0904 17:35:19.395590 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.395701 kubelet[3220]: W0904 17:35:19.395611 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.395701 kubelet[3220]: E0904 17:35:19.395655 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.396191 kubelet[3220]: E0904 17:35:19.396174 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.396330 kubelet[3220]: W0904 17:35:19.396282 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.396569 kubelet[3220]: E0904 17:35:19.396534 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.397016 kubelet[3220]: E0904 17:35:19.397000 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.397182 kubelet[3220]: W0904 17:35:19.397100 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.397182 kubelet[3220]: E0904 17:35:19.397134 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.397936 kubelet[3220]: E0904 17:35:19.397798 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.397936 kubelet[3220]: W0904 17:35:19.397814 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.398274 kubelet[3220]: E0904 17:35:19.398088 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.398413 kubelet[3220]: E0904 17:35:19.398402 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.398580 kubelet[3220]: W0904 17:35:19.398482 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.398698 kubelet[3220]: E0904 17:35:19.398654 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.398867 kubelet[3220]: E0904 17:35:19.398805 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.398867 kubelet[3220]: W0904 17:35:19.398817 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.399162 kubelet[3220]: E0904 17:35:19.399043 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.399272 kubelet[3220]: E0904 17:35:19.399259 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.399501 kubelet[3220]: W0904 17:35:19.399405 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.399501 kubelet[3220]: E0904 17:35:19.399438 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.399982 kubelet[3220]: E0904 17:35:19.399847 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.399982 kubelet[3220]: W0904 17:35:19.399860 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.400292 kubelet[3220]: E0904 17:35:19.400123 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.400442 kubelet[3220]: E0904 17:35:19.400430 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.400787 kubelet[3220]: W0904 17:35:19.400684 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.400990 kubelet[3220]: E0904 17:35:19.400869 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.401304 kubelet[3220]: E0904 17:35:19.401203 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.401304 kubelet[3220]: W0904 17:35:19.401215 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.401641 kubelet[3220]: E0904 17:35:19.401510 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.401766 kubelet[3220]: E0904 17:35:19.401739 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.401766 kubelet[3220]: W0904 17:35:19.401751 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.402040 kubelet[3220]: E0904 17:35:19.401934 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.402337 kubelet[3220]: E0904 17:35:19.402222 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.402337 kubelet[3220]: W0904 17:35:19.402234 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.402718 kubelet[3220]: E0904 17:35:19.402624 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.402718 kubelet[3220]: W0904 17:35:19.402638 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.403018 kubelet[3220]: E0904 17:35:19.402843 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.403018 kubelet[3220]: E0904 17:35:19.402875 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.403388 kubelet[3220]: E0904 17:35:19.403254 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.403388 kubelet[3220]: W0904 17:35:19.403266 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.403714 kubelet[3220]: E0904 17:35:19.403627 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.403714 kubelet[3220]: E0904 17:35:19.403689 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.403714 kubelet[3220]: W0904 17:35:19.403698 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.404027 kubelet[3220]: E0904 17:35:19.403943 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.404231 kubelet[3220]: E0904 17:35:19.404218 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.404433 kubelet[3220]: W0904 17:35:19.404308 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.404564 kubelet[3220]: E0904 17:35:19.404487 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.404806 kubelet[3220]: E0904 17:35:19.404794 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.405133 kubelet[3220]: W0904 17:35:19.404878 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.405221 kubelet[3220]: E0904 17:35:19.405209 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.405621 kubelet[3220]: E0904 17:35:19.405520 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.405621 kubelet[3220]: W0904 17:35:19.405533 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.405857 kubelet[3220]: E0904 17:35:19.405744 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.406262 kubelet[3220]: E0904 17:35:19.406114 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.406262 kubelet[3220]: W0904 17:35:19.406128 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.406462 kubelet[3220]: E0904 17:35:19.406417 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.406753 kubelet[3220]: E0904 17:35:19.406578 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.406753 kubelet[3220]: W0904 17:35:19.406589 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.406916 kubelet[3220]: E0904 17:35:19.406877 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.407796 kubelet[3220]: E0904 17:35:19.407691 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.407796 kubelet[3220]: W0904 17:35:19.407705 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.407988 kubelet[3220]: E0904 17:35:19.407916 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.408344 kubelet[3220]: E0904 17:35:19.408191 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.408344 kubelet[3220]: W0904 17:35:19.408204 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.408755 kubelet[3220]: E0904 17:35:19.408626 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.408755 kubelet[3220]: W0904 17:35:19.408637 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.408890 kubelet[3220]: E0904 17:35:19.408877 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.409129 kubelet[3220]: E0904 17:35:19.409078 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.409745 kubelet[3220]: E0904 17:35:19.409347 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.409745 kubelet[3220]: W0904 17:35:19.409360 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.409745 kubelet[3220]: E0904 17:35:19.409377 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.410056 kubelet[3220]: E0904 17:35:19.410045 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.410131 kubelet[3220]: W0904 17:35:19.410120 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.410242 kubelet[3220]: E0904 17:35:19.410230 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.432003 kubelet[3220]: E0904 17:35:19.431953 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.432309 kubelet[3220]: W0904 17:35:19.432200 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.432309 kubelet[3220]: E0904 17:35:19.432240 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.501737 kubelet[3220]: E0904 17:35:19.501625 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.501737 kubelet[3220]: W0904 17:35:19.501652 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.501737 kubelet[3220]: E0904 17:35:19.501679 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.502830 containerd[1879]: time="2024-09-04T17:35:19.502636465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c457fbdd-2tkrh,Uid:2a2a5c82-d754-4b67-bfe8-d6ae7734e019,Namespace:calico-system,Attempt:0,} returns sandbox id \"92b7291c51071a84eb442e56307d8fc4ef7aaddb15a0406c9305a399d5bc35b4\"" Sep 4 17:35:19.505798 containerd[1879]: time="2024-09-04T17:35:19.505608730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:35:19.608182 kubelet[3220]: E0904 17:35:19.604100 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.608182 kubelet[3220]: W0904 17:35:19.604130 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.608182 kubelet[3220]: E0904 17:35:19.604157 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.706286 kubelet[3220]: E0904 17:35:19.705982 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.706286 kubelet[3220]: W0904 17:35:19.706010 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.706286 kubelet[3220]: E0904 17:35:19.706216 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:19.808464 kubelet[3220]: E0904 17:35:19.808428 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.808464 kubelet[3220]: W0904 17:35:19.808460 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.809096 kubelet[3220]: E0904 17:35:19.808488 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:19.910343 kubelet[3220]: E0904 17:35:19.910203 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:19.910343 kubelet[3220]: W0904 17:35:19.910233 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:19.910343 kubelet[3220]: E0904 17:35:19.910261 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:20.011733 kubelet[3220]: E0904 17:35:20.011693 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:20.011733 kubelet[3220]: W0904 17:35:20.011717 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:20.012158 kubelet[3220]: E0904 17:35:20.011745 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:20.116298 kubelet[3220]: E0904 17:35:20.116209 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:20.116298 kubelet[3220]: W0904 17:35:20.116292 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:20.116544 kubelet[3220]: E0904 17:35:20.116345 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:20.176940 kubelet[3220]: E0904 17:35:20.176401 3220 secret.go:194] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Sep 4 17:35:20.176940 kubelet[3220]: E0904 17:35:20.176573 3220 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-node-certs podName:8e845aa6-c231-45f5-9e15-3c1544c4f5ef nodeName:}" failed. No retries permitted until 2024-09-04 17:35:20.676525697 +0000 UTC m=+24.720627747 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/8e845aa6-c231-45f5-9e15-3c1544c4f5ef-node-certs") pod "calico-node-7dfd8" (UID: "8e845aa6-c231-45f5-9e15-3c1544c4f5ef") : failed to sync secret cache: timed out waiting for the condition
Sep 4 17:35:20.226651 kubelet[3220]: E0904 17:35:20.219451 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.226651 kubelet[3220]: W0904 17:35:20.223664 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.226651 kubelet[3220]: E0904 17:35:20.223699 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.326376 kubelet[3220]: E0904 17:35:20.326263 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.326376 kubelet[3220]: W0904 17:35:20.326373 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.327244 kubelet[3220]: E0904 17:35:20.326497 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.428884 kubelet[3220]: E0904 17:35:20.428666 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.428884 kubelet[3220]: W0904 17:35:20.428700 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.428884 kubelet[3220]: E0904 17:35:20.428731 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.530870 kubelet[3220]: E0904 17:35:20.530823 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.531027 kubelet[3220]: W0904 17:35:20.530887 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.531027 kubelet[3220]: E0904 17:35:20.530916 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.632708 kubelet[3220]: E0904 17:35:20.632586 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.632708 kubelet[3220]: W0904 17:35:20.632614 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.632708 kubelet[3220]: E0904 17:35:20.632640 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.733547 kubelet[3220]: E0904 17:35:20.733514 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.733547 kubelet[3220]: W0904 17:35:20.733546 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.733740 kubelet[3220]: E0904 17:35:20.733574 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.733969 kubelet[3220]: E0904 17:35:20.733955 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.734211 kubelet[3220]: W0904 17:35:20.733968 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.734211 kubelet[3220]: E0904 17:35:20.733986 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.738604 kubelet[3220]: E0904 17:35:20.734235 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.738604 kubelet[3220]: W0904 17:35:20.738436 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.738604 kubelet[3220]: E0904 17:35:20.738472 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.742232 kubelet[3220]: E0904 17:35:20.738931 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.742232 kubelet[3220]: W0904 17:35:20.738954 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.742232 kubelet[3220]: E0904 17:35:20.738973 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.742232 kubelet[3220]: E0904 17:35:20.742471 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.742232 kubelet[3220]: W0904 17:35:20.742497 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.742990 kubelet[3220]: E0904 17:35:20.742541 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.772375 kubelet[3220]: E0904 17:35:20.771117 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:20.773405 kubelet[3220]: W0904 17:35:20.771151 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:20.773489 kubelet[3220]: E0904 17:35:20.773432 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:20.836340 containerd[1879]: time="2024-09-04T17:35:20.836203196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7dfd8,Uid:8e845aa6-c231-45f5-9e15-3c1544c4f5ef,Namespace:calico-system,Attempt:0,}"
Sep 4 17:35:20.916910 containerd[1879]: time="2024-09-04T17:35:20.916051938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:35:20.916910 containerd[1879]: time="2024-09-04T17:35:20.916154208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:35:20.916910 containerd[1879]: time="2024-09-04T17:35:20.916177328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:35:20.916910 containerd[1879]: time="2024-09-04T17:35:20.916290102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:35:20.979580 systemd[1]: Started cri-containerd-9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f.scope - libcontainer container 9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f.
Sep 4 17:35:20.993280 systemd[1]: run-containerd-runc-k8s.io-9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f-runc.HcYdGa.mount: Deactivated successfully.
Sep 4 17:35:21.089135 containerd[1879]: time="2024-09-04T17:35:21.088799295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7dfd8,Uid:8e845aa6-c231-45f5-9e15-3c1544c4f5ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f\""
Sep 4 17:35:21.242153 kubelet[3220]: E0904 17:35:21.241672 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31"
Sep 4 17:35:23.261570 kubelet[3220]: E0904 17:35:23.261400 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31"
Sep 4 17:35:23.380371 containerd[1879]: time="2024-09-04T17:35:23.379880465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:23.411603 containerd[1879]: time="2024-09-04T17:35:23.411440526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Sep 4 17:35:23.414822 containerd[1879]: time="2024-09-04T17:35:23.413875006Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:23.421358 containerd[1879]: time="2024-09-04T17:35:23.420551778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:23.421986 containerd[1879]: time="2024-09-04T17:35:23.421923985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.916254006s"
Sep 4 17:35:23.422157 containerd[1879]: time="2024-09-04T17:35:23.422127550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Sep 4 17:35:23.427472 containerd[1879]: time="2024-09-04T17:35:23.427303167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Sep 4 17:35:23.456964 containerd[1879]: time="2024-09-04T17:35:23.456891013Z" level=info msg="CreateContainer within sandbox \"92b7291c51071a84eb442e56307d8fc4ef7aaddb15a0406c9305a399d5bc35b4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 17:35:23.494112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758613195.mount: Deactivated successfully.
Sep 4 17:35:23.506547 containerd[1879]: time="2024-09-04T17:35:23.506444847Z" level=info msg="CreateContainer within sandbox \"92b7291c51071a84eb442e56307d8fc4ef7aaddb15a0406c9305a399d5bc35b4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f1b2849ecf3a196e4c22a210dd0a267f1aef533bfbb1f27e5f5e152959137dc2\""
Sep 4 17:35:23.508027 containerd[1879]: time="2024-09-04T17:35:23.507972258Z" level=info msg="StartContainer for \"f1b2849ecf3a196e4c22a210dd0a267f1aef533bfbb1f27e5f5e152959137dc2\""
Sep 4 17:35:23.630027 systemd[1]: Started cri-containerd-f1b2849ecf3a196e4c22a210dd0a267f1aef533bfbb1f27e5f5e152959137dc2.scope - libcontainer container f1b2849ecf3a196e4c22a210dd0a267f1aef533bfbb1f27e5f5e152959137dc2.
Sep 4 17:35:23.753985 containerd[1879]: time="2024-09-04T17:35:23.753042331Z" level=info msg="StartContainer for \"f1b2849ecf3a196e4c22a210dd0a267f1aef533bfbb1f27e5f5e152959137dc2\" returns successfully"
Sep 4 17:35:24.545136 kubelet[3220]: E0904 17:35:24.545100 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.545136 kubelet[3220]: W0904 17:35:24.545160 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.545136 kubelet[3220]: E0904 17:35:24.545196 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.549671 kubelet[3220]: E0904 17:35:24.549288 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.549671 kubelet[3220]: W0904 17:35:24.549338 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.549671 kubelet[3220]: E0904 17:35:24.549370 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.550557 kubelet[3220]: E0904 17:35:24.550305 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.550557 kubelet[3220]: W0904 17:35:24.550381 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.550557 kubelet[3220]: E0904 17:35:24.550405 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.551588 kubelet[3220]: E0904 17:35:24.550847 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.551588 kubelet[3220]: W0904 17:35:24.550863 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.551588 kubelet[3220]: E0904 17:35:24.550878 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.552091 kubelet[3220]: E0904 17:35:24.551909 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.552091 kubelet[3220]: W0904 17:35:24.551958 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.552091 kubelet[3220]: E0904 17:35:24.551974 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.552477 kubelet[3220]: E0904 17:35:24.552202 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.552477 kubelet[3220]: W0904 17:35:24.552211 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.552477 kubelet[3220]: E0904 17:35:24.552223 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.552767 kubelet[3220]: E0904 17:35:24.552647 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.552767 kubelet[3220]: W0904 17:35:24.552662 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.552767 kubelet[3220]: E0904 17:35:24.552675 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.553061 kubelet[3220]: E0904 17:35:24.552971 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.553061 kubelet[3220]: W0904 17:35:24.552982 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.553061 kubelet[3220]: E0904 17:35:24.552993 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.553479 kubelet[3220]: E0904 17:35:24.553405 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.553479 kubelet[3220]: W0904 17:35:24.553417 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.553479 kubelet[3220]: E0904 17:35:24.553430 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.554649 kubelet[3220]: E0904 17:35:24.554518 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.554649 kubelet[3220]: W0904 17:35:24.554532 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.554649 kubelet[3220]: E0904 17:35:24.554546 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.555436 kubelet[3220]: E0904 17:35:24.555340 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.555436 kubelet[3220]: W0904 17:35:24.555355 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.555436 kubelet[3220]: E0904 17:35:24.555368 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.557207 kubelet[3220]: E0904 17:35:24.556776 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.557207 kubelet[3220]: W0904 17:35:24.556790 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.557207 kubelet[3220]: E0904 17:35:24.556805 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.560155 kubelet[3220]: E0904 17:35:24.558011 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.560155 kubelet[3220]: W0904 17:35:24.558025 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.560155 kubelet[3220]: E0904 17:35:24.558041 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.561047 kubelet[3220]: E0904 17:35:24.560651 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.561047 kubelet[3220]: W0904 17:35:24.560670 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.561047 kubelet[3220]: E0904 17:35:24.560691 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.561047 kubelet[3220]: E0904 17:35:24.560950 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.561047 kubelet[3220]: W0904 17:35:24.560961 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.561047 kubelet[3220]: E0904 17:35:24.560974 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.597688 kubelet[3220]: E0904 17:35:24.597552 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.597688 kubelet[3220]: W0904 17:35:24.597687 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.597917 kubelet[3220]: E0904 17:35:24.597815 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.602469 kubelet[3220]: E0904 17:35:24.602433 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.602469 kubelet[3220]: W0904 17:35:24.602465 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.602702 kubelet[3220]: E0904 17:35:24.602512 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.605754 kubelet[3220]: E0904 17:35:24.603159 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.605754 kubelet[3220]: W0904 17:35:24.603184 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.605754 kubelet[3220]: E0904 17:35:24.603205 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.605754 kubelet[3220]: E0904 17:35:24.603577 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.605754 kubelet[3220]: W0904 17:35:24.603592 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.605754 kubelet[3220]: E0904 17:35:24.603607 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.605754 kubelet[3220]: E0904 17:35:24.603818 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.605754 kubelet[3220]: W0904 17:35:24.603829 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.605754 kubelet[3220]: E0904 17:35:24.603842 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.605754 kubelet[3220]: E0904 17:35:24.604124 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.607539 kubelet[3220]: W0904 17:35:24.604135 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.607539 kubelet[3220]: E0904 17:35:24.604163 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.607539 kubelet[3220]: E0904 17:35:24.604644 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.607539 kubelet[3220]: W0904 17:35:24.604654 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.607539 kubelet[3220]: E0904 17:35:24.604668 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.607539 kubelet[3220]: E0904 17:35:24.604981 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.607539 kubelet[3220]: W0904 17:35:24.604991 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.607539 kubelet[3220]: E0904 17:35:24.605005 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.607539 kubelet[3220]: E0904 17:35:24.605371 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.607539 kubelet[3220]: W0904 17:35:24.605382 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.610126 kubelet[3220]: E0904 17:35:24.605395 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.610126 kubelet[3220]: E0904 17:35:24.605652 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.610126 kubelet[3220]: W0904 17:35:24.605662 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.610126 kubelet[3220]: E0904 17:35:24.605675 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.610126 kubelet[3220]: E0904 17:35:24.607631 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.610126 kubelet[3220]: W0904 17:35:24.607645 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.610126 kubelet[3220]: E0904 17:35:24.607660 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.610126 kubelet[3220]: E0904 17:35:24.608053 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.610126 kubelet[3220]: W0904 17:35:24.608065 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.610126 kubelet[3220]: E0904 17:35:24.608104 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.610747 kubelet[3220]: E0904 17:35:24.609055 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.610747 kubelet[3220]: W0904 17:35:24.609091 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.610747 kubelet[3220]: E0904 17:35:24.609210 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.610747 kubelet[3220]: E0904 17:35:24.609482 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.610747 kubelet[3220]: W0904 17:35:24.609493 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.610747 kubelet[3220]: E0904 17:35:24.609509 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.610747 kubelet[3220]: E0904 17:35:24.609753 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.610747 kubelet[3220]: W0904 17:35:24.609782 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.610747 kubelet[3220]: E0904 17:35:24.609796 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.610747 kubelet[3220]: E0904 17:35:24.610027 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.618046 kubelet[3220]: W0904 17:35:24.610038 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.618046 kubelet[3220]: E0904 17:35:24.610051 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.618046 kubelet[3220]: E0904 17:35:24.610416 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.618046 kubelet[3220]: W0904 17:35:24.610427 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.618046 kubelet[3220]: E0904 17:35:24.610446 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:24.618046 kubelet[3220]: E0904 17:35:24.617298 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:35:24.618046 kubelet[3220]: W0904 17:35:24.617336 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:35:24.618046 kubelet[3220]: E0904 17:35:24.617367 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:35:25.064238 containerd[1879]: time="2024-09-04T17:35:25.064163015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:25.084288 containerd[1879]: time="2024-09-04T17:35:25.082154914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007"
Sep 4 17:35:25.140354 containerd[1879]: time="2024-09-04T17:35:25.140283431Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:25.181359 containerd[1879]: time="2024-09-04T17:35:25.180457910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:25.181775 containerd[1879]: time="2024-09-04T17:35:25.181693753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.754006757s"
Sep 4 17:35:25.181895 containerd[1879]: time="2024-09-04T17:35:25.181880934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\""
Sep 4 17:35:25.209006 containerd[1879]: time="2024-09-04T17:35:25.208944320Z" level=info msg="CreateContainer within sandbox \"9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 4 17:35:25.239615 kubelet[3220]: E0904 17:35:25.239563 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31"
Sep 4 17:35:25.450254 containerd[1879]: time="2024-09-04T17:35:25.450128443Z" level=info msg="CreateContainer within sandbox \"9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7\""
Sep 4 17:35:25.456399 containerd[1879]: time="2024-09-04T17:35:25.455976031Z" level=info msg="StartContainer for \"a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7\""
Sep 4 17:35:25.537881 kubelet[3220]: I0904 17:35:25.536914 3220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 17:35:25.548992 systemd[1]: run-containerd-runc-k8s.io-a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7-runc.mocD6v.mount: Deactivated successfully.
Sep 4 17:35:25.562747 systemd[1]: Started cri-containerd-a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7.scope - libcontainer container a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7.
Sep 4 17:35:25.572894 kubelet[3220]: E0904 17:35:25.572761 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:25.573566 kubelet[3220]: W0904 17:35:25.572965 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:25.573566 kubelet[3220]: E0904 17:35:25.573013 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:25.574747 kubelet[3220]: E0904 17:35:25.574721 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:25.574929 kubelet[3220]: W0904 17:35:25.574745 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:25.574929 kubelet[3220]: E0904 17:35:25.574774 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:25.575632 kubelet[3220]: E0904 17:35:25.575609 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:25.575752 kubelet[3220]: W0904 17:35:25.575727 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:25.575821 kubelet[3220]: E0904 17:35:25.575759 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:35:25.665658 kubelet[3220]: E0904 17:35:25.665557 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:25.665658 kubelet[3220]: W0904 17:35:25.665575 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:25.665658 kubelet[3220]: E0904 17:35:25.665597 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:25.666590 kubelet[3220]: E0904 17:35:25.666572 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:35:25.666590 kubelet[3220]: W0904 17:35:25.666591 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:35:25.666733 kubelet[3220]: E0904 17:35:25.666608 3220 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:35:25.671305 containerd[1879]: time="2024-09-04T17:35:25.669078696Z" level=info msg="StartContainer for \"a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7\" returns successfully" Sep 4 17:35:25.716615 systemd[1]: cri-containerd-a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7.scope: Deactivated successfully. 
Sep 4 17:35:25.952441 containerd[1879]: time="2024-09-04T17:35:25.861875787Z" level=info msg="shim disconnected" id=a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7 namespace=k8s.io Sep 4 17:35:25.952441 containerd[1879]: time="2024-09-04T17:35:25.952441002Z" level=warning msg="cleaning up after shim disconnected" id=a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7 namespace=k8s.io Sep 4 17:35:25.952884 containerd[1879]: time="2024-09-04T17:35:25.952462160Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:35:26.440132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a14e9be724e3abcbbe5de0547232b597749b001aa96861abca17435994e182b7-rootfs.mount: Deactivated successfully. Sep 4 17:35:26.550757 containerd[1879]: time="2024-09-04T17:35:26.550712718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:35:26.604075 kubelet[3220]: I0904 17:35:26.601276 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c457fbdd-2tkrh" podStartSLOduration=4.680412005 podStartE2EDuration="8.601252942s" podCreationTimestamp="2024-09-04 17:35:18 +0000 UTC" firstStartedPulling="2024-09-04 17:35:19.504825666 +0000 UTC m=+23.548927707" lastFinishedPulling="2024-09-04 17:35:23.425666592 +0000 UTC m=+27.469768644" observedRunningTime="2024-09-04 17:35:24.536296533 +0000 UTC m=+28.580398594" watchObservedRunningTime="2024-09-04 17:35:26.601252942 +0000 UTC m=+30.645355004" Sep 4 17:35:27.239694 kubelet[3220]: E0904 17:35:27.239631 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31" Sep 4 17:35:29.239887 kubelet[3220]: E0904 17:35:29.239823 3220 pod_workers.go:1298] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31" Sep 4 17:35:31.244106 kubelet[3220]: E0904 17:35:31.244024 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31" Sep 4 17:35:33.240484 kubelet[3220]: E0904 17:35:33.240416 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31" Sep 4 17:35:33.407234 containerd[1879]: time="2024-09-04T17:35:33.407160321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:33.414472 containerd[1879]: time="2024-09-04T17:35:33.414307589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:35:33.423628 containerd[1879]: time="2024-09-04T17:35:33.423083046Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:33.431534 containerd[1879]: time="2024-09-04T17:35:33.431216975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:33.437405 
containerd[1879]: time="2024-09-04T17:35:33.435695647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 6.884931761s" Sep 4 17:35:33.437405 containerd[1879]: time="2024-09-04T17:35:33.435740730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:35:33.462254 containerd[1879]: time="2024-09-04T17:35:33.462059389Z" level=info msg="CreateContainer within sandbox \"9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:35:33.588371 containerd[1879]: time="2024-09-04T17:35:33.587832371Z" level=info msg="CreateContainer within sandbox \"9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bfeed02d090144b5aeb0100327db02a63b1197fcaeca22817eb852c201c4904e\"" Sep 4 17:35:33.599023 containerd[1879]: time="2024-09-04T17:35:33.595613925Z" level=info msg="StartContainer for \"bfeed02d090144b5aeb0100327db02a63b1197fcaeca22817eb852c201c4904e\"" Sep 4 17:35:33.799611 systemd[1]: Started cri-containerd-bfeed02d090144b5aeb0100327db02a63b1197fcaeca22817eb852c201c4904e.scope - libcontainer container bfeed02d090144b5aeb0100327db02a63b1197fcaeca22817eb852c201c4904e. 
Sep 4 17:35:33.976874 containerd[1879]: time="2024-09-04T17:35:33.976717860Z" level=info msg="StartContainer for \"bfeed02d090144b5aeb0100327db02a63b1197fcaeca22817eb852c201c4904e\" returns successfully"
Sep 4 17:35:35.240474 kubelet[3220]: E0904 17:35:35.239985 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31"
Sep 4 17:35:36.541557 systemd[1]: cri-containerd-bfeed02d090144b5aeb0100327db02a63b1197fcaeca22817eb852c201c4904e.scope: Deactivated successfully.
Sep 4 17:35:36.590366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfeed02d090144b5aeb0100327db02a63b1197fcaeca22817eb852c201c4904e-rootfs.mount: Deactivated successfully.
Sep 4 17:35:36.606736 containerd[1879]: time="2024-09-04T17:35:36.606665049Z" level=info msg="shim disconnected" id=bfeed02d090144b5aeb0100327db02a63b1197fcaeca22817eb852c201c4904e namespace=k8s.io
Sep 4 17:35:36.606736 containerd[1879]: time="2024-09-04T17:35:36.606725337Z" level=warning msg="cleaning up after shim disconnected" id=bfeed02d090144b5aeb0100327db02a63b1197fcaeca22817eb852c201c4904e namespace=k8s.io
Sep 4 17:35:36.606736 containerd[1879]: time="2024-09-04T17:35:36.606737461Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:35:36.636632 kubelet[3220]: I0904 17:35:36.636144 3220 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Sep 4 17:35:36.661500 containerd[1879]: time="2024-09-04T17:35:36.661428426Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:35:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:35:36.688309 kubelet[3220]: I0904 17:35:36.688190 3220 topology_manager.go:215] "Topology Admit Handler" podUID="8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4g6g9"
Sep 4 17:35:36.691554 kubelet[3220]: I0904 17:35:36.690992 3220 topology_manager.go:215] "Topology Admit Handler" podUID="38219aef-26b5-45ca-8f97-2ec892749d8f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t2qdb"
Sep 4 17:35:36.703744 kubelet[3220]: I0904 17:35:36.702410 3220 topology_manager.go:215] "Topology Admit Handler" podUID="00276485-11d4-4be8-a266-bce8f1f7d45a" podNamespace="calico-system" podName="calico-kube-controllers-6b7b5765f6-8d75q"
Sep 4 17:35:36.712548 systemd[1]: Created slice kubepods-burstable-pod8ab8b6bd_cf5e_4eed_958b_c1ff433c2b39.slice - libcontainer container kubepods-burstable-pod8ab8b6bd_cf5e_4eed_958b_c1ff433c2b39.slice.
Sep 4 17:35:36.735156 systemd[1]: Created slice kubepods-burstable-pod38219aef_26b5_45ca_8f97_2ec892749d8f.slice - libcontainer container kubepods-burstable-pod38219aef_26b5_45ca_8f97_2ec892749d8f.slice.
Sep 4 17:35:36.752287 systemd[1]: Created slice kubepods-besteffort-pod00276485_11d4_4be8_a266_bce8f1f7d45a.slice - libcontainer container kubepods-besteffort-pod00276485_11d4_4be8_a266_bce8f1f7d45a.slice.
Sep 4 17:35:36.807074 kubelet[3220]: I0904 17:35:36.806923 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39-config-volume\") pod \"coredns-7db6d8ff4d-4g6g9\" (UID: \"8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39\") " pod="kube-system/coredns-7db6d8ff4d-4g6g9"
Sep 4 17:35:36.807074 kubelet[3220]: I0904 17:35:36.806986 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38219aef-26b5-45ca-8f97-2ec892749d8f-config-volume\") pod \"coredns-7db6d8ff4d-t2qdb\" (UID: \"38219aef-26b5-45ca-8f97-2ec892749d8f\") " pod="kube-system/coredns-7db6d8ff4d-t2qdb"
Sep 4 17:35:36.807074 kubelet[3220]: I0904 17:35:36.807019 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9rkg\" (UniqueName: \"kubernetes.io/projected/8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39-kube-api-access-n9rkg\") pod \"coredns-7db6d8ff4d-4g6g9\" (UID: \"8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39\") " pod="kube-system/coredns-7db6d8ff4d-4g6g9"
Sep 4 17:35:36.807074 kubelet[3220]: I0904 17:35:36.807049 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grp5d\" (UniqueName: \"kubernetes.io/projected/38219aef-26b5-45ca-8f97-2ec892749d8f-kube-api-access-grp5d\") pod \"coredns-7db6d8ff4d-t2qdb\" (UID: \"38219aef-26b5-45ca-8f97-2ec892749d8f\") " pod="kube-system/coredns-7db6d8ff4d-t2qdb"
Sep 4 17:35:36.807074 kubelet[3220]: I0904 17:35:36.807077 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00276485-11d4-4be8-a266-bce8f1f7d45a-tigera-ca-bundle\") pod \"calico-kube-controllers-6b7b5765f6-8d75q\" (UID: \"00276485-11d4-4be8-a266-bce8f1f7d45a\") " pod="calico-system/calico-kube-controllers-6b7b5765f6-8d75q"
Sep 4 17:35:36.807483 kubelet[3220]: I0904 17:35:36.807100 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhhhs\" (UniqueName: \"kubernetes.io/projected/00276485-11d4-4be8-a266-bce8f1f7d45a-kube-api-access-vhhhs\") pod \"calico-kube-controllers-6b7b5765f6-8d75q\" (UID: \"00276485-11d4-4be8-a266-bce8f1f7d45a\") " pod="calico-system/calico-kube-controllers-6b7b5765f6-8d75q"
Sep 4 17:35:37.022771 containerd[1879]: time="2024-09-04T17:35:37.022718367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g6g9,Uid:8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39,Namespace:kube-system,Attempt:0,}"
Sep 4 17:35:37.050375 containerd[1879]: time="2024-09-04T17:35:37.049923342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t2qdb,Uid:38219aef-26b5-45ca-8f97-2ec892749d8f,Namespace:kube-system,Attempt:0,}"
Sep 4 17:35:37.075388 containerd[1879]: time="2024-09-04T17:35:37.063376378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b7b5765f6-8d75q,Uid:00276485-11d4-4be8-a266-bce8f1f7d45a,Namespace:calico-system,Attempt:0,}"
Sep 4 17:35:37.264290 systemd[1]: Created slice kubepods-besteffort-podb4cc4158_2c05_4c12_9faa_641987eb3d31.slice - libcontainer container kubepods-besteffort-podb4cc4158_2c05_4c12_9faa_641987eb3d31.slice.
Sep 4 17:35:37.276352 containerd[1879]: time="2024-09-04T17:35:37.275884130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6kpnd,Uid:b4cc4158-2c05-4c12-9faa-641987eb3d31,Namespace:calico-system,Attempt:0,}"
Sep 4 17:35:37.656819 containerd[1879]: time="2024-09-04T17:35:37.654818945Z" level=error msg="Failed to destroy network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.661367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9-shm.mount: Deactivated successfully.
Sep 4 17:35:37.727255 containerd[1879]: time="2024-09-04T17:35:37.726125270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep 4 17:35:37.751418 containerd[1879]: time="2024-09-04T17:35:37.749267825Z" level=error msg="encountered an error cleaning up failed sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.751418 containerd[1879]: time="2024-09-04T17:35:37.749417882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t2qdb,Uid:38219aef-26b5-45ca-8f97-2ec892749d8f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.782365 containerd[1879]: time="2024-09-04T17:35:37.779103513Z" level=error msg="Failed to destroy network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.783516 containerd[1879]: time="2024-09-04T17:35:37.783070424Z" level=error msg="encountered an error cleaning up failed sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.783516 containerd[1879]: time="2024-09-04T17:35:37.783170356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b7b5765f6-8d75q,Uid:00276485-11d4-4be8-a266-bce8f1f7d45a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.783516 containerd[1879]: time="2024-09-04T17:35:37.783407689Z" level=error msg="Failed to destroy network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.784971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a-shm.mount: Deactivated successfully.
Sep 4 17:35:37.788527 containerd[1879]: time="2024-09-04T17:35:37.786294443Z" level=error msg="encountered an error cleaning up failed sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.788527 containerd[1879]: time="2024-09-04T17:35:37.786381255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g6g9,Uid:8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.788807 containerd[1879]: time="2024-09-04T17:35:37.788764141Z" level=error msg="Failed to destroy network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.790276 containerd[1879]: time="2024-09-04T17:35:37.789810460Z" level=error msg="encountered an error cleaning up failed sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.793685 containerd[1879]: time="2024-09-04T17:35:37.790504388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6kpnd,Uid:b4cc4158-2c05-4c12-9faa-641987eb3d31,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.793260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc-shm.mount: Deactivated successfully.
Sep 4 17:35:37.793982 kubelet[3220]: E0904 17:35:37.792744 3220 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.793982 kubelet[3220]: E0904 17:35:37.792832 3220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6kpnd"
Sep 4 17:35:37.793982 kubelet[3220]: E0904 17:35:37.792865 3220 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6kpnd"
Sep 4 17:35:37.796534 kubelet[3220]: E0904 17:35:37.792927 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6kpnd_calico-system(b4cc4158-2c05-4c12-9faa-641987eb3d31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6kpnd_calico-system(b4cc4158-2c05-4c12-9faa-641987eb3d31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31"
Sep 4 17:35:37.796534 kubelet[3220]: E0904 17:35:37.793196 3220 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.796534 kubelet[3220]: E0904 17:35:37.793234 3220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t2qdb"
Sep 4 17:35:37.796720 kubelet[3220]: E0904 17:35:37.793253 3220 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-t2qdb"
Sep 4 17:35:37.796720 kubelet[3220]: E0904 17:35:37.793290 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-t2qdb_kube-system(38219aef-26b5-45ca-8f97-2ec892749d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-t2qdb_kube-system(38219aef-26b5-45ca-8f97-2ec892749d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t2qdb" podUID="38219aef-26b5-45ca-8f97-2ec892749d8f"
Sep 4 17:35:37.796720 kubelet[3220]: E0904 17:35:37.793360 3220 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.796870 kubelet[3220]: E0904 17:35:37.793390 3220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b7b5765f6-8d75q"
Sep 4 17:35:37.796870 kubelet[3220]: E0904 17:35:37.793409 3220 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b7b5765f6-8d75q"
Sep 4 17:35:37.796870 kubelet[3220]: E0904 17:35:37.793443 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b7b5765f6-8d75q_calico-system(00276485-11d4-4be8-a266-bce8f1f7d45a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b7b5765f6-8d75q_calico-system(00276485-11d4-4be8-a266-bce8f1f7d45a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b7b5765f6-8d75q" podUID="00276485-11d4-4be8-a266-bce8f1f7d45a"
Sep 4 17:35:37.797027 kubelet[3220]: E0904 17:35:37.793478 3220 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:37.797027 kubelet[3220]: E0904 17:35:37.793501 3220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g6g9"
Sep 4 17:35:37.797027 kubelet[3220]: E0904 17:35:37.793523 3220 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4g6g9"
Sep 4 17:35:37.797158 kubelet[3220]: E0904 17:35:37.793552 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4g6g9_kube-system(8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4g6g9_kube-system(8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4g6g9" podUID="8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39"
Sep 4 17:35:37.801584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644-shm.mount: Deactivated successfully.
Sep 4 17:35:38.715303 kubelet[3220]: I0904 17:35:38.715255 3220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:35:38.717874 containerd[1879]: time="2024-09-04T17:35:38.717817622Z" level=info msg="StopPodSandbox for \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\""
Sep 4 17:35:38.726814 containerd[1879]: time="2024-09-04T17:35:38.726758976Z" level=info msg="Ensure that sandbox ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a in task-service has been cleanup successfully"
Sep 4 17:35:38.734122 kubelet[3220]: I0904 17:35:38.734076 3220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:38.742784 containerd[1879]: time="2024-09-04T17:35:38.741694958Z" level=info msg="StopPodSandbox for \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\""
Sep 4 17:35:38.743464 containerd[1879]: time="2024-09-04T17:35:38.743415856Z" level=info msg="Ensure that sandbox d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9 in task-service has been cleanup successfully"
Sep 4 17:35:38.747878 kubelet[3220]: I0904 17:35:38.745417 3220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:38.748980 containerd[1879]: time="2024-09-04T17:35:38.748367166Z" level=info msg="StopPodSandbox for \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\""
Sep 4 17:35:38.748980 containerd[1879]: time="2024-09-04T17:35:38.748625922Z" level=info msg="Ensure that sandbox b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc in task-service has been cleanup successfully"
Sep 4 17:35:38.766567 kubelet[3220]: I0904 17:35:38.766524 3220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:38.768727 containerd[1879]: time="2024-09-04T17:35:38.768633672Z" level=info msg="StopPodSandbox for \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\""
Sep 4 17:35:38.769374 containerd[1879]: time="2024-09-04T17:35:38.768877157Z" level=info msg="Ensure that sandbox 4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644 in task-service has been cleanup successfully"
Sep 4 17:35:38.954157 containerd[1879]: time="2024-09-04T17:35:38.954082799Z" level=error msg="StopPodSandbox for \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\" failed" error="failed to destroy network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:38.955582 kubelet[3220]: E0904 17:35:38.954953 3220 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:38.960472 kubelet[3220]: E0904 17:35:38.955366 3220 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"}
Sep 4 17:35:38.960472 kubelet[3220]: E0904 17:35:38.958695 3220 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38219aef-26b5-45ca-8f97-2ec892749d8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:35:38.960472 kubelet[3220]: E0904 17:35:38.959881 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38219aef-26b5-45ca-8f97-2ec892749d8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-t2qdb" podUID="38219aef-26b5-45ca-8f97-2ec892749d8f"
Sep 4 17:35:38.980016 containerd[1879]: time="2024-09-04T17:35:38.979924840Z" level=error msg="StopPodSandbox for \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\" failed" error="failed to destroy network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:38.980646 kubelet[3220]: E0904 17:35:38.980209 3220 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:35:38.980646 kubelet[3220]: E0904 17:35:38.980279 3220 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"}
Sep 4 17:35:38.980646 kubelet[3220]: E0904 17:35:38.980353 3220 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00276485-11d4-4be8-a266-bce8f1f7d45a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:35:38.980646 kubelet[3220]: E0904 17:35:38.980389 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00276485-11d4-4be8-a266-bce8f1f7d45a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b7b5765f6-8d75q" podUID="00276485-11d4-4be8-a266-bce8f1f7d45a"
Sep 4 17:35:38.983282 containerd[1879]: time="2024-09-04T17:35:38.983223602Z" level=error msg="StopPodSandbox for \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\" failed" error="failed to destroy network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:38.983928 containerd[1879]: time="2024-09-04T17:35:38.983759116Z" level=error msg="StopPodSandbox for \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\" failed" error="failed to destroy network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:35:38.984431 kubelet[3220]: E0904 17:35:38.984032 3220 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:38.984431 kubelet[3220]: E0904 17:35:38.984032 3220 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:38.984431 kubelet[3220]: E0904 17:35:38.984088 3220 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"}
Sep 4 17:35:38.984431 kubelet[3220]: E0904 17:35:38.984136 3220 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:35:38.984431 kubelet[3220]: E0904 17:35:38.984088 3220 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"}
Sep 4 17:35:38.984817 kubelet[3220]: E0904 17:35:38.984170 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4g6g9" podUID="8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39"
Sep 4 17:35:38.984817 kubelet[3220]: E0904 17:35:38.984338 3220 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4cc4158-2c05-4c12-9faa-641987eb3d31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:35:38.984817 kubelet[3220]: E0904 17:35:38.984373 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4cc4158-2c05-4c12-9faa-641987eb3d31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6kpnd" podUID="b4cc4158-2c05-4c12-9faa-641987eb3d31"
Sep 4 17:35:42.697243 kubelet[3220]: I0904 17:35:42.696379 3220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 17:35:47.855011 systemd[1]: Started sshd@7-172.31.21.246:22-139.178.68.195:40408.service - OpenSSH per-connection server daemon (139.178.68.195:40408).
Sep 4 17:35:48.158112 sshd[4215]: Accepted publickey for core from 139.178.68.195 port 40408 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:35:48.162800 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:35:48.167570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444088038.mount: Deactivated successfully.
Sep 4 17:35:48.198378 systemd-logind[1856]: New session 8 of user core.
Sep 4 17:35:48.208586 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:35:48.284169 containerd[1879]: time="2024-09-04T17:35:48.275463382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:35:48.295054 containerd[1879]: time="2024-09-04T17:35:48.294949630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:48.333526 containerd[1879]: time="2024-09-04T17:35:48.333192902Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:48.339811 containerd[1879]: time="2024-09-04T17:35:48.339131206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:35:48.350539 containerd[1879]: time="2024-09-04T17:35:48.350476653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 10.608788361s" Sep 4 17:35:48.351221 containerd[1879]: time="2024-09-04T17:35:48.350959234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:35:48.529531 containerd[1879]: time="2024-09-04T17:35:48.529463511Z" level=info msg="CreateContainer within sandbox \"9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:35:48.659910 sshd[4215]: pam_unix(sshd:session): session closed for user core Sep 4 17:35:48.665050 
systemd[1]: sshd@7-172.31.21.246:22-139.178.68.195:40408.service: Deactivated successfully. Sep 4 17:35:48.668205 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:35:48.669355 systemd-logind[1856]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:35:48.686537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617239944.mount: Deactivated successfully. Sep 4 17:35:48.689557 systemd-logind[1856]: Removed session 8. Sep 4 17:35:48.722956 containerd[1879]: time="2024-09-04T17:35:48.722904746Z" level=info msg="CreateContainer within sandbox \"9590bd583a545eb1a8999d003ffb6058fa05f1d2a4f52acbe66f586bf000f28f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a0556935f690acac12f1031aa2eb9fbe941acaa6c215842d3a54acdd5b29461d\"" Sep 4 17:35:48.729581 containerd[1879]: time="2024-09-04T17:35:48.729539083Z" level=info msg="StartContainer for \"a0556935f690acac12f1031aa2eb9fbe941acaa6c215842d3a54acdd5b29461d\"" Sep 4 17:35:49.029629 systemd[1]: Started cri-containerd-a0556935f690acac12f1031aa2eb9fbe941acaa6c215842d3a54acdd5b29461d.scope - libcontainer container a0556935f690acac12f1031aa2eb9fbe941acaa6c215842d3a54acdd5b29461d. 
Sep 4 17:35:49.098795 containerd[1879]: time="2024-09-04T17:35:49.098735238Z" level=info msg="StartContainer for \"a0556935f690acac12f1031aa2eb9fbe941acaa6c215842d3a54acdd5b29461d\" returns successfully" Sep 4 17:35:49.253104 containerd[1879]: time="2024-09-04T17:35:49.248676043Z" level=info msg="StopPodSandbox for \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\"" Sep 4 17:35:49.426131 containerd[1879]: time="2024-09-04T17:35:49.424520236Z" level=error msg="StopPodSandbox for \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\" failed" error="failed to destroy network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:35:49.428286 kubelet[3220]: E0904 17:35:49.428026 3220 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Sep 4 17:35:49.428286 kubelet[3220]: E0904 17:35:49.428099 3220 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"} Sep 4 17:35:49.428286 kubelet[3220]: E0904 17:35:49.428149 3220 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00276485-11d4-4be8-a266-bce8f1f7d45a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:35:49.428286 kubelet[3220]: E0904 17:35:49.428196 3220 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00276485-11d4-4be8-a266-bce8f1f7d45a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b7b5765f6-8d75q" podUID="00276485-11d4-4be8-a266-bce8f1f7d45a" Sep 4 17:35:49.494486 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:35:49.495505 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Sep 4 17:35:51.246836 containerd[1879]: time="2024-09-04T17:35:51.246784513Z" level=info msg="StopPodSandbox for \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\"" Sep 4 17:35:51.458623 kubelet[3220]: I0904 17:35:51.456525 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7dfd8" podStartSLOduration=6.140366655 podStartE2EDuration="33.435798516s" podCreationTimestamp="2024-09-04 17:35:18 +0000 UTC" firstStartedPulling="2024-09-04 17:35:21.093109069 +0000 UTC m=+25.137211111" lastFinishedPulling="2024-09-04 17:35:48.388540923 +0000 UTC m=+52.432642972" observedRunningTime="2024-09-04 17:35:49.944076982 +0000 UTC m=+53.988179043" watchObservedRunningTime="2024-09-04 17:35:51.435798516 +0000 UTC m=+55.479900580" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.432 [INFO][4348] k8s.go 608: Cleaning up netns ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.433 [INFO][4348] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" iface="eth0" netns="/var/run/netns/cni-385b5f5c-9180-e376-c332-cd4c71e7f004" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.433 [INFO][4348] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" iface="eth0" netns="/var/run/netns/cni-385b5f5c-9180-e376-c332-cd4c71e7f004" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.435 [INFO][4348] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" iface="eth0" netns="/var/run/netns/cni-385b5f5c-9180-e376-c332-cd4c71e7f004" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.436 [INFO][4348] k8s.go 615: Releasing IP address(es) ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.436 [INFO][4348] utils.go 188: Calico CNI releasing IP address ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.849 [INFO][4379] ipam_plugin.go 417: Releasing address using handleID ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" HandleID="k8s-pod-network.b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.851 [INFO][4379] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.852 [INFO][4379] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.876 [WARNING][4379] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" HandleID="k8s-pod-network.b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.876 [INFO][4379] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" HandleID="k8s-pod-network.b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.879 [INFO][4379] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:35:51.889349 containerd[1879]: 2024-09-04 17:35:51.883 [INFO][4348] k8s.go 621: Teardown processing complete. ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Sep 4 17:35:51.892966 containerd[1879]: time="2024-09-04T17:35:51.892551371Z" level=info msg="TearDown network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\" successfully" Sep 4 17:35:51.892966 containerd[1879]: time="2024-09-04T17:35:51.892595333Z" level=info msg="StopPodSandbox for \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\" returns successfully" Sep 4 17:35:51.894658 systemd[1]: run-netns-cni\x2d385b5f5c\x2d9180\x2de376\x2dc332\x2dcd4c71e7f004.mount: Deactivated successfully. Sep 4 17:35:51.900651 containerd[1879]: time="2024-09-04T17:35:51.899731497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g6g9,Uid:8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39,Namespace:kube-system,Attempt:1,}" Sep 4 17:35:52.406580 (udev-worker)[4293]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:35:52.450000 systemd-networkd[1721]: cali60b7307b239: Link UP Sep 4 17:35:52.450512 systemd-networkd[1721]: cali60b7307b239: Gained carrier Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.054 [INFO][4433] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.075 [INFO][4433] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0 coredns-7db6d8ff4d- kube-system 8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39 738 0 2024-09-04 17:35:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-246 coredns-7db6d8ff4d-4g6g9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali60b7307b239 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g6g9" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.075 [INFO][4433] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g6g9" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.235 [INFO][4443] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" HandleID="k8s-pod-network.8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.261 [INFO][4443] ipam_plugin.go 
270: Auto assigning IP ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" HandleID="k8s-pod-network.8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035edb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-246", "pod":"coredns-7db6d8ff4d-4g6g9", "timestamp":"2024-09-04 17:35:52.235489558 +0000 UTC"}, Hostname:"ip-172-31-21-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.262 [INFO][4443] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.262 [INFO][4443] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.262 [INFO][4443] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-246' Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.266 [INFO][4443] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" host="ip-172-31-21-246" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.292 [INFO][4443] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-246" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.318 [INFO][4443] ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.327 [INFO][4443] ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.334 [INFO][4443] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.335 [INFO][4443] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" host="ip-172-31-21-246" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.337 [INFO][4443] ipam.go 1685: Creating new handle: k8s-pod-network.8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.354 [INFO][4443] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" host="ip-172-31-21-246" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.368 [INFO][4443] ipam.go 1216: Successfully claimed IPs: [192.168.77.1/26] block=192.168.77.0/26 
handle="k8s-pod-network.8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" host="ip-172-31-21-246" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.369 [INFO][4443] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.1/26] handle="k8s-pod-network.8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" host="ip-172-31-21-246" Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.369 [INFO][4443] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:35:52.508442 containerd[1879]: 2024-09-04 17:35:52.369 [INFO][4443] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.1/26] IPv6=[] ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" HandleID="k8s-pod-network.8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:52.511979 containerd[1879]: 2024-09-04 17:35:52.377 [INFO][4433] k8s.go 386: Populated endpoint ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g6g9" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"", Pod:"coredns-7db6d8ff4d-4g6g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60b7307b239", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:35:52.511979 containerd[1879]: 2024-09-04 17:35:52.378 [INFO][4433] k8s.go 387: Calico CNI using IPs: [192.168.77.1/32] ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g6g9" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:52.511979 containerd[1879]: 2024-09-04 17:35:52.378 [INFO][4433] dataplane_linux.go 68: Setting the host side veth name to cali60b7307b239 ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g6g9" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:52.511979 containerd[1879]: 2024-09-04 17:35:52.417 [INFO][4433] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g6g9" 
WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:52.511979 containerd[1879]: 2024-09-04 17:35:52.425 [INFO][4433] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g6g9" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa", Pod:"coredns-7db6d8ff4d-4g6g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60b7307b239", MAC:"4e:fe:f7:73:af:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:35:52.511979 containerd[1879]: 2024-09-04 17:35:52.500 [INFO][4433] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4g6g9" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0" Sep 4 17:35:52.670996 containerd[1879]: time="2024-09-04T17:35:52.666390642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:35:52.670996 containerd[1879]: time="2024-09-04T17:35:52.670282957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:35:52.670996 containerd[1879]: time="2024-09-04T17:35:52.670332395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:52.672175 containerd[1879]: time="2024-09-04T17:35:52.670528528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:52.745924 systemd[1]: Started cri-containerd-8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa.scope - libcontainer container 8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa. 
Sep 4 17:35:52.875059 containerd[1879]: time="2024-09-04T17:35:52.874948210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4g6g9,Uid:8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39,Namespace:kube-system,Attempt:1,} returns sandbox id \"8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa\"" Sep 4 17:35:52.920266 containerd[1879]: time="2024-09-04T17:35:52.919131493Z" level=info msg="CreateContainer within sandbox \"8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:35:52.969438 containerd[1879]: time="2024-09-04T17:35:52.969243174Z" level=info msg="CreateContainer within sandbox \"8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e86fea6269aebaca3f53853571976f52d0f9e3eefa38df4ce650e99a99bdddb9\"" Sep 4 17:35:52.971288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388619139.mount: Deactivated successfully. Sep 4 17:35:52.978395 containerd[1879]: time="2024-09-04T17:35:52.977544757Z" level=info msg="StartContainer for \"e86fea6269aebaca3f53853571976f52d0f9e3eefa38df4ce650e99a99bdddb9\"" Sep 4 17:35:53.105698 systemd[1]: Started cri-containerd-e86fea6269aebaca3f53853571976f52d0f9e3eefa38df4ce650e99a99bdddb9.scope - libcontainer container e86fea6269aebaca3f53853571976f52d0f9e3eefa38df4ce650e99a99bdddb9. 
Sep 4 17:35:53.201183 containerd[1879]: time="2024-09-04T17:35:53.200513292Z" level=info msg="StartContainer for \"e86fea6269aebaca3f53853571976f52d0f9e3eefa38df4ce650e99a99bdddb9\" returns successfully" Sep 4 17:35:53.242767 containerd[1879]: time="2024-09-04T17:35:53.242646595Z" level=info msg="StopPodSandbox for \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\"" Sep 4 17:35:53.499454 kernel: bpftool[4598]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.461 [INFO][4578] k8s.go 608: Cleaning up netns ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.462 [INFO][4578] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" iface="eth0" netns="/var/run/netns/cni-0bd194cc-4836-0f7e-4fa6-fb85d12fb66e" Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.463 [INFO][4578] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" iface="eth0" netns="/var/run/netns/cni-0bd194cc-4836-0f7e-4fa6-fb85d12fb66e" Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.463 [INFO][4578] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" iface="eth0" netns="/var/run/netns/cni-0bd194cc-4836-0f7e-4fa6-fb85d12fb66e" Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.463 [INFO][4578] k8s.go 615: Releasing IP address(es) ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.463 [INFO][4578] utils.go 188: Calico CNI releasing IP address ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.559 [INFO][4593] ipam_plugin.go 417: Releasing address using handleID ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" HandleID="k8s-pod-network.4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.561 [INFO][4593] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.562 [INFO][4593] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.577 [WARNING][4593] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" HandleID="k8s-pod-network.4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.577 [INFO][4593] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" HandleID="k8s-pod-network.4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.581 [INFO][4593] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:35:53.597736 containerd[1879]: 2024-09-04 17:35:53.589 [INFO][4578] k8s.go 621: Teardown processing complete. ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Sep 4 17:35:53.599380 containerd[1879]: time="2024-09-04T17:35:53.598201462Z" level=info msg="TearDown network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\" successfully" Sep 4 17:35:53.599380 containerd[1879]: time="2024-09-04T17:35:53.598415345Z" level=info msg="StopPodSandbox for \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\" returns successfully" Sep 4 17:35:53.601360 containerd[1879]: time="2024-09-04T17:35:53.601293894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6kpnd,Uid:b4cc4158-2c05-4c12-9faa-641987eb3d31,Namespace:calico-system,Attempt:1,}" Sep 4 17:35:53.710742 systemd[1]: Started sshd@8-172.31.21.246:22-139.178.68.195:40416.service - OpenSSH per-connection server daemon (139.178.68.195:40416). Sep 4 17:35:53.966044 systemd[1]: run-netns-cni\x2d0bd194cc\x2d4836\x2d0f7e\x2d4fa6\x2dfb85d12fb66e.mount: Deactivated successfully. 
Sep 4 17:35:53.970801 sshd[4616]: Accepted publickey for core from 139.178.68.195 port 40416 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:35:53.975792 sshd[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:35:53.993660 systemd-logind[1856]: New session 9 of user core. Sep 4 17:35:53.998610 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:35:54.113676 (udev-worker)[4294]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:35:54.115451 systemd-networkd[1721]: calif97c0980444: Link UP Sep 4 17:35:54.119892 systemd-networkd[1721]: calif97c0980444: Gained carrier Sep 4 17:35:54.155090 kubelet[3220]: I0904 17:35:54.155003 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4g6g9" podStartSLOduration=44.154976772 podStartE2EDuration="44.154976772s" podCreationTimestamp="2024-09-04 17:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:35:54.062871256 +0000 UTC m=+58.106973319" watchObservedRunningTime="2024-09-04 17:35:54.154976772 +0000 UTC m=+58.199078833" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:53.721 [INFO][4604] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0 csi-node-driver- calico-system b4cc4158-2c05-4c12-9faa-641987eb3d31 756 0 2024-09-04 17:35:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-21-246 csi-node-driver-6kpnd eth0 default [] [] [kns.calico-system ksa.calico-system.default] calif97c0980444 [] []}} 
ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Namespace="calico-system" Pod="csi-node-driver-6kpnd" WorkloadEndpoint="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:53.721 [INFO][4604] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Namespace="calico-system" Pod="csi-node-driver-6kpnd" WorkloadEndpoint="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:53.877 [INFO][4617] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" HandleID="k8s-pod-network.6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:53.905 [INFO][4617] ipam_plugin.go 270: Auto assigning IP ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" HandleID="k8s-pod-network.6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005b8e10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-246", "pod":"csi-node-driver-6kpnd", "timestamp":"2024-09-04 17:35:53.877485004 +0000 UTC"}, Hostname:"ip-172-31-21-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:53.906 [INFO][4617] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:53.906 [INFO][4617] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:53.906 [INFO][4617] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-246' Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:53.924 [INFO][4617] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" host="ip-172-31-21-246" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:53.937 [INFO][4617] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-246" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.022 [INFO][4617] ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.055 [INFO][4617] ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.059 [INFO][4617] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.060 [INFO][4617] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" host="ip-172-31-21-246" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.068 [INFO][4617] ipam.go 1685: Creating new handle: k8s-pod-network.6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.079 [INFO][4617] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" host="ip-172-31-21-246" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.104 [INFO][4617] ipam.go 1216: Successfully 
claimed IPs: [192.168.77.2/26] block=192.168.77.0/26 handle="k8s-pod-network.6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" host="ip-172-31-21-246" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.105 [INFO][4617] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.2/26] handle="k8s-pod-network.6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" host="ip-172-31-21-246" Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.105 [INFO][4617] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:35:54.160285 containerd[1879]: 2024-09-04 17:35:54.105 [INFO][4617] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.2/26] IPv6=[] ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" HandleID="k8s-pod-network.6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:54.163185 containerd[1879]: 2024-09-04 17:35:54.109 [INFO][4604] k8s.go 386: Populated endpoint ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Namespace="calico-system" Pod="csi-node-driver-6kpnd" WorkloadEndpoint="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4cc4158-2c05-4c12-9faa-641987eb3d31", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"", Pod:"csi-node-driver-6kpnd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif97c0980444", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:35:54.163185 containerd[1879]: 2024-09-04 17:35:54.109 [INFO][4604] k8s.go 387: Calico CNI using IPs: [192.168.77.2/32] ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Namespace="calico-system" Pod="csi-node-driver-6kpnd" WorkloadEndpoint="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:54.163185 containerd[1879]: 2024-09-04 17:35:54.109 [INFO][4604] dataplane_linux.go 68: Setting the host side veth name to calif97c0980444 ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Namespace="calico-system" Pod="csi-node-driver-6kpnd" WorkloadEndpoint="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:54.163185 containerd[1879]: 2024-09-04 17:35:54.122 [INFO][4604] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Namespace="calico-system" Pod="csi-node-driver-6kpnd" WorkloadEndpoint="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:54.163185 containerd[1879]: 2024-09-04 17:35:54.122 [INFO][4604] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Namespace="calico-system" Pod="csi-node-driver-6kpnd" WorkloadEndpoint="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4cc4158-2c05-4c12-9faa-641987eb3d31", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f", Pod:"csi-node-driver-6kpnd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif97c0980444", MAC:"2e:2d:04:1a:f4:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:35:54.163185 containerd[1879]: 2024-09-04 17:35:54.145 [INFO][4604] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f" Namespace="calico-system" Pod="csi-node-driver-6kpnd" 
WorkloadEndpoint="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0" Sep 4 17:35:54.252493 systemd-networkd[1721]: cali60b7307b239: Gained IPv6LL Sep 4 17:35:54.267247 containerd[1879]: time="2024-09-04T17:35:54.267103808Z" level=info msg="StopPodSandbox for \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\"" Sep 4 17:35:54.356773 containerd[1879]: time="2024-09-04T17:35:54.355220053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:35:54.356773 containerd[1879]: time="2024-09-04T17:35:54.356443206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:35:54.356773 containerd[1879]: time="2024-09-04T17:35:54.356487188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:54.356773 containerd[1879]: time="2024-09-04T17:35:54.356648535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:54.449707 systemd[1]: Started cri-containerd-6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f.scope - libcontainer container 6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f. Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.545 [INFO][4679] k8s.go 608: Cleaning up netns ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.546 [INFO][4679] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" iface="eth0" netns="/var/run/netns/cni-581ee538-07c5-91ab-cc02-454f3a07870d" Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.547 [INFO][4679] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" iface="eth0" netns="/var/run/netns/cni-581ee538-07c5-91ab-cc02-454f3a07870d" Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.549 [INFO][4679] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" iface="eth0" netns="/var/run/netns/cni-581ee538-07c5-91ab-cc02-454f3a07870d" Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.550 [INFO][4679] k8s.go 615: Releasing IP address(es) ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.552 [INFO][4679] utils.go 188: Calico CNI releasing IP address ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.636 [INFO][4703] ipam_plugin.go 417: Releasing address using handleID ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" HandleID="k8s-pod-network.d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.643 [INFO][4703] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.643 [INFO][4703] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.672 [WARNING][4703] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" HandleID="k8s-pod-network.d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.672 [INFO][4703] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" HandleID="k8s-pod-network.d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.684 [INFO][4703] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:35:54.693219 containerd[1879]: 2024-09-04 17:35:54.687 [INFO][4679] k8s.go 621: Teardown processing complete. ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Sep 4 17:35:54.700572 containerd[1879]: time="2024-09-04T17:35:54.697528319Z" level=info msg="TearDown network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\" successfully" Sep 4 17:35:54.700572 containerd[1879]: time="2024-09-04T17:35:54.697810995Z" level=info msg="StopPodSandbox for \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\" returns successfully" Sep 4 17:35:54.705248 containerd[1879]: time="2024-09-04T17:35:54.704699714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t2qdb,Uid:38219aef-26b5-45ca-8f97-2ec892749d8f,Namespace:kube-system,Attempt:1,}" Sep 4 17:35:54.710503 systemd[1]: run-netns-cni\x2d581ee538\x2d07c5\x2d91ab\x2dcc02\x2d454f3a07870d.mount: Deactivated successfully. Sep 4 17:35:54.726810 sshd[4616]: pam_unix(sshd:session): session closed for user core Sep 4 17:35:54.750212 systemd[1]: sshd@8-172.31.21.246:22-139.178.68.195:40416.service: Deactivated successfully. 
Sep 4 17:35:54.760990 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:35:54.772441 systemd-logind[1856]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:35:54.777306 systemd-logind[1856]: Removed session 9. Sep 4 17:35:54.890828 containerd[1879]: time="2024-09-04T17:35:54.890764289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6kpnd,Uid:b4cc4158-2c05-4c12-9faa-641987eb3d31,Namespace:calico-system,Attempt:1,} returns sandbox id \"6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f\"" Sep 4 17:35:54.900748 containerd[1879]: time="2024-09-04T17:35:54.900606054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:35:55.202764 systemd-networkd[1721]: califccaa39c64b: Link UP Sep 4 17:35:55.206639 systemd-networkd[1721]: califccaa39c64b: Gained carrier Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:54.857 [INFO][4710] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0 coredns-7db6d8ff4d- kube-system 38219aef-26b5-45ca-8f97-2ec892749d8f 771 0 2024-09-04 17:35:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-246 coredns-7db6d8ff4d-t2qdb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califccaa39c64b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t2qdb" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:54.857 [INFO][4710] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-t2qdb" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:54.960 [INFO][4732] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" HandleID="k8s-pod-network.9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:54.994 [INFO][4732] ipam_plugin.go 270: Auto assigning IP ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" HandleID="k8s-pod-network.9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038ada0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-246", "pod":"coredns-7db6d8ff4d-t2qdb", "timestamp":"2024-09-04 17:35:54.960436694 +0000 UTC"}, Hostname:"ip-172-31-21-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:54.994 [INFO][4732] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:54.994 [INFO][4732] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:54.999 [INFO][4732] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-246' Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.070 [INFO][4732] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" host="ip-172-31-21-246" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.101 [INFO][4732] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-246" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.134 [INFO][4732] ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.139 [INFO][4732] ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.145 [INFO][4732] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.145 [INFO][4732] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" host="ip-172-31-21-246" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.151 [INFO][4732] ipam.go 1685: Creating new handle: k8s-pod-network.9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30 Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.169 [INFO][4732] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" host="ip-172-31-21-246" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.183 [INFO][4732] ipam.go 1216: Successfully claimed IPs: [192.168.77.3/26] block=192.168.77.0/26 
handle="k8s-pod-network.9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" host="ip-172-31-21-246" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.184 [INFO][4732] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.3/26] handle="k8s-pod-network.9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" host="ip-172-31-21-246" Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.184 [INFO][4732] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:35:55.268412 containerd[1879]: 2024-09-04 17:35:55.184 [INFO][4732] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.3/26] IPv6=[] ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" HandleID="k8s-pod-network.9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:55.270477 containerd[1879]: 2024-09-04 17:35:55.191 [INFO][4710] k8s.go 386: Populated endpoint ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t2qdb" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"38219aef-26b5-45ca-8f97-2ec892749d8f", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"", Pod:"coredns-7db6d8ff4d-t2qdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califccaa39c64b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:35:55.270477 containerd[1879]: 2024-09-04 17:35:55.192 [INFO][4710] k8s.go 387: Calico CNI using IPs: [192.168.77.3/32] ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t2qdb" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:55.270477 containerd[1879]: 2024-09-04 17:35:55.192 [INFO][4710] dataplane_linux.go 68: Setting the host side veth name to califccaa39c64b ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t2qdb" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:55.270477 containerd[1879]: 2024-09-04 17:35:55.206 [INFO][4710] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t2qdb" 
WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:55.270477 containerd[1879]: 2024-09-04 17:35:55.212 [INFO][4710] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t2qdb" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"38219aef-26b5-45ca-8f97-2ec892749d8f", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30", Pod:"coredns-7db6d8ff4d-t2qdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califccaa39c64b", MAC:"76:12:16:78:28:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:35:55.270477 containerd[1879]: 2024-09-04 17:35:55.261 [INFO][4710] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-t2qdb" WorkloadEndpoint="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0" Sep 4 17:35:55.321710 containerd[1879]: time="2024-09-04T17:35:55.321407019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:35:55.321710 containerd[1879]: time="2024-09-04T17:35:55.321497782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:35:55.321710 containerd[1879]: time="2024-09-04T17:35:55.321535343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:55.322099 containerd[1879]: time="2024-09-04T17:35:55.321944415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:35:55.385551 systemd[1]: Started cri-containerd-9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30.scope - libcontainer container 9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30. 
Sep 4 17:35:55.472715 containerd[1879]: time="2024-09-04T17:35:55.472523197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t2qdb,Uid:38219aef-26b5-45ca-8f97-2ec892749d8f,Namespace:kube-system,Attempt:1,} returns sandbox id \"9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30\"" Sep 4 17:35:55.482902 containerd[1879]: time="2024-09-04T17:35:55.482841930Z" level=info msg="CreateContainer within sandbox \"9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:35:55.602471 systemd-networkd[1721]: vxlan.calico: Link UP Sep 4 17:35:55.602482 systemd-networkd[1721]: vxlan.calico: Gained carrier Sep 4 17:35:55.607930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278784613.mount: Deactivated successfully. Sep 4 17:35:55.637776 containerd[1879]: time="2024-09-04T17:35:55.637731869Z" level=info msg="CreateContainer within sandbox \"9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47210fcd4365b9732bd4f091a900adb19f17d5b5b33f65057ef0569986e75e85\"" Sep 4 17:35:55.639522 containerd[1879]: time="2024-09-04T17:35:55.639485699Z" level=info msg="StartContainer for \"47210fcd4365b9732bd4f091a900adb19f17d5b5b33f65057ef0569986e75e85\"" Sep 4 17:35:55.788207 systemd[1]: Started cri-containerd-47210fcd4365b9732bd4f091a900adb19f17d5b5b33f65057ef0569986e75e85.scope - libcontainer container 47210fcd4365b9732bd4f091a900adb19f17d5b5b33f65057ef0569986e75e85. 
Sep 4 17:35:55.925776 containerd[1879]: time="2024-09-04T17:35:55.925502823Z" level=info msg="StartContainer for \"47210fcd4365b9732bd4f091a900adb19f17d5b5b33f65057ef0569986e75e85\" returns successfully"
Sep 4 17:35:55.977456 systemd-networkd[1721]: calif97c0980444: Gained IPv6LL
Sep 4 17:35:56.361800 containerd[1879]: time="2024-09-04T17:35:56.361747921Z" level=info msg="StopPodSandbox for \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\""
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.499 [WARNING][4886] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"38219aef-26b5-45ca-8f97-2ec892749d8f", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30", Pod:"coredns-7db6d8ff4d-t2qdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califccaa39c64b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.499 [INFO][4886] k8s.go 608: Cleaning up netns ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.501 [INFO][4886] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" iface="eth0" netns=""
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.501 [INFO][4886] k8s.go 615: Releasing IP address(es) ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.501 [INFO][4886] utils.go 188: Calico CNI releasing IP address ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.577 [INFO][4896] ipam_plugin.go 417: Releasing address using handleID ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" HandleID="k8s-pod-network.d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0"
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.578 [INFO][4896] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.578 [INFO][4896] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.588 [WARNING][4896] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" HandleID="k8s-pod-network.d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0"
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.588 [INFO][4896] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" HandleID="k8s-pod-network.d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0"
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.592 [INFO][4896] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:35:56.603417 containerd[1879]: 2024-09-04 17:35:56.596 [INFO][4886] k8s.go 621: Teardown processing complete. ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:56.605578 containerd[1879]: time="2024-09-04T17:35:56.603478156Z" level=info msg="TearDown network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\" successfully"
Sep 4 17:35:56.605578 containerd[1879]: time="2024-09-04T17:35:56.603509780Z" level=info msg="StopPodSandbox for \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\" returns successfully"
Sep 4 17:35:56.635001 containerd[1879]: time="2024-09-04T17:35:56.634844285Z" level=info msg="RemovePodSandbox for \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\""
Sep 4 17:35:56.635001 containerd[1879]: time="2024-09-04T17:35:56.634919068Z" level=info msg="Forcibly stopping sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\""
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.814 [WARNING][4914] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"38219aef-26b5-45ca-8f97-2ec892749d8f", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"9ad938b05024bd9accdd3791eada04e8740a83efda7a17cf4f0fc74074aaef30", Pod:"coredns-7db6d8ff4d-t2qdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califccaa39c64b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.817 [INFO][4914] k8s.go 608: Cleaning up netns ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.817 [INFO][4914] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" iface="eth0" netns=""
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.818 [INFO][4914] k8s.go 615: Releasing IP address(es) ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.818 [INFO][4914] utils.go 188: Calico CNI releasing IP address ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.909 [INFO][4920] ipam_plugin.go 417: Releasing address using handleID ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" HandleID="k8s-pod-network.d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0"
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.911 [INFO][4920] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.911 [INFO][4920] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.927 [WARNING][4920] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" HandleID="k8s-pod-network.d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0"
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.927 [INFO][4920] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" HandleID="k8s-pod-network.d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--t2qdb-eth0"
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.931 [INFO][4920] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:35:56.954798 containerd[1879]: 2024-09-04 17:35:56.941 [INFO][4914] k8s.go 621: Teardown processing complete. ContainerID="d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9"
Sep 4 17:35:56.958632 containerd[1879]: time="2024-09-04T17:35:56.956953048Z" level=info msg="TearDown network for sandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\" successfully"
Sep 4 17:35:57.003619 systemd-networkd[1721]: califccaa39c64b: Gained IPv6LL
Sep 4 17:35:57.049499 containerd[1879]: time="2024-09-04T17:35:57.049357969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:35:57.049499 containerd[1879]: time="2024-09-04T17:35:57.049453285Z" level=info msg="RemovePodSandbox \"d04ba679a69b8665834dd1d75cdd973b02c494add7d41a60933f70c6c8ea7cc9\" returns successfully"
Sep 4 17:35:57.050552 kubelet[3220]: I0904 17:35:57.050459 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t2qdb" podStartSLOduration=47.050432199 podStartE2EDuration="47.050432199s" podCreationTimestamp="2024-09-04 17:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:35:56.071165823 +0000 UTC m=+60.115267885" watchObservedRunningTime="2024-09-04 17:35:57.050432199 +0000 UTC m=+61.094534260"
Sep 4 17:35:57.060884 containerd[1879]: time="2024-09-04T17:35:57.060833672Z" level=info msg="StopPodSandbox for \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\""
Sep 4 17:35:57.069436 containerd[1879]: time="2024-09-04T17:35:57.069353765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:57.072361 containerd[1879]: time="2024-09-04T17:35:57.072088017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081"
Sep 4 17:35:57.078785 containerd[1879]: time="2024-09-04T17:35:57.076990693Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:57.105754 containerd[1879]: time="2024-09-04T17:35:57.104287025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:35:57.109498 containerd[1879]: time="2024-09-04T17:35:57.109434941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.20867172s"
Sep 4 17:35:57.109896 containerd[1879]: time="2024-09-04T17:35:57.109772872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\""
Sep 4 17:35:57.118630 containerd[1879]: time="2024-09-04T17:35:57.118265166Z" level=info msg="CreateContainer within sandbox \"6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 4 17:35:57.281659 containerd[1879]: time="2024-09-04T17:35:57.281596390Z" level=info msg="CreateContainer within sandbox \"6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1042cbfaa6f97047db796e316cb36d757e6ca43d6e757363e49b84f85fe592a6\""
Sep 4 17:35:57.296858 containerd[1879]: time="2024-09-04T17:35:57.296806593Z" level=info msg="StartContainer for \"1042cbfaa6f97047db796e316cb36d757e6ca43d6e757363e49b84f85fe592a6\""
Sep 4 17:35:57.443628 systemd[1]: Started cri-containerd-1042cbfaa6f97047db796e316cb36d757e6ca43d6e757363e49b84f85fe592a6.scope - libcontainer container 1042cbfaa6f97047db796e316cb36d757e6ca43d6e757363e49b84f85fe592a6.
Sep 4 17:35:57.641276 systemd-networkd[1721]: vxlan.calico: Gained IPv6LL
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.447 [WARNING][4951] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa", Pod:"coredns-7db6d8ff4d-4g6g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60b7307b239", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.451 [INFO][4951] k8s.go 608: Cleaning up netns ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.451 [INFO][4951] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" iface="eth0" netns=""
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.451 [INFO][4951] k8s.go 615: Releasing IP address(es) ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.451 [INFO][4951] utils.go 188: Calico CNI releasing IP address ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.613 [INFO][4988] ipam_plugin.go 417: Releasing address using handleID ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" HandleID="k8s-pod-network.b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0"
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.614 [INFO][4988] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.614 [INFO][4988] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.664 [WARNING][4988] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" HandleID="k8s-pod-network.b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0"
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.664 [INFO][4988] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" HandleID="k8s-pod-network.b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0"
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.673 [INFO][4988] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:35:57.718537 containerd[1879]: 2024-09-04 17:35:57.696 [INFO][4951] k8s.go 621: Teardown processing complete. ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:57.721252 containerd[1879]: time="2024-09-04T17:35:57.720073241Z" level=info msg="TearDown network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\" successfully"
Sep 4 17:35:57.721252 containerd[1879]: time="2024-09-04T17:35:57.720213825Z" level=info msg="StopPodSandbox for \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\" returns successfully"
Sep 4 17:35:57.725092 containerd[1879]: time="2024-09-04T17:35:57.724034722Z" level=info msg="RemovePodSandbox for \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\""
Sep 4 17:35:57.725092 containerd[1879]: time="2024-09-04T17:35:57.724077709Z" level=info msg="Forcibly stopping sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\""
Sep 4 17:35:57.797708 containerd[1879]: time="2024-09-04T17:35:57.793524297Z" level=info msg="StartContainer for \"1042cbfaa6f97047db796e316cb36d757e6ca43d6e757363e49b84f85fe592a6\" returns successfully"
Sep 4 17:35:57.800477 containerd[1879]: time="2024-09-04T17:35:57.798741970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:57.940 [WARNING][5031] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ab8b6bd-cf5e-4eed-958b-c1ff433c2b39", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"8ec85f001d72bc4354c2ea31531aa160335cb63284b6abf3105e5df62234ebfa", Pod:"coredns-7db6d8ff4d-4g6g9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali60b7307b239", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:57.940 [INFO][5031] k8s.go 608: Cleaning up netns ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:57.940 [INFO][5031] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" iface="eth0" netns=""
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:57.940 [INFO][5031] k8s.go 615: Releasing IP address(es) ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:57.940 [INFO][5031] utils.go 188: Calico CNI releasing IP address ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:58.063 [INFO][5041] ipam_plugin.go 417: Releasing address using handleID ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" HandleID="k8s-pod-network.b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0"
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:58.064 [INFO][5041] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:58.064 [INFO][5041] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:58.080 [WARNING][5041] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" HandleID="k8s-pod-network.b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0"
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:58.080 [INFO][5041] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" HandleID="k8s-pod-network.b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc" Workload="ip--172--31--21--246-k8s-coredns--7db6d8ff4d--4g6g9-eth0"
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:58.084 [INFO][5041] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:35:58.092899 containerd[1879]: 2024-09-04 17:35:58.089 [INFO][5031] k8s.go 621: Teardown processing complete. ContainerID="b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc"
Sep 4 17:35:58.094069 containerd[1879]: time="2024-09-04T17:35:58.092956371Z" level=info msg="TearDown network for sandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\" successfully"
Sep 4 17:35:58.105412 containerd[1879]: time="2024-09-04T17:35:58.099309893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:35:58.108826 containerd[1879]: time="2024-09-04T17:35:58.105452462Z" level=info msg="RemovePodSandbox \"b7d7a3af23ade65e01e40c9827036a0b4d501af2f98b5f4dcb051358b2d8dacc\" returns successfully"
Sep 4 17:35:58.109230 containerd[1879]: time="2024-09-04T17:35:58.109194352Z" level=info msg="StopPodSandbox for \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\""
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.256 [WARNING][5060] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4cc4158-2c05-4c12-9faa-641987eb3d31", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f", Pod:"csi-node-driver-6kpnd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif97c0980444", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.256 [INFO][5060] k8s.go 608: Cleaning up netns ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.256 [INFO][5060] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" iface="eth0" netns=""
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.256 [INFO][5060] k8s.go 615: Releasing IP address(es) ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.256 [INFO][5060] utils.go 188: Calico CNI releasing IP address ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.316 [INFO][5066] ipam_plugin.go 417: Releasing address using handleID ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" HandleID="k8s-pod-network.4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0"
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.317 [INFO][5066] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.317 [INFO][5066] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.327 [WARNING][5066] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" HandleID="k8s-pod-network.4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0"
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.327 [INFO][5066] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" HandleID="k8s-pod-network.4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0"
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.331 [INFO][5066] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:35:58.337485 containerd[1879]: 2024-09-04 17:35:58.335 [INFO][5060] k8s.go 621: Teardown processing complete. ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:58.338623 containerd[1879]: time="2024-09-04T17:35:58.337544330Z" level=info msg="TearDown network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\" successfully"
Sep 4 17:35:58.338623 containerd[1879]: time="2024-09-04T17:35:58.337576102Z" level=info msg="StopPodSandbox for \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\" returns successfully"
Sep 4 17:35:58.339376 containerd[1879]: time="2024-09-04T17:35:58.339043772Z" level=info msg="RemovePodSandbox for \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\""
Sep 4 17:35:58.339376 containerd[1879]: time="2024-09-04T17:35:58.339086014Z" level=info msg="Forcibly stopping sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\""
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.440 [WARNING][5084] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b4cc4158-2c05-4c12-9faa-641987eb3d31", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f", Pod:"csi-node-driver-6kpnd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif97c0980444", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.443 [INFO][5084] k8s.go 608: Cleaning up netns ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.443 [INFO][5084] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" iface="eth0" netns=""
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.443 [INFO][5084] k8s.go 615: Releasing IP address(es) ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.443 [INFO][5084] utils.go 188: Calico CNI releasing IP address ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.495 [INFO][5090] ipam_plugin.go 417: Releasing address using handleID ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" HandleID="k8s-pod-network.4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0"
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.495 [INFO][5090] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.496 [INFO][5090] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.512 [WARNING][5090] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" HandleID="k8s-pod-network.4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0"
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.512 [INFO][5090] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" HandleID="k8s-pod-network.4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644" Workload="ip--172--31--21--246-k8s-csi--node--driver--6kpnd-eth0"
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.519 [INFO][5090] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:35:58.532886 containerd[1879]: 2024-09-04 17:35:58.524 [INFO][5084] k8s.go 621: Teardown processing complete. ContainerID="4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644"
Sep 4 17:35:58.532886 containerd[1879]: time="2024-09-04T17:35:58.532862375Z" level=info msg="TearDown network for sandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\" successfully"
Sep 4 17:35:58.540113 containerd[1879]: time="2024-09-04T17:35:58.539996499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:35:58.540280 containerd[1879]: time="2024-09-04T17:35:58.540146323Z" level=info msg="RemovePodSandbox \"4d360353bd847f4d274114a17ab57862b408204894a3072ade2e5aa72471d644\" returns successfully"
Sep 4 17:35:59.790889 systemd[1]: Started sshd@9-172.31.21.246:22-139.178.68.195:41062.service - OpenSSH per-connection server daemon (139.178.68.195:41062).
Sep 4 17:36:00.255664 sshd[5102]: Accepted publickey for core from 139.178.68.195 port 41062 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:36:00.257877 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:36:00.272407 systemd-logind[1856]: New session 10 of user core.
Sep 4 17:36:00.280978 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 17:36:00.470828 containerd[1879]: time="2024-09-04T17:36:00.470769567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:36:00.474469 containerd[1879]: time="2024-09-04T17:36:00.474033116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Sep 4 17:36:00.476348 containerd[1879]: time="2024-09-04T17:36:00.475998233Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:36:00.486497 containerd[1879]: time="2024-09-04T17:36:00.486444741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:36:00.490778 containerd[1879]: time="2024-09-04T17:36:00.490716534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.690846422s"
Sep 4 17:36:00.490778 containerd[1879]: time="2024-09-04T17:36:00.490778259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Sep 4 17:36:00.519959 containerd[1879]: time="2024-09-04T17:36:00.519072210Z" level=info msg="CreateContainer within sandbox \"6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 4 17:36:00.584350 containerd[1879]: time="2024-09-04T17:36:00.579963057Z" level=info msg="CreateContainer within sandbox \"6e20dc2ab48ee0df887b171cf207693445d9cd8c4b558ee2c473e5b1bd75a67f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"513be8e7aad82b137f36f2a780bf5ce87c504d0ed063646bd1b7891a27719bff\""
Sep 4 17:36:00.592342 containerd[1879]: time="2024-09-04T17:36:00.588075105Z" level=info msg="StartContainer for \"513be8e7aad82b137f36f2a780bf5ce87c504d0ed063646bd1b7891a27719bff\""
Sep 4 17:36:00.596703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822252399.mount: Deactivated successfully.
Sep 4 17:36:00.611564 ntpd[1851]: Listen normally on 6 vxlan.calico 192.168.77.0:123 Sep 4 17:36:00.613653 ntpd[1851]: Listen normally on 7 cali60b7307b239 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 4 17:36:00.616305 ntpd[1851]: 4 Sep 17:36:00 ntpd[1851]: Listen normally on 6 vxlan.calico 192.168.77.0:123 Sep 4 17:36:00.616305 ntpd[1851]: 4 Sep 17:36:00 ntpd[1851]: Listen normally on 7 cali60b7307b239 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 4 17:36:00.616305 ntpd[1851]: 4 Sep 17:36:00 ntpd[1851]: Listen normally on 8 calif97c0980444 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 4 17:36:00.616305 ntpd[1851]: 4 Sep 17:36:00 ntpd[1851]: Listen normally on 9 califccaa39c64b [fe80::ecee:eeff:feee:eeee%6]:123 Sep 4 17:36:00.616305 ntpd[1851]: 4 Sep 17:36:00 ntpd[1851]: Listen normally on 10 vxlan.calico [fe80::643e:90ff:fe2d:cce8%7]:123 Sep 4 17:36:00.613718 ntpd[1851]: Listen normally on 8 calif97c0980444 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 4 17:36:00.613774 ntpd[1851]: Listen normally on 9 califccaa39c64b [fe80::ecee:eeff:feee:eeee%6]:123 Sep 4 17:36:00.613818 ntpd[1851]: Listen normally on 10 vxlan.calico [fe80::643e:90ff:fe2d:cce8%7]:123 Sep 4 17:36:00.728249 systemd[1]: Started cri-containerd-513be8e7aad82b137f36f2a780bf5ce87c504d0ed063646bd1b7891a27719bff.scope - libcontainer container 513be8e7aad82b137f36f2a780bf5ce87c504d0ed063646bd1b7891a27719bff. Sep 4 17:36:00.959958 containerd[1879]: time="2024-09-04T17:36:00.956530699Z" level=info msg="StartContainer for \"513be8e7aad82b137f36f2a780bf5ce87c504d0ed063646bd1b7891a27719bff\" returns successfully" Sep 4 17:36:01.437450 sshd[5102]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:01.453705 systemd[1]: sshd@9-172.31.21.246:22-139.178.68.195:41062.service: Deactivated successfully. Sep 4 17:36:01.467119 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:36:01.503566 systemd-logind[1856]: Session 10 logged out. Waiting for processes to exit. 
Sep 4 17:36:01.512844 systemd-logind[1856]: Removed session 10. Sep 4 17:36:01.709744 kubelet[3220]: I0904 17:36:01.709205 3220 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:36:01.727385 kubelet[3220]: I0904 17:36:01.721427 3220 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:36:04.248469 containerd[1879]: time="2024-09-04T17:36:04.247736627Z" level=info msg="StopPodSandbox for \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\"" Sep 4 17:36:04.387777 kubelet[3220]: I0904 17:36:04.386901 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6kpnd" podStartSLOduration=39.789771234 podStartE2EDuration="45.386849914s" podCreationTimestamp="2024-09-04 17:35:19 +0000 UTC" firstStartedPulling="2024-09-04 17:35:54.898016259 +0000 UTC m=+58.942118302" lastFinishedPulling="2024-09-04 17:36:00.495094931 +0000 UTC m=+64.539196982" observedRunningTime="2024-09-04 17:36:01.145202689 +0000 UTC m=+65.189304754" watchObservedRunningTime="2024-09-04 17:36:04.386849914 +0000 UTC m=+68.430951976" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.379 [INFO][5174] k8s.go 608: Cleaning up netns ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.379 [INFO][5174] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" iface="eth0" netns="/var/run/netns/cni-3a7be516-6341-eb90-876f-3b13d7cd57c1" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.381 [INFO][5174] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" iface="eth0" netns="/var/run/netns/cni-3a7be516-6341-eb90-876f-3b13d7cd57c1" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.381 [INFO][5174] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" iface="eth0" netns="/var/run/netns/cni-3a7be516-6341-eb90-876f-3b13d7cd57c1" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.381 [INFO][5174] k8s.go 615: Releasing IP address(es) ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.382 [INFO][5174] utils.go 188: Calico CNI releasing IP address ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.432 [INFO][5180] ipam_plugin.go 417: Releasing address using handleID ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" HandleID="k8s-pod-network.ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.432 [INFO][5180] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.432 [INFO][5180] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.439 [WARNING][5180] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" HandleID="k8s-pod-network.ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.439 [INFO][5180] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" HandleID="k8s-pod-network.ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.451 [INFO][5180] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:36:04.464933 containerd[1879]: 2024-09-04 17:36:04.454 [INFO][5174] k8s.go 621: Teardown processing complete. ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Sep 4 17:36:04.473352 containerd[1879]: time="2024-09-04T17:36:04.473194193Z" level=info msg="TearDown network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\" successfully" Sep 4 17:36:04.473352 containerd[1879]: time="2024-09-04T17:36:04.473280497Z" level=info msg="StopPodSandbox for \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\" returns successfully" Sep 4 17:36:04.476244 containerd[1879]: time="2024-09-04T17:36:04.476117655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b7b5765f6-8d75q,Uid:00276485-11d4-4be8-a266-bce8f1f7d45a,Namespace:calico-system,Attempt:1,}" Sep 4 17:36:04.487961 systemd[1]: run-netns-cni\x2d3a7be516\x2d6341\x2deb90\x2d876f\x2d3b13d7cd57c1.mount: Deactivated successfully. 
Sep 4 17:36:05.147153 systemd-networkd[1721]: cali4433362e6c2: Link UP Sep 4 17:36:05.157878 systemd-networkd[1721]: cali4433362e6c2: Gained carrier Sep 4 17:36:05.169211 (udev-worker)[5206]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.647 [INFO][5186] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0 calico-kube-controllers-6b7b5765f6- calico-system 00276485-11d4-4be8-a266-bce8f1f7d45a 844 0 2024-09-04 17:35:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b7b5765f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-246 calico-kube-controllers-6b7b5765f6-8d75q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4433362e6c2 [] []}} ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Namespace="calico-system" Pod="calico-kube-controllers-6b7b5765f6-8d75q" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.647 [INFO][5186] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Namespace="calico-system" Pod="calico-kube-controllers-6b7b5765f6-8d75q" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.728 [INFO][5197] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" HandleID="k8s-pod-network.466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" 
Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.749 [INFO][5197] ipam_plugin.go 270: Auto assigning IP ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" HandleID="k8s-pod-network.466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000379250), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-246", "pod":"calico-kube-controllers-6b7b5765f6-8d75q", "timestamp":"2024-09-04 17:36:04.728373349 +0000 UTC"}, Hostname:"ip-172-31-21-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.749 [INFO][5197] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.749 [INFO][5197] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.749 [INFO][5197] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-246' Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.755 [INFO][5197] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" host="ip-172-31-21-246" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.793 [INFO][5197] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-246" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.818 [INFO][5197] ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.838 [INFO][5197] ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.888 [INFO][5197] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.888 [INFO][5197] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" host="ip-172-31-21-246" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.917 [INFO][5197] ipam.go 1685: Creating new handle: k8s-pod-network.466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079 Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:04.988 [INFO][5197] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" host="ip-172-31-21-246" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:05.068 [INFO][5197] ipam.go 1216: Successfully claimed IPs: [192.168.77.4/26] block=192.168.77.0/26 
handle="k8s-pod-network.466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" host="ip-172-31-21-246" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:05.068 [INFO][5197] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.4/26] handle="k8s-pod-network.466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" host="ip-172-31-21-246" Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:05.068 [INFO][5197] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:36:05.199564 containerd[1879]: 2024-09-04 17:36:05.070 [INFO][5197] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.4/26] IPv6=[] ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" HandleID="k8s-pod-network.466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:05.201161 containerd[1879]: 2024-09-04 17:36:05.105 [INFO][5186] k8s.go 386: Populated endpoint ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Namespace="calico-system" Pod="calico-kube-controllers-6b7b5765f6-8d75q" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0", GenerateName:"calico-kube-controllers-6b7b5765f6-", Namespace:"calico-system", SelfLink:"", UID:"00276485-11d4-4be8-a266-bce8f1f7d45a", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b7b5765f6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"", Pod:"calico-kube-controllers-6b7b5765f6-8d75q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4433362e6c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:36:05.201161 containerd[1879]: 2024-09-04 17:36:05.106 [INFO][5186] k8s.go 387: Calico CNI using IPs: [192.168.77.4/32] ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Namespace="calico-system" Pod="calico-kube-controllers-6b7b5765f6-8d75q" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:05.201161 containerd[1879]: 2024-09-04 17:36:05.110 [INFO][5186] dataplane_linux.go 68: Setting the host side veth name to cali4433362e6c2 ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Namespace="calico-system" Pod="calico-kube-controllers-6b7b5765f6-8d75q" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:05.201161 containerd[1879]: 2024-09-04 17:36:05.124 [INFO][5186] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Namespace="calico-system" Pod="calico-kube-controllers-6b7b5765f6-8d75q" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:05.201161 containerd[1879]: 2024-09-04 17:36:05.141 [INFO][5186] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Namespace="calico-system" Pod="calico-kube-controllers-6b7b5765f6-8d75q" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0", GenerateName:"calico-kube-controllers-6b7b5765f6-", Namespace:"calico-system", SelfLink:"", UID:"00276485-11d4-4be8-a266-bce8f1f7d45a", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b7b5765f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079", Pod:"calico-kube-controllers-6b7b5765f6-8d75q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4433362e6c2", MAC:"46:1d:16:7a:94:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:36:05.201161 containerd[1879]: 2024-09-04 17:36:05.192 [INFO][5186] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079" Namespace="calico-system" Pod="calico-kube-controllers-6b7b5765f6-8d75q" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0" Sep 4 17:36:05.315614 containerd[1879]: time="2024-09-04T17:36:05.312227147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:36:05.315614 containerd[1879]: time="2024-09-04T17:36:05.312808134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:36:05.315614 containerd[1879]: time="2024-09-04T17:36:05.312967991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:05.315614 containerd[1879]: time="2024-09-04T17:36:05.313888521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:05.381159 systemd[1]: run-containerd-runc-k8s.io-466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079-runc.IDxvRF.mount: Deactivated successfully. Sep 4 17:36:05.451363 systemd[1]: Started cri-containerd-466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079.scope - libcontainer container 466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079. 
Sep 4 17:36:05.548182 containerd[1879]: time="2024-09-04T17:36:05.548109373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b7b5765f6-8d75q,Uid:00276485-11d4-4be8-a266-bce8f1f7d45a,Namespace:calico-system,Attempt:1,} returns sandbox id \"466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079\"" Sep 4 17:36:05.551743 containerd[1879]: time="2024-09-04T17:36:05.551703894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:36:06.484003 systemd[1]: Started sshd@10-172.31.21.246:22-139.178.68.195:60782.service - OpenSSH per-connection server daemon (139.178.68.195:60782). Sep 4 17:36:06.710692 sshd[5268]: Accepted publickey for core from 139.178.68.195 port 60782 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:06.718676 sshd[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:06.735416 systemd-logind[1856]: New session 11 of user core. Sep 4 17:36:06.745529 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:36:07.048009 systemd-networkd[1721]: cali4433362e6c2: Gained IPv6LL Sep 4 17:36:07.415633 sshd[5268]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:07.422600 systemd[1]: sshd@10-172.31.21.246:22-139.178.68.195:60782.service: Deactivated successfully. Sep 4 17:36:07.429632 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:36:07.433854 systemd-logind[1856]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:36:07.452100 systemd-logind[1856]: Removed session 11. Sep 4 17:36:07.459022 systemd[1]: Started sshd@11-172.31.21.246:22-139.178.68.195:60790.service - OpenSSH per-connection server daemon (139.178.68.195:60790). 
Sep 4 17:36:07.693179 sshd[5285]: Accepted publickey for core from 139.178.68.195 port 60790 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:07.694976 sshd[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:07.708163 systemd-logind[1856]: New session 12 of user core. Sep 4 17:36:07.717029 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:36:08.287642 sshd[5285]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:08.299244 systemd[1]: sshd@11-172.31.21.246:22-139.178.68.195:60790.service: Deactivated successfully. Sep 4 17:36:08.309088 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:36:08.315567 systemd-logind[1856]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:36:08.343975 systemd[1]: Started sshd@12-172.31.21.246:22-139.178.68.195:60802.service - OpenSSH per-connection server daemon (139.178.68.195:60802). Sep 4 17:36:08.347718 systemd-logind[1856]: Removed session 12. Sep 4 17:36:08.621917 sshd[5302]: Accepted publickey for core from 139.178.68.195 port 60802 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:08.624184 sshd[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:08.634687 systemd-logind[1856]: New session 13 of user core. Sep 4 17:36:08.640607 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 4 17:36:08.999094 containerd[1879]: time="2024-09-04T17:36:08.997308357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:09.001702 containerd[1879]: time="2024-09-04T17:36:09.001452964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:36:09.003953 containerd[1879]: time="2024-09-04T17:36:09.003541401Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:09.012779 containerd[1879]: time="2024-09-04T17:36:09.012717621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:09.013541 containerd[1879]: time="2024-09-04T17:36:09.013492282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.46148476s" Sep 4 17:36:09.013669 containerd[1879]: time="2024-09-04T17:36:09.013546863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:36:09.026454 sshd[5302]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:09.043070 containerd[1879]: time="2024-09-04T17:36:09.041762967Z" level=info msg="CreateContainer within sandbox \"466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079\" for 
container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:36:09.050821 systemd[1]: sshd@12-172.31.21.246:22-139.178.68.195:60802.service: Deactivated successfully. Sep 4 17:36:09.054567 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:36:09.057839 systemd-logind[1856]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:36:09.061052 systemd-logind[1856]: Removed session 13. Sep 4 17:36:09.076366 containerd[1879]: time="2024-09-04T17:36:09.076171708Z" level=info msg="CreateContainer within sandbox \"466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d37933a1d8c2bcfabcb0516d919e930303f6977faf321f6e6c63fae5d47e6ba7\"" Sep 4 17:36:09.077243 containerd[1879]: time="2024-09-04T17:36:09.077205045Z" level=info msg="StartContainer for \"d37933a1d8c2bcfabcb0516d919e930303f6977faf321f6e6c63fae5d47e6ba7\"" Sep 4 17:36:09.128740 systemd[1]: Started cri-containerd-d37933a1d8c2bcfabcb0516d919e930303f6977faf321f6e6c63fae5d47e6ba7.scope - libcontainer container d37933a1d8c2bcfabcb0516d919e930303f6977faf321f6e6c63fae5d47e6ba7. 
Sep 4 17:36:09.203260 containerd[1879]: time="2024-09-04T17:36:09.203210255Z" level=info msg="StartContainer for \"d37933a1d8c2bcfabcb0516d919e930303f6977faf321f6e6c63fae5d47e6ba7\" returns successfully" Sep 4 17:36:09.609115 ntpd[1851]: Listen normally on 11 cali4433362e6c2 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:36:09.613143 ntpd[1851]: 4 Sep 17:36:09 ntpd[1851]: Listen normally on 11 cali4433362e6c2 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:36:10.309595 kubelet[3220]: I0904 17:36:10.307097 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b7b5765f6-8d75q" podStartSLOduration=47.842868442 podStartE2EDuration="51.307074413s" podCreationTimestamp="2024-09-04 17:35:19 +0000 UTC" firstStartedPulling="2024-09-04 17:36:05.550457205 +0000 UTC m=+69.594559245" lastFinishedPulling="2024-09-04 17:36:09.014663175 +0000 UTC m=+73.058765216" observedRunningTime="2024-09-04 17:36:10.303021612 +0000 UTC m=+74.347123696" watchObservedRunningTime="2024-09-04 17:36:10.307074413 +0000 UTC m=+74.351176472" Sep 4 17:36:10.417578 systemd[1]: run-containerd-runc-k8s.io-d37933a1d8c2bcfabcb0516d919e930303f6977faf321f6e6c63fae5d47e6ba7-runc.3tUCfV.mount: Deactivated successfully. Sep 4 17:36:14.061735 systemd[1]: Started sshd@13-172.31.21.246:22-139.178.68.195:60816.service - OpenSSH per-connection server daemon (139.178.68.195:60816). Sep 4 17:36:14.270796 sshd[5377]: Accepted publickey for core from 139.178.68.195 port 60816 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:14.272091 sshd[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:14.279392 systemd-logind[1856]: New session 14 of user core. Sep 4 17:36:14.289662 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:36:14.763374 sshd[5377]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:14.771417 systemd-logind[1856]: Session 14 logged out. 
Waiting for processes to exit. Sep 4 17:36:14.772719 systemd[1]: sshd@13-172.31.21.246:22-139.178.68.195:60816.service: Deactivated successfully. Sep 4 17:36:14.785300 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:36:14.786753 systemd-logind[1856]: Removed session 14. Sep 4 17:36:19.814505 systemd[1]: Started sshd@14-172.31.21.246:22-139.178.68.195:39164.service - OpenSSH per-connection server daemon (139.178.68.195:39164). Sep 4 17:36:20.130838 sshd[5445]: Accepted publickey for core from 139.178.68.195 port 39164 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:20.135824 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:20.152228 systemd-logind[1856]: New session 15 of user core. Sep 4 17:36:20.154703 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:36:20.695120 sshd[5445]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:20.702809 systemd[1]: sshd@14-172.31.21.246:22-139.178.68.195:39164.service: Deactivated successfully. Sep 4 17:36:20.712234 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:36:20.713546 systemd-logind[1856]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:36:20.717622 systemd-logind[1856]: Removed session 15. Sep 4 17:36:25.740676 systemd[1]: Started sshd@15-172.31.21.246:22-139.178.68.195:39168.service - OpenSSH per-connection server daemon (139.178.68.195:39168). Sep 4 17:36:25.921807 sshd[5461]: Accepted publickey for core from 139.178.68.195 port 39168 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:25.924147 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:25.935754 systemd-logind[1856]: New session 16 of user core. Sep 4 17:36:25.941750 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 4 17:36:26.209251 sshd[5461]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:26.213689 systemd[1]: sshd@15-172.31.21.246:22-139.178.68.195:39168.service: Deactivated successfully. Sep 4 17:36:26.216562 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:36:26.218271 systemd-logind[1856]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:36:26.220498 systemd-logind[1856]: Removed session 16. Sep 4 17:36:31.250741 systemd[1]: Started sshd@16-172.31.21.246:22-139.178.68.195:51078.service - OpenSSH per-connection server daemon (139.178.68.195:51078). Sep 4 17:36:31.461359 sshd[5478]: Accepted publickey for core from 139.178.68.195 port 51078 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:31.464924 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:31.472306 systemd-logind[1856]: New session 17 of user core. Sep 4 17:36:31.486686 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:36:31.962402 sshd[5478]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:31.967624 systemd-logind[1856]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:36:31.969734 systemd[1]: sshd@16-172.31.21.246:22-139.178.68.195:51078.service: Deactivated successfully. Sep 4 17:36:31.973719 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:36:31.975176 systemd-logind[1856]: Removed session 17. Sep 4 17:36:31.997009 systemd[1]: Started sshd@17-172.31.21.246:22-139.178.68.195:51084.service - OpenSSH per-connection server daemon (139.178.68.195:51084). Sep 4 17:36:32.226532 sshd[5491]: Accepted publickey for core from 139.178.68.195 port 51084 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:32.240360 sshd[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:32.266098 systemd-logind[1856]: New session 18 of user core. 
Sep 4 17:36:32.276669 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:36:33.114649 sshd[5491]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:33.124903 systemd-logind[1856]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:36:33.125760 systemd[1]: sshd@17-172.31.21.246:22-139.178.68.195:51084.service: Deactivated successfully. Sep 4 17:36:33.128761 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:36:33.131459 systemd-logind[1856]: Removed session 18. Sep 4 17:36:33.181942 systemd[1]: Started sshd@18-172.31.21.246:22-139.178.68.195:51088.service - OpenSSH per-connection server daemon (139.178.68.195:51088). Sep 4 17:36:33.379121 sshd[5502]: Accepted publickey for core from 139.178.68.195 port 51088 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:33.380021 sshd[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:33.394400 systemd-logind[1856]: New session 19 of user core. Sep 4 17:36:33.404567 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:36:36.286242 sshd[5502]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:36.294669 systemd-logind[1856]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:36:36.297188 systemd[1]: sshd@18-172.31.21.246:22-139.178.68.195:51088.service: Deactivated successfully. Sep 4 17:36:36.302340 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:36:36.323201 systemd[1]: Started sshd@19-172.31.21.246:22-139.178.68.195:56540.service - OpenSSH per-connection server daemon (139.178.68.195:56540). Sep 4 17:36:36.325137 systemd-logind[1856]: Removed session 19. 
Sep 4 17:36:36.547655 sshd[5520]: Accepted publickey for core from 139.178.68.195 port 56540 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:36.550039 sshd[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:36.557231 systemd-logind[1856]: New session 20 of user core. Sep 4 17:36:36.568186 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:36:37.611144 sshd[5520]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:37.641209 systemd[1]: sshd@19-172.31.21.246:22-139.178.68.195:56540.service: Deactivated successfully. Sep 4 17:36:37.644605 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:36:37.649282 systemd-logind[1856]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:36:37.657793 systemd[1]: Started sshd@20-172.31.21.246:22-139.178.68.195:56542.service - OpenSSH per-connection server daemon (139.178.68.195:56542). Sep 4 17:36:37.669819 systemd-logind[1856]: Removed session 20. Sep 4 17:36:37.865609 sshd[5555]: Accepted publickey for core from 139.178.68.195 port 56542 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:37.872724 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:37.887098 systemd-logind[1856]: New session 21 of user core. Sep 4 17:36:37.894717 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:36:38.148567 sshd[5555]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:38.156079 systemd[1]: sshd@20-172.31.21.246:22-139.178.68.195:56542.service: Deactivated successfully. Sep 4 17:36:38.159066 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:36:38.160845 systemd-logind[1856]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:36:38.162816 systemd-logind[1856]: Removed session 21. 
Sep 4 17:36:43.194086 systemd[1]: Started sshd@21-172.31.21.246:22-139.178.68.195:56558.service - OpenSSH per-connection server daemon (139.178.68.195:56558). Sep 4 17:36:43.401829 sshd[5575]: Accepted publickey for core from 139.178.68.195 port 56558 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:43.404377 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:43.437508 systemd-logind[1856]: New session 22 of user core. Sep 4 17:36:43.444629 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:36:43.712942 sshd[5575]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:43.737747 systemd[1]: sshd@21-172.31.21.246:22-139.178.68.195:56558.service: Deactivated successfully. Sep 4 17:36:43.748745 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:36:43.756116 systemd-logind[1856]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:36:43.757821 systemd-logind[1856]: Removed session 22. Sep 4 17:36:48.218345 kubelet[3220]: I0904 17:36:48.210674 3220 topology_manager.go:215] "Topology Admit Handler" podUID="0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce" podNamespace="calico-apiserver" podName="calico-apiserver-775c886fb7-rvllt" Sep 4 17:36:48.280994 systemd[1]: Created slice kubepods-besteffort-pod0946ce9d_0c2c_4e79_9a89_2ebc8e32d7ce.slice - libcontainer container kubepods-besteffort-pod0946ce9d_0c2c_4e79_9a89_2ebc8e32d7ce.slice. 
Sep 4 17:36:48.342630 kubelet[3220]: I0904 17:36:48.342282 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce-calico-apiserver-certs\") pod \"calico-apiserver-775c886fb7-rvllt\" (UID: \"0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce\") " pod="calico-apiserver/calico-apiserver-775c886fb7-rvllt" Sep 4 17:36:48.342630 kubelet[3220]: I0904 17:36:48.342487 3220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghblg\" (UniqueName: \"kubernetes.io/projected/0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce-kube-api-access-ghblg\") pod \"calico-apiserver-775c886fb7-rvllt\" (UID: \"0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce\") " pod="calico-apiserver/calico-apiserver-775c886fb7-rvllt" Sep 4 17:36:48.459364 kubelet[3220]: E0904 17:36:48.455540 3220 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:36:48.513923 kubelet[3220]: E0904 17:36:48.513879 3220 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce-calico-apiserver-certs podName:0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce nodeName:}" failed. No retries permitted until 2024-09-04 17:36:48.98324141 +0000 UTC m=+113.027343477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce-calico-apiserver-certs") pod "calico-apiserver-775c886fb7-rvllt" (UID: "0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce") : secret "calico-apiserver-certs" not found Sep 4 17:36:48.758112 systemd[1]: Started sshd@22-172.31.21.246:22-139.178.68.195:36010.service - OpenSSH per-connection server daemon (139.178.68.195:36010). 
Sep 4 17:36:48.985275 sshd[5626]: Accepted publickey for core from 139.178.68.195 port 36010 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:48.987570 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:48.997582 systemd-logind[1856]: New session 23 of user core. Sep 4 17:36:49.006732 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:36:49.202340 containerd[1879]: time="2024-09-04T17:36:49.202115969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-775c886fb7-rvllt,Uid:0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:36:49.529286 sshd[5626]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:49.540609 systemd[1]: sshd@22-172.31.21.246:22-139.178.68.195:36010.service: Deactivated successfully. Sep 4 17:36:49.547338 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:36:49.554407 systemd-logind[1856]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:36:49.556930 systemd-logind[1856]: Removed session 23. Sep 4 17:36:49.717289 systemd-networkd[1721]: cali9b54726fd78: Link UP Sep 4 17:36:49.719699 systemd-networkd[1721]: cali9b54726fd78: Gained carrier Sep 4 17:36:49.725375 (udev-worker)[5658]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.567 [INFO][5637] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0 calico-apiserver-775c886fb7- calico-apiserver 0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce 1103 0 2024-09-04 17:36:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:775c886fb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-246 calico-apiserver-775c886fb7-rvllt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b54726fd78 [] []}} ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Namespace="calico-apiserver" Pod="calico-apiserver-775c886fb7-rvllt" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.569 [INFO][5637] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Namespace="calico-apiserver" Pod="calico-apiserver-775c886fb7-rvllt" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.622 [INFO][5651] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" HandleID="k8s-pod-network.664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Workload="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.633 [INFO][5651] ipam_plugin.go 270: Auto assigning IP ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" 
HandleID="k8s-pod-network.664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Workload="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003181d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-246", "pod":"calico-apiserver-775c886fb7-rvllt", "timestamp":"2024-09-04 17:36:49.622039031 +0000 UTC"}, Hostname:"ip-172-31-21-246", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.634 [INFO][5651] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.634 [INFO][5651] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.635 [INFO][5651] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-246' Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.640 [INFO][5651] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" host="ip-172-31-21-246" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.646 [INFO][5651] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-246" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.660 [INFO][5651] ipam.go 489: Trying affinity for 192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.666 [INFO][5651] ipam.go 155: Attempting to load block cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.671 [INFO][5651] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.77.0/26 host="ip-172-31-21-246" Sep 4 17:36:49.743184 
containerd[1879]: 2024-09-04 17:36:49.672 [INFO][5651] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.77.0/26 handle="k8s-pod-network.664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" host="ip-172-31-21-246" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.681 [INFO][5651] ipam.go 1685: Creating new handle: k8s-pod-network.664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91 Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.691 [INFO][5651] ipam.go 1203: Writing block in order to claim IPs block=192.168.77.0/26 handle="k8s-pod-network.664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" host="ip-172-31-21-246" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.701 [INFO][5651] ipam.go 1216: Successfully claimed IPs: [192.168.77.5/26] block=192.168.77.0/26 handle="k8s-pod-network.664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" host="ip-172-31-21-246" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.701 [INFO][5651] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.77.5/26] handle="k8s-pod-network.664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" host="ip-172-31-21-246" Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.701 [INFO][5651] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:36:49.743184 containerd[1879]: 2024-09-04 17:36:49.701 [INFO][5651] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.77.5/26] IPv6=[] ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" HandleID="k8s-pod-network.664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Workload="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" Sep 4 17:36:49.744154 containerd[1879]: 2024-09-04 17:36:49.708 [INFO][5637] k8s.go 386: Populated endpoint ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Namespace="calico-apiserver" Pod="calico-apiserver-775c886fb7-rvllt" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0", GenerateName:"calico-apiserver-775c886fb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775c886fb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"", Pod:"calico-apiserver-775c886fb7-rvllt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b54726fd78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:36:49.744154 containerd[1879]: 2024-09-04 17:36:49.709 [INFO][5637] k8s.go 387: Calico CNI using IPs: [192.168.77.5/32] ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Namespace="calico-apiserver" Pod="calico-apiserver-775c886fb7-rvllt" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" Sep 4 17:36:49.744154 containerd[1879]: 2024-09-04 17:36:49.709 [INFO][5637] dataplane_linux.go 68: Setting the host side veth name to cali9b54726fd78 ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Namespace="calico-apiserver" Pod="calico-apiserver-775c886fb7-rvllt" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" Sep 4 17:36:49.744154 containerd[1879]: 2024-09-04 17:36:49.713 [INFO][5637] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Namespace="calico-apiserver" Pod="calico-apiserver-775c886fb7-rvllt" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" Sep 4 17:36:49.744154 containerd[1879]: 2024-09-04 17:36:49.713 [INFO][5637] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Namespace="calico-apiserver" Pod="calico-apiserver-775c886fb7-rvllt" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0", GenerateName:"calico-apiserver-775c886fb7-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775c886fb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91", Pod:"calico-apiserver-775c886fb7-rvllt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b54726fd78", MAC:"32:43:6d:4f:38:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:36:49.744154 containerd[1879]: 2024-09-04 17:36:49.737 [INFO][5637] k8s.go 500: Wrote updated endpoint to datastore ContainerID="664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91" Namespace="calico-apiserver" Pod="calico-apiserver-775c886fb7-rvllt" WorkloadEndpoint="ip--172--31--21--246-k8s-calico--apiserver--775c886fb7--rvllt-eth0" Sep 4 17:36:49.833708 containerd[1879]: time="2024-09-04T17:36:49.833466387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:36:49.833708 containerd[1879]: time="2024-09-04T17:36:49.833563244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:36:49.833708 containerd[1879]: time="2024-09-04T17:36:49.833584111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:49.835945 containerd[1879]: time="2024-09-04T17:36:49.835767829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:36:49.979875 systemd[1]: Started cri-containerd-664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91.scope - libcontainer container 664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91. Sep 4 17:36:50.085003 containerd[1879]: time="2024-09-04T17:36:50.084721682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-775c886fb7-rvllt,Uid:0946ce9d-0c2c-4e79-9a89-2ebc8e32d7ce,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91\"" Sep 4 17:36:50.092977 containerd[1879]: time="2024-09-04T17:36:50.092668646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:36:51.215175 systemd-networkd[1721]: cali9b54726fd78: Gained IPv6LL Sep 4 17:36:53.608855 ntpd[1851]: Listen normally on 12 cali9b54726fd78 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:36:53.613335 ntpd[1851]: 4 Sep 17:36:53 ntpd[1851]: Listen normally on 12 cali9b54726fd78 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:36:54.081886 containerd[1879]: time="2024-09-04T17:36:54.081397674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:36:54.095350 containerd[1879]: time="2024-09-04T17:36:54.095054957Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 4.002282187s" Sep 4 17:36:54.096556 containerd[1879]: time="2024-09-04T17:36:54.095600336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:36:54.114195 containerd[1879]: time="2024-09-04T17:36:54.114140898Z" level=info msg="CreateContainer within sandbox \"664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:36:54.125488 containerd[1879]: time="2024-09-04T17:36:54.125428679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:54.128075 containerd[1879]: time="2024-09-04T17:36:54.128013823Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:54.131288 containerd[1879]: time="2024-09-04T17:36:54.131230413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:54.147890 containerd[1879]: time="2024-09-04T17:36:54.147826913Z" level=info msg="CreateContainer within sandbox \"664aa1d877ee1d2766be9383cff67a4abf47f7ccd1e3d754117bbaefcac17b91\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d1d7df0c2d3d5872319b56bcfba345c107ee0c338cdb7c34938b4985ffaffebb\"" Sep 4 
17:36:54.152945 containerd[1879]: time="2024-09-04T17:36:54.152868011Z" level=info msg="StartContainer for \"d1d7df0c2d3d5872319b56bcfba345c107ee0c338cdb7c34938b4985ffaffebb\"" Sep 4 17:36:54.251618 systemd[1]: Started cri-containerd-d1d7df0c2d3d5872319b56bcfba345c107ee0c338cdb7c34938b4985ffaffebb.scope - libcontainer container d1d7df0c2d3d5872319b56bcfba345c107ee0c338cdb7c34938b4985ffaffebb. Sep 4 17:36:54.360722 containerd[1879]: time="2024-09-04T17:36:54.358818893Z" level=info msg="StartContainer for \"d1d7df0c2d3d5872319b56bcfba345c107ee0c338cdb7c34938b4985ffaffebb\" returns successfully" Sep 4 17:36:54.574131 systemd[1]: Started sshd@23-172.31.21.246:22-139.178.68.195:36018.service - OpenSSH per-connection server daemon (139.178.68.195:36018). Sep 4 17:36:54.827466 sshd[5762]: Accepted publickey for core from 139.178.68.195 port 36018 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ Sep 4 17:36:54.831111 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:54.842753 systemd-logind[1856]: New session 24 of user core. Sep 4 17:36:54.852023 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:36:55.971608 sshd[5762]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:55.983374 systemd[1]: sshd@23-172.31.21.246:22-139.178.68.195:36018.service: Deactivated successfully. Sep 4 17:36:55.986960 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:36:55.988625 systemd-logind[1856]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:36:55.991768 systemd-logind[1856]: Removed session 24. 
Sep 4 17:36:56.302647 kubelet[3220]: I0904 17:36:56.299637 3220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-775c886fb7-rvllt" podStartSLOduration=4.181465029 podStartE2EDuration="8.194372137s" podCreationTimestamp="2024-09-04 17:36:48 +0000 UTC" firstStartedPulling="2024-09-04 17:36:50.089376862 +0000 UTC m=+114.133478909" lastFinishedPulling="2024-09-04 17:36:54.102283961 +0000 UTC m=+118.146386017" observedRunningTime="2024-09-04 17:36:54.522735735 +0000 UTC m=+118.566837797" watchObservedRunningTime="2024-09-04 17:36:56.194372137 +0000 UTC m=+120.238474194" Sep 4 17:36:58.589560 containerd[1879]: time="2024-09-04T17:36:58.589246301Z" level=info msg="StopPodSandbox for \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\"" Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.716 [WARNING][5799] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0", GenerateName:"calico-kube-controllers-6b7b5765f6-", Namespace:"calico-system", SelfLink:"", UID:"00276485-11d4-4be8-a266-bce8f1f7d45a", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b7b5765f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079", Pod:"calico-kube-controllers-6b7b5765f6-8d75q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4433362e6c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.717 [INFO][5799] k8s.go 608: Cleaning up netns ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.717 [INFO][5799] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" iface="eth0" netns=""
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.717 [INFO][5799] k8s.go 615: Releasing IP address(es) ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.717 [INFO][5799] utils.go 188: Calico CNI releasing IP address ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.791 [INFO][5806] ipam_plugin.go 417: Releasing address using handleID ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" HandleID="k8s-pod-network.ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0"
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.791 [INFO][5806] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.791 [INFO][5806] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.802 [WARNING][5806] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" HandleID="k8s-pod-network.ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0"
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.802 [INFO][5806] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" HandleID="k8s-pod-network.ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0"
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.806 [INFO][5806] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:36:58.814486 containerd[1879]: 2024-09-04 17:36:58.810 [INFO][5799] k8s.go 621: Teardown processing complete. ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:36:58.829477 containerd[1879]: time="2024-09-04T17:36:58.829416772Z" level=info msg="TearDown network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\" successfully"
Sep 4 17:36:58.829710 containerd[1879]: time="2024-09-04T17:36:58.829687223Z" level=info msg="StopPodSandbox for \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\" returns successfully"
Sep 4 17:36:58.831339 containerd[1879]: time="2024-09-04T17:36:58.831031697Z" level=info msg="RemovePodSandbox for \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\""
Sep 4 17:36:58.831339 containerd[1879]: time="2024-09-04T17:36:58.831071052Z" level=info msg="Forcibly stopping sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\""
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.891 [WARNING][5825] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0", GenerateName:"calico-kube-controllers-6b7b5765f6-", Namespace:"calico-system", SelfLink:"", UID:"00276485-11d4-4be8-a266-bce8f1f7d45a", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 35, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b7b5765f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-246", ContainerID:"466147d65682c9e48045788e84f62f4aa7fc49841b014b19149cf078861d1079", Pod:"calico-kube-controllers-6b7b5765f6-8d75q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4433362e6c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.891 [INFO][5825] k8s.go 608: Cleaning up netns ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.891 [INFO][5825] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" iface="eth0" netns=""
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.891 [INFO][5825] k8s.go 615: Releasing IP address(es) ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.891 [INFO][5825] utils.go 188: Calico CNI releasing IP address ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.926 [INFO][5831] ipam_plugin.go 417: Releasing address using handleID ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" HandleID="k8s-pod-network.ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0"
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.931 [INFO][5831] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.931 [INFO][5831] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.944 [WARNING][5831] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" HandleID="k8s-pod-network.ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0"
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.944 [INFO][5831] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" HandleID="k8s-pod-network.ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a" Workload="ip--172--31--21--246-k8s-calico--kube--controllers--6b7b5765f6--8d75q-eth0"
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.947 [INFO][5831] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:36:58.968400 containerd[1879]: 2024-09-04 17:36:58.960 [INFO][5825] k8s.go 621: Teardown processing complete. ContainerID="ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a"
Sep 4 17:36:58.968400 containerd[1879]: time="2024-09-04T17:36:58.967408242Z" level=info msg="TearDown network for sandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\" successfully"
Sep 4 17:36:59.053826 containerd[1879]: time="2024-09-04T17:36:59.053766859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:36:59.054257 containerd[1879]: time="2024-09-04T17:36:59.053870785Z" level=info msg="RemovePodSandbox \"ec099b6cec8722304573670473fe789f449a6eec846b65ab19453f439171fd6a\" returns successfully"
Sep 4 17:37:01.040831 systemd[1]: Started sshd@24-172.31.21.246:22-139.178.68.195:33684.service - OpenSSH per-connection server daemon (139.178.68.195:33684).
Sep 4 17:37:01.312088 sshd[5842]: Accepted publickey for core from 139.178.68.195 port 33684 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:37:01.319868 sshd[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:37:01.338571 systemd-logind[1856]: New session 25 of user core.
Sep 4 17:37:01.342757 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:37:01.890883 systemd[1]: run-containerd-runc-k8s.io-d37933a1d8c2bcfabcb0516d919e930303f6977faf321f6e6c63fae5d47e6ba7-runc.VZj0IP.mount: Deactivated successfully.
Sep 4 17:37:02.917749 sshd[5842]: pam_unix(sshd:session): session closed for user core
Sep 4 17:37:02.933875 systemd[1]: sshd@24-172.31.21.246:22-139.178.68.195:33684.service: Deactivated successfully.
Sep 4 17:37:02.949044 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:37:02.958873 systemd-logind[1856]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:37:02.973339 systemd-logind[1856]: Removed session 25.
Sep 4 17:37:07.139756 systemd[1]: run-containerd-runc-k8s.io-d37933a1d8c2bcfabcb0516d919e930303f6977faf321f6e6c63fae5d47e6ba7-runc.g2tfDr.mount: Deactivated successfully.
Sep 4 17:37:07.953999 systemd[1]: Started sshd@25-172.31.21.246:22-139.178.68.195:60854.service - OpenSSH per-connection server daemon (139.178.68.195:60854).
Sep 4 17:37:08.147103 sshd[5896]: Accepted publickey for core from 139.178.68.195 port 60854 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:37:08.148975 sshd[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:37:08.154402 systemd-logind[1856]: New session 26 of user core.
Sep 4 17:37:08.160759 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:37:08.450456 sshd[5896]: pam_unix(sshd:session): session closed for user core
Sep 4 17:37:08.456414 systemd-logind[1856]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:37:08.457500 systemd[1]: sshd@25-172.31.21.246:22-139.178.68.195:60854.service: Deactivated successfully.
Sep 4 17:37:08.462370 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:37:08.463957 systemd-logind[1856]: Removed session 26.
Sep 4 17:37:13.493888 systemd[1]: Started sshd@26-172.31.21.246:22-139.178.68.195:60868.service - OpenSSH per-connection server daemon (139.178.68.195:60868).
Sep 4 17:37:13.678902 sshd[5918]: Accepted publickey for core from 139.178.68.195 port 60868 ssh2: RSA SHA256:7R68OPxBD1aKub0NQezDW73KPeSGi+cl3Ia6CweCJtQ
Sep 4 17:37:13.681137 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:37:13.693859 systemd-logind[1856]: New session 27 of user core.
Sep 4 17:37:13.697670 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 17:37:13.969633 sshd[5918]: pam_unix(sshd:session): session closed for user core
Sep 4 17:37:13.976753 systemd-logind[1856]: Session 27 logged out. Waiting for processes to exit.
Sep 4 17:37:13.978077 systemd[1]: sshd@26-172.31.21.246:22-139.178.68.195:60868.service: Deactivated successfully.
Sep 4 17:37:13.982166 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 17:37:13.983955 systemd-logind[1856]: Removed session 27.
Sep 4 17:37:15.865326 systemd[1]: run-containerd-runc-k8s.io-a0556935f690acac12f1031aa2eb9fbe941acaa6c215842d3a54acdd5b29461d-runc.DyJtGK.mount: Deactivated successfully.
Sep 4 17:37:29.114880 systemd[1]: cri-containerd-f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3.scope: Deactivated successfully.
Sep 4 17:37:29.115376 systemd[1]: cri-containerd-f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3.scope: Consumed 6.950s CPU time.
Sep 4 17:37:29.140342 kubelet[3220]: E0904 17:37:29.139789 3220 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-246?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 4 17:37:29.172620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3-rootfs.mount: Deactivated successfully.
Sep 4 17:37:29.175648 containerd[1879]: time="2024-09-04T17:37:29.166686006Z" level=info msg="shim disconnected" id=f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3 namespace=k8s.io
Sep 4 17:37:29.175648 containerd[1879]: time="2024-09-04T17:37:29.174882852Z" level=warning msg="cleaning up after shim disconnected" id=f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3 namespace=k8s.io
Sep 4 17:37:29.175648 containerd[1879]: time="2024-09-04T17:37:29.174907536Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:37:29.242191 containerd[1879]: time="2024-09-04T17:37:29.242123605Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:37:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:37:29.393138 systemd[1]: cri-containerd-115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90.scope: Deactivated successfully.
Sep 4 17:37:29.399664 systemd[1]: cri-containerd-115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90.scope: Consumed 4.002s CPU time, 23.3M memory peak, 0B memory swap peak.
Sep 4 17:37:29.462646 containerd[1879]: time="2024-09-04T17:37:29.462490409Z" level=info msg="shim disconnected" id=115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90 namespace=k8s.io
Sep 4 17:37:29.462646 containerd[1879]: time="2024-09-04T17:37:29.462617929Z" level=warning msg="cleaning up after shim disconnected" id=115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90 namespace=k8s.io
Sep 4 17:37:29.463385 containerd[1879]: time="2024-09-04T17:37:29.462659241Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:37:29.468590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90-rootfs.mount: Deactivated successfully.
Sep 4 17:37:29.886615 kubelet[3220]: I0904 17:37:29.886571 3220 scope.go:117] "RemoveContainer" containerID="f0c6aa7fc9d29182cda529eb33982ad74925c70cd68d869382baa3734d6b14d3"
Sep 4 17:37:29.894174 kubelet[3220]: I0904 17:37:29.893091 3220 scope.go:117] "RemoveContainer" containerID="115e2c1c6ac82f8f1f1f139ff68b468434961487004a85deab99865863d4bd90"
Sep 4 17:37:29.917828 containerd[1879]: time="2024-09-04T17:37:29.917117491Z" level=info msg="CreateContainer within sandbox \"2c7350915685b983b03d17ea456252f2a379b44b2e830757dcd33d11952661c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:37:29.920338 containerd[1879]: time="2024-09-04T17:37:29.920284596Z" level=info msg="CreateContainer within sandbox \"b3fcddf0f58c009b767690d6cbefec220ef11e31853d6c5f939d324013808e4c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 4 17:37:30.000452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962501239.mount: Deactivated successfully.
Sep 4 17:37:30.021060 containerd[1879]: time="2024-09-04T17:37:30.021000952Z" level=info msg="CreateContainer within sandbox \"b3fcddf0f58c009b767690d6cbefec220ef11e31853d6c5f939d324013808e4c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ffcc307f580aaf80c4f0cad7076b6fd511a695fe71ebad2d29e4b684dda44c62\""
Sep 4 17:37:30.021983 containerd[1879]: time="2024-09-04T17:37:30.021880081Z" level=info msg="StartContainer for \"ffcc307f580aaf80c4f0cad7076b6fd511a695fe71ebad2d29e4b684dda44c62\""
Sep 4 17:37:30.032943 containerd[1879]: time="2024-09-04T17:37:30.032891494Z" level=info msg="CreateContainer within sandbox \"2c7350915685b983b03d17ea456252f2a379b44b2e830757dcd33d11952661c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8d5fd741ce84e5a073f7191e2523e47b4b8fa3af7e1990eaa6dbfa4cc6ef1b77\""
Sep 4 17:37:30.035670 containerd[1879]: time="2024-09-04T17:37:30.035625394Z" level=info msg="StartContainer for \"8d5fd741ce84e5a073f7191e2523e47b4b8fa3af7e1990eaa6dbfa4cc6ef1b77\""
Sep 4 17:37:30.193691 systemd[1]: Started cri-containerd-8d5fd741ce84e5a073f7191e2523e47b4b8fa3af7e1990eaa6dbfa4cc6ef1b77.scope - libcontainer container 8d5fd741ce84e5a073f7191e2523e47b4b8fa3af7e1990eaa6dbfa4cc6ef1b77.
Sep 4 17:37:30.196955 systemd[1]: Started cri-containerd-ffcc307f580aaf80c4f0cad7076b6fd511a695fe71ebad2d29e4b684dda44c62.scope - libcontainer container ffcc307f580aaf80c4f0cad7076b6fd511a695fe71ebad2d29e4b684dda44c62.
Sep 4 17:37:30.265883 containerd[1879]: time="2024-09-04T17:37:30.265210168Z" level=info msg="StartContainer for \"ffcc307f580aaf80c4f0cad7076b6fd511a695fe71ebad2d29e4b684dda44c62\" returns successfully"
Sep 4 17:37:30.287564 containerd[1879]: time="2024-09-04T17:37:30.287506827Z" level=info msg="StartContainer for \"8d5fd741ce84e5a073f7191e2523e47b4b8fa3af7e1990eaa6dbfa4cc6ef1b77\" returns successfully"
Sep 4 17:37:32.779695 systemd[1]: cri-containerd-c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5.scope: Deactivated successfully.
Sep 4 17:37:32.780011 systemd[1]: cri-containerd-c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5.scope: Consumed 2.483s CPU time, 19.8M memory peak, 0B memory swap peak.
Sep 4 17:37:32.838404 containerd[1879]: time="2024-09-04T17:37:32.838269203Z" level=info msg="shim disconnected" id=c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5 namespace=k8s.io
Sep 4 17:37:32.838404 containerd[1879]: time="2024-09-04T17:37:32.838400404Z" level=warning msg="cleaning up after shim disconnected" id=c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5 namespace=k8s.io
Sep 4 17:37:32.840903 containerd[1879]: time="2024-09-04T17:37:32.838414457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:37:32.847220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5-rootfs.mount: Deactivated successfully.
Sep 4 17:37:33.924887 kubelet[3220]: I0904 17:37:33.924849 3220 scope.go:117] "RemoveContainer" containerID="c6d44df93c64c8ce03d2250361ab0688bc2a40c4fc31ba97632e59e50a5c3ef5"
Sep 4 17:37:33.927697 containerd[1879]: time="2024-09-04T17:37:33.927650632Z" level=info msg="CreateContainer within sandbox \"8784e52476935f54b1cdc091f71a3f43ef6e2b3d538b6cd8f9d766977b1c0252\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:37:33.960569 containerd[1879]: time="2024-09-04T17:37:33.960524639Z" level=info msg="CreateContainer within sandbox \"8784e52476935f54b1cdc091f71a3f43ef6e2b3d538b6cd8f9d766977b1c0252\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"34ca1ac1e8271105d5f4c4a8bee06d5db949d201e16cd8c3fbd5962e14ac4fe8\""
Sep 4 17:37:33.961269 containerd[1879]: time="2024-09-04T17:37:33.961231639Z" level=info msg="StartContainer for \"34ca1ac1e8271105d5f4c4a8bee06d5db949d201e16cd8c3fbd5962e14ac4fe8\""
Sep 4 17:37:34.034306 systemd[1]: run-containerd-runc-k8s.io-34ca1ac1e8271105d5f4c4a8bee06d5db949d201e16cd8c3fbd5962e14ac4fe8-runc.3KcF0Y.mount: Deactivated successfully.
Sep 4 17:37:34.044650 systemd[1]: Started cri-containerd-34ca1ac1e8271105d5f4c4a8bee06d5db949d201e16cd8c3fbd5962e14ac4fe8.scope - libcontainer container 34ca1ac1e8271105d5f4c4a8bee06d5db949d201e16cd8c3fbd5962e14ac4fe8.
Sep 4 17:37:34.110677 containerd[1879]: time="2024-09-04T17:37:34.110624057Z" level=info msg="StartContainer for \"34ca1ac1e8271105d5f4c4a8bee06d5db949d201e16cd8c3fbd5962e14ac4fe8\" returns successfully"
Sep 4 17:37:39.156988 kubelet[3220]: E0904 17:37:39.156930 3220 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-21-246)"
Sep 4 17:37:45.782481 systemd[1]: run-containerd-runc-k8s.io-a0556935f690acac12f1031aa2eb9fbe941acaa6c215842d3a54acdd5b29461d-runc.Wbt1tg.mount: Deactivated successfully.