Aug 5 22:11:47.636618 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:27 -00 2024
Aug 5 22:11:47.636659 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:11:47.636675 kernel: BIOS-provided physical RAM map:
Aug 5 22:11:47.636685 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 5 22:11:47.636696 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 5 22:11:47.636707 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 5 22:11:47.636723 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Aug 5 22:11:47.636734 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Aug 5 22:11:47.636746 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Aug 5 22:11:47.636758 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 5 22:11:47.636769 kernel: NX (Execute Disable) protection: active
Aug 5 22:11:47.636781 kernel: APIC: Static calls initialized
Aug 5 22:11:47.636793 kernel: SMBIOS 2.7 present.
Aug 5 22:11:47.636805 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Aug 5 22:11:47.636824 kernel: Hypervisor detected: KVM
Aug 5 22:11:47.636838 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 5 22:11:47.636852 kernel: kvm-clock: using sched offset of 8748427167 cycles
Aug 5 22:11:47.636866 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 5 22:11:47.636881 kernel: tsc: Detected 2499.998 MHz processor
Aug 5 22:11:47.636894 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 5 22:11:47.636909 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 5 22:11:47.636926 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Aug 5 22:11:47.636940 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 5 22:11:47.636954 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 5 22:11:47.636968 kernel: Using GB pages for direct mapping
Aug 5 22:11:47.636981 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:11:47.636995 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Aug 5 22:11:47.639357 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Aug 5 22:11:47.639385 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 5 22:11:47.639401 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Aug 5 22:11:47.639425 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Aug 5 22:11:47.639440 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 5 22:11:47.639454 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 5 22:11:47.639468 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Aug 5 22:11:47.639482 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 5 22:11:47.639496 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Aug 5 22:11:47.639510 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Aug 5 22:11:47.639546 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 5 22:11:47.639563 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Aug 5 22:11:47.639578 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Aug 5 22:11:47.639598 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Aug 5 22:11:47.639612 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Aug 5 22:11:47.639627 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Aug 5 22:11:47.639642 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Aug 5 22:11:47.639660 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Aug 5 22:11:47.639919 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Aug 5 22:11:47.639943 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Aug 5 22:11:47.639959 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Aug 5 22:11:47.639973 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 5 22:11:47.639988 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 5 22:11:47.640002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Aug 5 22:11:47.640017 kernel: NUMA: Initialized distance table, cnt=1
Aug 5 22:11:47.640032 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Aug 5 22:11:47.640052 kernel: Zone ranges:
Aug 5 22:11:47.640067 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 5 22:11:47.640082 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Aug 5 22:11:47.640097 kernel: Normal empty
Aug 5 22:11:47.640112 kernel: Movable zone start for each node
Aug 5 22:11:47.640126 kernel: Early memory node ranges
Aug 5 22:11:47.640141 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 5 22:11:47.640156 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Aug 5 22:11:47.640171 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Aug 5 22:11:47.640187 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 5 22:11:47.640203 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 5 22:11:47.640217 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Aug 5 22:11:47.640232 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 5 22:11:47.640246 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 5 22:11:47.640261 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Aug 5 22:11:47.640275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 5 22:11:47.640290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 5 22:11:47.640305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 5 22:11:47.640323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 5 22:11:47.640339 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 5 22:11:47.640353 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 5 22:11:47.640368 kernel: TSC deadline timer available
Aug 5 22:11:47.640383 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 5 22:11:47.640398 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 5 22:11:47.640413 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Aug 5 22:11:47.640428 kernel: Booting paravirtualized kernel on KVM
Aug 5 22:11:47.640443 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 5 22:11:47.640458 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 5 22:11:47.640476 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Aug 5 22:11:47.640491 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Aug 5 22:11:47.640505 kernel: pcpu-alloc: [0] 0 1
Aug 5 22:11:47.640534 kernel: kvm-guest: PV spinlocks enabled
Aug 5 22:11:47.640548 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 5 22:11:47.640572 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:11:47.640588 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:11:47.640602 kernel: random: crng init done
Aug 5 22:11:47.640621 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:11:47.640637 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 5 22:11:47.640652 kernel: Fallback order for Node 0: 0
Aug 5 22:11:47.641096 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Aug 5 22:11:47.641114 kernel: Policy zone: DMA32
Aug 5 22:11:47.641129 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:11:47.641144 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved)
Aug 5 22:11:47.641159 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 5 22:11:47.642418 kernel: Kernel/User page tables isolation: enabled
Aug 5 22:11:47.642441 kernel: ftrace: allocating 37659 entries in 148 pages
Aug 5 22:11:47.642457 kernel: ftrace: allocated 148 pages with 3 groups
Aug 5 22:11:47.642473 kernel: Dynamic Preempt: voluntary
Aug 5 22:11:47.642488 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:11:47.642504 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:11:47.642530 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 5 22:11:47.642546 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:11:47.642559 kernel: Rude variant of Tasks RCU enabled.
Aug 5 22:11:47.642575 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:11:47.642596 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:11:47.642612 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 5 22:11:47.642626 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 5 22:11:47.642641 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:11:47.642656 kernel: Console: colour VGA+ 80x25
Aug 5 22:11:47.642671 kernel: printk: console [ttyS0] enabled
Aug 5 22:11:47.642686 kernel: ACPI: Core revision 20230628
Aug 5 22:11:47.642701 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Aug 5 22:11:47.642716 kernel: APIC: Switch to symmetric I/O mode setup
Aug 5 22:11:47.642734 kernel: x2apic enabled
Aug 5 22:11:47.642749 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 5 22:11:47.643348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Aug 5 22:11:47.643372 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Aug 5 22:11:47.643388 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 5 22:11:47.643402 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Aug 5 22:11:47.643418 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 5 22:11:47.643434 kernel: Spectre V2 : Mitigation: Retpolines
Aug 5 22:11:47.643449 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Aug 5 22:11:47.643464 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Aug 5 22:11:47.643479 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 5 22:11:47.643494 kernel: RETBleed: Vulnerable
Aug 5 22:11:47.643513 kernel: Speculative Store Bypass: Vulnerable
Aug 5 22:11:47.643544 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 5 22:11:47.643560 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 5 22:11:47.643575 kernel: GDS: Unknown: Dependent on hypervisor status
Aug 5 22:11:47.643591 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 5 22:11:47.643606 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 5 22:11:47.643626 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 5 22:11:47.643642 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 5 22:11:47.643658 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 5 22:11:47.643674 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 5 22:11:47.643689 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 5 22:11:47.643705 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 5 22:11:47.643723 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 5 22:11:47.643739 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 5 22:11:47.643755 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 5 22:11:47.643770 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 5 22:11:47.643785 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Aug 5 22:11:47.643804 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Aug 5 22:11:47.643820 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Aug 5 22:11:47.643836 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Aug 5 22:11:47.644584 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Aug 5 22:11:47.644605 kernel: Freeing SMP alternatives memory: 32K
Aug 5 22:11:47.644621 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:11:47.644637 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:11:47.644653 kernel: SELinux: Initializing.
Aug 5 22:11:47.644669 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 5 22:11:47.644685 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 5 22:11:47.644702 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 5 22:11:47.644718 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:11:47.644739 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:11:47.644755 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:11:47.644771 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 5 22:11:47.644787 kernel: signal: max sigframe size: 3632
Aug 5 22:11:47.644803 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:11:47.644820 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:11:47.644836 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 5 22:11:47.644852 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:11:47.644868 kernel: smpboot: x86: Booting SMP configuration:
Aug 5 22:11:47.644888 kernel: .... node #0, CPUs: #1
Aug 5 22:11:47.644906 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 5 22:11:47.644920 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 5 22:11:47.644934 kernel: smp: Brought up 1 node, 2 CPUs
Aug 5 22:11:47.644948 kernel: smpboot: Max logical packages: 1
Aug 5 22:11:47.644965 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Aug 5 22:11:47.644981 kernel: devtmpfs: initialized
Aug 5 22:11:47.644997 kernel: x86/mm: Memory block size: 128MB
Aug 5 22:11:47.645550 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:11:47.645570 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 5 22:11:47.645586 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:11:47.645600 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:11:47.645617 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:11:47.645632 kernel: audit: type=2000 audit(1722895905.084:1): state=initialized audit_enabled=0 res=1
Aug 5 22:11:47.645649 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:11:47.645665 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 5 22:11:47.645681 kernel: cpuidle: using governor menu
Aug 5 22:11:47.645702 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:11:47.645717 kernel: dca service started, version 1.12.1
Aug 5 22:11:47.645734 kernel: PCI: Using configuration type 1 for base access
Aug 5 22:11:47.645750 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 5 22:11:47.645766 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:11:47.645782 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:11:47.645798 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:11:47.645814 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:11:47.645830 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:11:47.645849 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:11:47.645865 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:11:47.645881 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:11:47.645896 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 5 22:11:47.645912 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 5 22:11:47.645928 kernel: ACPI: Interpreter enabled
Aug 5 22:11:47.645944 kernel: ACPI: PM: (supports S0 S5)
Aug 5 22:11:47.645960 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 5 22:11:47.645976 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 5 22:11:47.645996 kernel: PCI: Using E820 reservations for host bridge windows
Aug 5 22:11:47.646012 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Aug 5 22:11:47.646028 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:11:47.646292 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:11:47.646438 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 5 22:11:47.646592 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 5 22:11:47.646613 kernel: acpiphp: Slot [3] registered
Aug 5 22:11:47.646634 kernel: acpiphp: Slot [4] registered
Aug 5 22:11:47.646651 kernel: acpiphp: Slot [5] registered
Aug 5 22:11:47.646667 kernel: acpiphp: Slot [6] registered
Aug 5 22:11:47.646682 kernel: acpiphp: Slot [7] registered
Aug 5 22:11:47.646698 kernel: acpiphp: Slot [8] registered
Aug 5 22:11:47.646713 kernel: acpiphp: Slot [9] registered
Aug 5 22:11:47.646729 kernel: acpiphp: Slot [10] registered
Aug 5 22:11:47.646745 kernel: acpiphp: Slot [11] registered
Aug 5 22:11:47.646761 kernel: acpiphp: Slot [12] registered
Aug 5 22:11:47.646777 kernel: acpiphp: Slot [13] registered
Aug 5 22:11:47.646796 kernel: acpiphp: Slot [14] registered
Aug 5 22:11:47.646812 kernel: acpiphp: Slot [15] registered
Aug 5 22:11:47.646828 kernel: acpiphp: Slot [16] registered
Aug 5 22:11:47.646843 kernel: acpiphp: Slot [17] registered
Aug 5 22:11:47.646859 kernel: acpiphp: Slot [18] registered
Aug 5 22:11:47.646874 kernel: acpiphp: Slot [19] registered
Aug 5 22:11:47.646890 kernel: acpiphp: Slot [20] registered
Aug 5 22:11:47.646905 kernel: acpiphp: Slot [21] registered
Aug 5 22:11:47.646921 kernel: acpiphp: Slot [22] registered
Aug 5 22:11:47.646941 kernel: acpiphp: Slot [23] registered
Aug 5 22:11:47.646956 kernel: acpiphp: Slot [24] registered
Aug 5 22:11:47.646972 kernel: acpiphp: Slot [25] registered
Aug 5 22:11:47.646988 kernel: acpiphp: Slot [26] registered
Aug 5 22:11:47.647003 kernel: acpiphp: Slot [27] registered
Aug 5 22:11:47.647019 kernel: acpiphp: Slot [28] registered
Aug 5 22:11:47.647035 kernel: acpiphp: Slot [29] registered
Aug 5 22:11:47.647050 kernel: acpiphp: Slot [30] registered
Aug 5 22:11:47.647066 kernel: acpiphp: Slot [31] registered
Aug 5 22:11:47.647082 kernel: PCI host bridge to bus 0000:00
Aug 5 22:11:47.649128 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 5 22:11:47.649333 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 5 22:11:47.649464 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 5 22:11:47.649604 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 5 22:11:47.649727 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:11:47.649892 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 5 22:11:47.650068 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 5 22:11:47.650360 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Aug 5 22:11:47.650506 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 5 22:11:47.650675 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Aug 5 22:11:47.650827 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Aug 5 22:11:47.651692 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Aug 5 22:11:47.651847 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Aug 5 22:11:47.651993 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Aug 5 22:11:47.652469 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Aug 5 22:11:47.652667 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Aug 5 22:11:47.652813 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 46875 usecs
Aug 5 22:11:47.655063 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Aug 5 22:11:47.655477 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Aug 5 22:11:47.657243 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 5 22:11:47.657397 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 5 22:11:47.657605 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 5 22:11:47.657751 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Aug 5 22:11:47.657899 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 5 22:11:47.658038 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Aug 5 22:11:47.658060 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 5 22:11:47.658077 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 5 22:11:47.658099 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 5 22:11:47.658115 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 5 22:11:47.658131 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 5 22:11:47.658147 kernel: iommu: Default domain type: Translated
Aug 5 22:11:47.658163 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 5 22:11:47.658179 kernel: PCI: Using ACPI for IRQ routing
Aug 5 22:11:47.658196 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 5 22:11:47.658212 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 5 22:11:47.658228 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Aug 5 22:11:47.660328 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Aug 5 22:11:47.660798 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Aug 5 22:11:47.660983 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 5 22:11:47.661006 kernel: vgaarb: loaded
Aug 5 22:11:47.661023 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Aug 5 22:11:47.661041 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Aug 5 22:11:47.661059 kernel: clocksource: Switched to clocksource kvm-clock
Aug 5 22:11:47.661076 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:11:47.661100 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:11:47.661118 kernel: pnp: PnP ACPI init
Aug 5 22:11:47.661134 kernel: pnp: PnP ACPI: found 5 devices
Aug 5 22:11:47.661153 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 5 22:11:47.661168 kernel: NET: Registered PF_INET protocol family
Aug 5 22:11:47.661185 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:11:47.661202 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 5 22:11:47.661220 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:11:47.661237 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 5 22:11:47.661259 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 5 22:11:47.661276 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 5 22:11:47.661293 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 5 22:11:47.661310 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 5 22:11:47.661326 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:11:47.661344 kernel: NET: Registered PF_XDP protocol family
Aug 5 22:11:47.661509 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 5 22:11:47.661687 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 5 22:11:47.662443 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 5 22:11:47.662753 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 5 22:11:47.663074 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 5 22:11:47.663105 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:11:47.663123 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 5 22:11:47.663140 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Aug 5 22:11:47.663156 kernel: clocksource: Switched to clocksource tsc
Aug 5 22:11:47.663173 kernel: Initialise system trusted keyrings
Aug 5 22:11:47.663196 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 5 22:11:47.663212 kernel: Key type asymmetric registered
Aug 5 22:11:47.663228 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:11:47.663244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 5 22:11:47.663261 kernel: io scheduler mq-deadline registered
Aug 5 22:11:47.663277 kernel: io scheduler kyber registered
Aug 5 22:11:47.663293 kernel: io scheduler bfq registered
Aug 5 22:11:47.663354 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 5 22:11:47.663373 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:11:47.663390 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 5 22:11:47.663410 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 5 22:11:47.663664 kernel: i8042: Warning: Keylock active
Aug 5 22:11:47.663682 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 5 22:11:47.663697 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 5 22:11:47.663871 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 5 22:11:47.664001 kernel: rtc_cmos 00:00: registered as rtc0
Aug 5 22:11:47.664134 kernel: rtc_cmos 00:00: setting system clock to 2024-08-05T22:11:46 UTC (1722895906)
Aug 5 22:11:47.664288 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 5 22:11:47.664308 kernel: intel_pstate: CPU model not supported
Aug 5 22:11:47.664324 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:11:47.664340 kernel: Segment Routing with IPv6
Aug 5 22:11:47.664356 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:11:47.664372 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:11:47.664388 kernel: Key type dns_resolver registered
Aug 5 22:11:47.664404 kernel: IPI shorthand broadcast: enabled
Aug 5 22:11:47.664420 kernel: sched_clock: Marking stable (1277024303, 446468698)->(1964176545, -240683544)
Aug 5 22:11:47.664439 kernel: registered taskstats version 1
Aug 5 22:11:47.664455 kernel: Loading compiled-in X.509 certificates
Aug 5 22:11:47.664471 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: e31e857530e65c19b206dbf3ab8297cc37ac5d55'
Aug 5 22:11:47.664487 kernel: Key type .fscrypt registered
Aug 5 22:11:47.664502 kernel: Key type fscrypt-provisioning registered
Aug 5 22:11:47.664547 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:11:47.664563 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:11:47.664606 kernel: ima: No architecture policies found
Aug 5 22:11:47.664622 kernel: clk: Disabling unused clocks
Aug 5 22:11:47.664642 kernel: Freeing unused kernel image (initmem) memory: 49328K
Aug 5 22:11:47.664658 kernel: Write protecting the kernel read-only data: 36864k
Aug 5 22:11:47.664674 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Aug 5 22:11:47.664690 kernel: Run /init as init process
Aug 5 22:11:47.664705 kernel: with arguments:
Aug 5 22:11:47.664721 kernel: /init
Aug 5 22:11:47.664736 kernel: with environment:
Aug 5 22:11:47.664751 kernel: HOME=/
Aug 5 22:11:47.664766 kernel: TERM=linux
Aug 5 22:11:47.664782 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:11:47.664808 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:11:47.664842 systemd[1]: Detected virtualization amazon.
Aug 5 22:11:47.664862 systemd[1]: Detected architecture x86-64.
Aug 5 22:11:47.664879 systemd[1]: Running in initrd.
Aug 5 22:11:47.664899 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:11:47.664916 systemd[1]: Hostname set to .
Aug 5 22:11:47.664934 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:11:47.664951 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 5 22:11:47.664973 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:11:47.664993 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:11:47.665011 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:11:47.665030 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:11:47.665049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:11:47.665067 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:11:47.665089 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:11:47.665109 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:11:47.665127 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:11:47.665145 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:11:47.665163 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:11:47.665181 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:11:47.665199 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:11:47.665220 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:11:47.665237 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:11:47.665255 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:11:47.665272 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:11:47.665290 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:11:47.668716 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:11:47.668765 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:11:47.668785 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:11:47.668815 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:11:47.668833 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:11:47.668850 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:11:47.668868 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:11:47.668889 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 22:11:47.668907 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 22:11:47.668931 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:11:47.668949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:11:47.669020 systemd-journald[179]: Collecting audit messages is disabled.
Aug 5 22:11:47.669059 systemd-journald[179]: Journal started
Aug 5 22:11:47.669099 systemd-journald[179]: Runtime Journal (/run/log/journal/ec22068263c0b7e79481193356d4dc5d) is 4.8M, max 38.6M, 33.7M free.
Aug 5 22:11:47.679021 systemd-modules-load[180]: Inserted module 'overlay'
Aug 5 22:11:47.684119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:11:47.692552 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:11:47.690639 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 22:11:47.708655 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:11:47.713408 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 22:11:47.728895 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:11:47.743735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:11:47.921167 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:11:47.921209 kernel: Bridge firewalling registered
Aug 5 22:11:47.787728 systemd-modules-load[180]: Inserted module 'br_netfilter'
Aug 5 22:11:47.931111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:11:47.933606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:11:47.953802 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:11:47.960531 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:11:47.971747 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:11:47.975116 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:11:48.011272 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:11:48.037247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:11:48.040468 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:11:48.047964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:11:48.059058 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:11:48.062916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:11:48.140454 dracut-cmdline[211]: dracut-dracut-053
Aug 5 22:11:48.165464 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:11:48.214245 systemd-resolved[209]: Positive Trust Anchors:
Aug 5 22:11:48.214565 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:11:48.214628 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:11:48.241682 systemd-resolved[209]: Defaulting to hostname 'linux'.
Aug 5 22:11:48.253309 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:11:48.266683 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:11:48.502568 kernel: SCSI subsystem initialized
Aug 5 22:11:48.539623 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:11:48.565738 kernel: iscsi: registered transport (tcp)
Aug 5 22:11:48.636960 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:11:48.637251 kernel: QLogic iSCSI HBA Driver
Aug 5 22:11:48.764143 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:11:48.783789 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:11:48.873737 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:11:48.874751 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:11:48.874804 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:11:48.971807 kernel: raid6: avx512x4 gen() 9848 MB/s
Aug 5 22:11:48.988591 kernel: raid6: avx512x2 gen() 6996 MB/s
Aug 5 22:11:49.007684 kernel: raid6: avx512x1 gen() 7467 MB/s
Aug 5 22:11:49.025870 kernel: raid6: avx2x4 gen() 9769 MB/s
Aug 5 22:11:49.042573 kernel: raid6: avx2x2 gen() 8926 MB/s
Aug 5 22:11:49.060923 kernel: raid6: avx2x1 gen() 6309 MB/s
Aug 5 22:11:49.061000 kernel: raid6: using algorithm avx512x4 gen() 9848 MB/s
Aug 5 22:11:49.079233 kernel: raid6: .... xor() 2911 MB/s, rmw enabled
Aug 5 22:11:49.079457 kernel: raid6: using avx512x2 recovery algorithm
Aug 5 22:11:49.147548 kernel: xor: automatically using best checksumming function avx
Aug 5 22:11:49.671899 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:11:49.719776 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:11:49.738561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:11:49.856116 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Aug 5 22:11:49.878771 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:11:49.901356 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:11:50.025552 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Aug 5 22:11:50.199294 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:11:50.214730 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:11:50.373458 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:11:50.387969 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:11:50.433777 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:11:50.441413 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:11:50.443110 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:11:50.446016 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:11:50.456218 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:11:50.505470 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 5 22:11:50.521616 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 5 22:11:50.522083 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Aug 5 22:11:50.522265 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:74:15:07:77:99
Aug 5 22:11:50.522433 kernel: cryptd: max_cpu_qlen set to 1000
Aug 5 22:11:50.514169 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:11:50.551873 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 5 22:11:50.551938 kernel: AES CTR mode by8 optimization enabled
Aug 5 22:11:50.635739 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:11:50.651077 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:11:50.651252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:11:50.658376 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:11:50.665662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:11:50.667407 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:11:50.674450 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:11:50.694327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:11:50.702901 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 5 22:11:50.703332 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 5 22:11:50.721739 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 5 22:11:50.737316 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:11:50.737396 kernel: GPT:9289727 != 16777215
Aug 5 22:11:50.737415 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:11:50.742801 kernel: GPT:9289727 != 16777215
Aug 5 22:11:50.742880 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:11:50.747649 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 22:11:51.074633 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (460)
Aug 5 22:11:51.109626 kernel: BTRFS: device fsid d3844c60-0a2c-449a-9ee9-2a875f8d8e12 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Aug 5 22:11:51.219005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:11:51.243902 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:11:51.412332 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Aug 5 22:11:51.417228 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:11:51.445634 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Aug 5 22:11:51.496095 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Aug 5 22:11:51.513141 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Aug 5 22:11:51.561785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 5 22:11:51.575746 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:11:51.599826 disk-uuid[630]: Primary Header is updated.
Aug 5 22:11:51.599826 disk-uuid[630]: Secondary Entries is updated.
Aug 5 22:11:51.599826 disk-uuid[630]: Secondary Header is updated.
Aug 5 22:11:51.613540 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 22:11:51.637559 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 22:11:52.634165 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 22:11:52.636274 disk-uuid[631]: The operation has completed successfully.
Aug 5 22:11:52.952827 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 22:11:52.953053 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 22:11:53.039769 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 22:11:53.065721 sh[889]: Success
Aug 5 22:11:53.134017 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 5 22:11:53.324956 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 22:11:53.334024 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 22:11:53.337450 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 22:11:53.390513 kernel: BTRFS info (device dm-0): first mount of filesystem d3844c60-0a2c-449a-9ee9-2a875f8d8e12
Aug 5 22:11:53.390595 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:11:53.390615 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 22:11:53.397664 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 22:11:53.397792 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 22:11:53.490603 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 5 22:11:53.512684 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 22:11:53.515681 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 22:11:53.532973 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 22:11:53.551771 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 22:11:53.581161 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:11:53.581300 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:11:53.581322 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 5 22:11:53.596112 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 5 22:11:53.658557 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:11:53.656182 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 22:11:53.691247 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 22:11:53.728219 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 22:11:53.820749 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:11:53.829761 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:11:53.939349 systemd-networkd[1081]: lo: Link UP
Aug 5 22:11:53.939364 systemd-networkd[1081]: lo: Gained carrier
Aug 5 22:11:53.944441 systemd-networkd[1081]: Enumeration completed
Aug 5 22:11:53.946204 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:11:53.946210 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:11:53.946221 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:11:53.953407 systemd[1]: Reached target network.target - Network.
Aug 5 22:11:53.960413 systemd-networkd[1081]: eth0: Link UP
Aug 5 22:11:53.960420 systemd-networkd[1081]: eth0: Gained carrier
Aug 5 22:11:53.960435 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:11:53.981597 systemd-networkd[1081]: eth0: DHCPv4 address 172.31.21.119/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 5 22:11:54.261121 ignition[992]: Ignition 2.18.0
Aug 5 22:11:54.261135 ignition[992]: Stage: fetch-offline
Aug 5 22:11:54.261386 ignition[992]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:11:54.261400 ignition[992]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:11:54.262452 ignition[992]: Ignition finished successfully
Aug 5 22:11:54.267911 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:11:54.280821 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 5 22:11:54.347962 ignition[1091]: Ignition 2.18.0
Aug 5 22:11:54.347977 ignition[1091]: Stage: fetch
Aug 5 22:11:54.348456 ignition[1091]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:11:54.348469 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:11:54.348652 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:11:54.413136 ignition[1091]: PUT result: OK
Aug 5 22:11:54.424985 ignition[1091]: parsed url from cmdline: ""
Aug 5 22:11:54.424997 ignition[1091]: no config URL provided
Aug 5 22:11:54.425007 ignition[1091]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:11:54.425023 ignition[1091]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:11:54.425049 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:11:54.454332 ignition[1091]: PUT result: OK
Aug 5 22:11:54.475114 ignition[1091]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 5 22:11:54.483819 ignition[1091]: GET result: OK
Aug 5 22:11:54.483936 ignition[1091]: parsing config with SHA512: 2d72bcd15b61c3427b9af699d8388698112b24f8bbdba0c5d587cb01a54efd05eb558f946db520df0e1c722f2bdba948073f746dccbc1608293a6e179fb8981d
Aug 5 22:11:54.495499 unknown[1091]: fetched base config from "system"
Aug 5 22:11:54.495530 unknown[1091]: fetched base config from "system"
Aug 5 22:11:54.495539 unknown[1091]: fetched user config from "aws"
Aug 5 22:11:54.498082 ignition[1091]: fetch: fetch complete
Aug 5 22:11:54.498090 ignition[1091]: fetch: fetch passed
Aug 5 22:11:54.499837 ignition[1091]: Ignition finished successfully
Aug 5 22:11:54.512018 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 5 22:11:54.552356 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 22:11:54.658859 ignition[1099]: Ignition 2.18.0
Aug 5 22:11:54.658873 ignition[1099]: Stage: kargs
Aug 5 22:11:54.659398 ignition[1099]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:11:54.659411 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:11:54.659853 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:11:54.663163 ignition[1099]: PUT result: OK
Aug 5 22:11:54.682087 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 22:11:54.672029 ignition[1099]: kargs: kargs passed
Aug 5 22:11:54.672095 ignition[1099]: Ignition finished successfully
Aug 5 22:11:54.706833 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 22:11:54.788002 ignition[1106]: Ignition 2.18.0
Aug 5 22:11:54.788134 ignition[1106]: Stage: disks
Aug 5 22:11:54.788631 ignition[1106]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:11:54.788644 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:11:54.788994 ignition[1106]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:11:54.791025 ignition[1106]: PUT result: OK
Aug 5 22:11:54.808024 ignition[1106]: disks: disks passed
Aug 5 22:11:54.808138 ignition[1106]: Ignition finished successfully
Aug 5 22:11:54.812713 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 22:11:54.816103 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 22:11:54.826265 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:11:54.833936 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:11:54.836561 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:11:54.846973 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:11:54.859991 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 22:11:54.930786 systemd-fsck[1115]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 22:11:54.936379 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 22:11:54.944419 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 22:11:55.279803 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e865ac73-053b-4efa-9a0f-50dec3f650d9 r/w with ordered data mode. Quota mode: none.
Aug 5 22:11:55.276604 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 22:11:55.278459 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:11:55.298789 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:11:55.303392 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 22:11:55.304857 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 22:11:55.304919 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 22:11:55.304951 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:11:55.337757 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 22:11:55.345834 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 22:11:55.361540 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1134)
Aug 5 22:11:55.364571 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:11:55.367546 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:11:55.367603 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 5 22:11:55.371946 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 5 22:11:55.374013 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:11:55.655716 systemd-networkd[1081]: eth0: Gained IPv6LL
Aug 5 22:11:55.757659 initrd-setup-root[1158]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 22:11:55.770859 initrd-setup-root[1165]: cut: /sysroot/etc/group: No such file or directory
Aug 5 22:11:55.779491 initrd-setup-root[1172]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 22:11:55.787915 initrd-setup-root[1179]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 22:11:56.130547 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 22:11:56.137711 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 22:11:56.148863 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 22:11:56.175543 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:11:56.177080 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 22:11:56.239322 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 22:11:56.241119 ignition[1247]: INFO : Ignition 2.18.0
Aug 5 22:11:56.241119 ignition[1247]: INFO : Stage: mount
Aug 5 22:11:56.244204 ignition[1247]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:11:56.244204 ignition[1247]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:11:56.246592 ignition[1247]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:11:56.256726 ignition[1247]: INFO : PUT result: OK
Aug 5 22:11:56.288573 ignition[1247]: INFO : mount: mount passed
Aug 5 22:11:56.291036 ignition[1247]: INFO : Ignition finished successfully
Aug 5 22:11:56.301087 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 22:11:56.322010 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 22:11:56.382640 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:11:56.421547 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1259)
Aug 5 22:11:56.426546 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:11:56.426628 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:11:56.426648 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 5 22:11:56.432754 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 5 22:11:56.439436 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:11:56.527200 ignition[1276]: INFO : Ignition 2.18.0
Aug 5 22:11:56.527200 ignition[1276]: INFO : Stage: files
Aug 5 22:11:56.530837 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:11:56.530837 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:11:56.530837 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:11:56.539404 ignition[1276]: INFO : PUT result: OK
Aug 5 22:11:56.543393 ignition[1276]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 22:11:56.548199 ignition[1276]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 22:11:56.548199 ignition[1276]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 22:11:56.584932 ignition[1276]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 22:11:56.586762 ignition[1276]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 22:11:56.586762 ignition[1276]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 22:11:56.585479 unknown[1276]: wrote ssh authorized keys file for user: core
Aug 5 22:11:56.599293 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 5 22:11:56.599293 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 5 22:11:56.599293 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:11:56.599293 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 5 22:11:56.703948 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 5 22:11:56.855887 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:11:56.864824 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 22:11:56.879150 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 22:11:56.879150 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:11:56.905627 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:11:56.905627 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:11:56.934583 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:11:56.934583 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:11:56.934583 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:11:56.965826 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:11:56.965826 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:11:56.965826 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:11:56.965826 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:11:56.965826 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:11:56.965826 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Aug 5 22:11:57.436735 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 5 22:11:58.213637 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:11:58.213637 ignition[1276]: INFO : files: op(c): [started] processing unit "containerd.service"
Aug 5 22:11:58.233279 ignition[1276]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: op(c): [finished] processing unit "containerd.service"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:11:58.236738 ignition[1276]: INFO : files: files passed
Aug 5 22:11:58.236738 ignition[1276]: INFO : Ignition finished successfully
Aug 5 22:11:58.241962 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 22:11:58.322766 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 22:11:58.356802 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 22:11:58.370479 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 22:11:58.370705 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 22:11:58.395232 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:11:58.397459 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:11:58.402453 initrd-setup-root-after-ignition[1310]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:11:58.409611 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:11:58.415941 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 22:11:58.428784 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 22:11:58.467787 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 22:11:58.467929 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 22:11:58.472146 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 22:11:58.474162 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 22:11:58.476338 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 22:11:58.481073 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 22:11:58.551708 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:11:58.570839 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 22:11:58.679914 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:11:58.688894 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:11:58.693989 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 22:11:58.698694 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 22:11:58.698888 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:11:58.702916 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 22:11:58.705670 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 22:11:58.710356 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 22:11:58.721044 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:11:58.729083 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 22:11:58.736332 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 22:11:58.746136 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:11:58.756772 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 22:11:58.770426 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 22:11:58.775989 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 22:11:58.786294 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 22:11:58.786626 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:11:58.795938 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:11:58.813155 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:11:58.828080 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 22:11:58.830350 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:11:58.836850 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 22:11:58.841449 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:11:58.852599 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 22:11:58.852813 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:11:58.858581 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 22:11:58.858738 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 22:11:58.913928 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 22:11:58.927071 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 22:11:58.936472 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 22:11:58.939193 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:11:58.996192 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 22:11:59.000984 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:11:59.021574 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 22:11:59.021995 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:11:59.055556 ignition[1330]: INFO : Ignition 2.18.0
Aug 5 22:11:59.055556 ignition[1330]: INFO : Stage: umount
Aug 5 22:11:59.055556 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:11:59.055556 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:11:59.055556 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:11:59.068727 ignition[1330]: INFO : PUT result: OK
Aug 5 22:11:59.070032 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 22:11:59.071756 ignition[1330]: INFO : umount: umount passed
Aug 5 22:11:59.071756 ignition[1330]: INFO : Ignition finished successfully
Aug 5 22:11:59.073217 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 22:11:59.073357 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 22:11:59.076304 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 22:11:59.076418 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 22:11:59.077950 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 22:11:59.078037 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 22:11:59.081640 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 5 22:11:59.081714 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 5 22:11:59.088342 systemd[1]: Stopped target network.target - Network.
Aug 5 22:11:59.102837 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 22:11:59.102988 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:11:59.110097 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 22:11:59.112005 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 22:11:59.117036 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:11:59.131095 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 22:11:59.145664 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 22:11:59.148320 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 22:11:59.148752 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:11:59.155918 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 22:11:59.155981 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:11:59.158625 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 22:11:59.158710 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 22:11:59.167224 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 22:11:59.167310 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 22:11:59.193042 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 22:11:59.195181 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 22:11:59.199789 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:11:59.199921 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:11:59.203387 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:11:59.203481 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:11:59.204621 systemd-networkd[1081]: eth0: DHCPv6 lease lost
Aug 5 22:11:59.207023 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 22:11:59.207130 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 22:11:59.210807 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 22:11:59.210875 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:11:59.234981 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 22:11:59.237711 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 22:11:59.238993 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:11:59.239549 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:11:59.240148 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 22:11:59.240272 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 22:11:59.286149 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 22:11:59.286426 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:11:59.297885 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 22:11:59.298110 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:11:59.316665 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 22:11:59.319959 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:11:59.336758 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:11:59.337113 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:11:59.341644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:11:59.341723 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:11:59.368588 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:11:59.368708 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:11:59.384168 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:11:59.387345 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:11:59.411946 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:11:59.412024 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:11:59.432177 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:11:59.432374 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:11:59.440768 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:11:59.444618 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:11:59.444770 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:11:59.450643 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 22:11:59.450721 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:11:59.461310 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:11:59.461399 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:11:59.469012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:11:59.469088 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:11:59.487008 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:11:59.487237 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:11:59.549377 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:11:59.551260 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:11:59.556973 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:11:59.613853 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:11:59.710049 systemd[1]: Switching root.
Aug 5 22:11:59.777077 systemd-journald[179]: Journal stopped
Aug 5 22:12:04.891781 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:12:04.891898 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:12:04.891926 kernel: SELinux: policy capability open_perms=1
Aug 5 22:12:04.891955 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:12:04.891979 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:12:04.891999 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:12:04.892023 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:12:04.892051 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:12:04.892081 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:12:04.892105 kernel: audit: type=1403 audit(1722895922.186:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:12:04.892130 systemd[1]: Successfully loaded SELinux policy in 134.565ms.
Aug 5 22:12:04.892170 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.919ms.
Aug 5 22:12:04.892200 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:12:04.892225 systemd[1]: Detected virtualization amazon.
Aug 5 22:12:04.892250 systemd[1]: Detected architecture x86-64.
Aug 5 22:12:04.892272 systemd[1]: Detected first boot.
Aug 5 22:12:04.892298 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:12:04.892322 zram_generator::config[1390]: No configuration found.
Aug 5 22:12:04.892346 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:12:04.892376 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:12:04.892400 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Aug 5 22:12:04.892428 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:12:04.892454 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:12:04.892487 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:12:04.892941 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:12:04.892967 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:12:04.893001 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:12:04.893029 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:12:04.893055 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:12:04.893079 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:12:04.893103 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:12:04.893129 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:12:04.893152 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:12:04.893177 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:12:04.893198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:12:04.893223 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 5 22:12:04.893253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:12:04.893276 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:12:04.893301 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:12:04.893326 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:12:04.893349 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:12:04.893375 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:12:04.893400 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:12:04.893425 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:12:04.893453 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:12:04.893478 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:12:04.893503 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:12:04.902627 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:12:04.902670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:12:04.902692 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:12:04.902714 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:12:04.902735 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:12:04.902760 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:12:04.902780 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:04.902808 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:12:04.902829 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:12:04.902849 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:12:04.902870 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:12:04.902891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:12:04.902912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:12:04.902934 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:12:04.902952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:12:04.903058 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:12:04.903078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:12:04.903099 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:12:04.903119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:12:04.903140 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:12:04.903160 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 5 22:12:04.903182 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Aug 5 22:12:04.903200 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:12:04.903318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:12:04.903342 kernel: fuse: init (API version 7.39)
Aug 5 22:12:04.903365 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:12:04.903384 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:12:04.903541 kernel: loop: module loaded
Aug 5 22:12:04.903565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:12:04.903585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:04.903605 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:12:04.903623 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:12:04.903647 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:12:04.903667 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:12:04.903688 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:12:04.903709 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:12:04.903729 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:12:04.903747 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:12:04.903764 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:12:04.903781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:12:04.903800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:12:04.911607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:12:04.911645 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:12:04.911670 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:12:04.911692 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:12:04.911714 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:12:04.911739 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:12:04.911769 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:12:04.911792 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:12:04.911815 kernel: ACPI: bus type drm_connector registered
Aug 5 22:12:04.911838 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:12:04.911862 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:12:04.911883 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:12:04.911947 systemd-journald[1490]: Collecting audit messages is disabled.
Aug 5 22:12:04.911993 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:12:04.912016 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:12:04.912039 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:12:04.912061 systemd-journald[1490]: Journal started
Aug 5 22:12:04.912105 systemd-journald[1490]: Runtime Journal (/run/log/journal/ec22068263c0b7e79481193356d4dc5d) is 4.8M, max 38.6M, 33.7M free.
Aug 5 22:12:04.919668 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:12:04.922552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:12:04.937919 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:12:04.936427 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:12:04.938776 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:12:04.944942 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:12:04.989771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:12:05.013803 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:12:05.034729 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:12:05.049626 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:12:05.051572 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Aug 5 22:12:05.051996 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Aug 5 22:12:05.053649 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:12:05.061721 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:12:05.063177 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:12:05.064254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:12:05.078762 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:12:05.082109 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:12:05.097694 systemd-journald[1490]: Time spent on flushing to /var/log/journal/ec22068263c0b7e79481193356d4dc5d is 64.875ms for 954 entries.
Aug 5 22:12:05.097694 systemd-journald[1490]: System Journal (/var/log/journal/ec22068263c0b7e79481193356d4dc5d) is 8.0M, max 195.6M, 187.6M free.
Aug 5 22:12:05.198123 systemd-journald[1490]: Received client request to flush runtime journal.
Aug 5 22:12:05.102734 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:12:05.113229 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:12:05.119911 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:12:05.156828 udevadm[1549]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 5 22:12:05.200684 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:12:05.250475 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:12:05.285953 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:12:05.316116 systemd-tmpfiles[1560]: ACLs are not supported, ignoring.
Aug 5 22:12:05.317004 systemd-tmpfiles[1560]: ACLs are not supported, ignoring.
Aug 5 22:12:05.324660 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:12:07.186349 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:12:07.201787 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:12:07.275501 systemd-udevd[1569]: Using default interface naming scheme 'v255'.
Aug 5 22:12:07.359160 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:12:07.387152 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:12:07.482420 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:12:07.603935 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Aug 5 22:12:07.623040 (udev-worker)[1580]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:12:07.695237 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1578)
Aug 5 22:12:07.697548 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 22:12:07.908549 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 5 22:12:07.908684 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Aug 5 22:12:07.919128 kernel: ACPI: button: Power Button [PWRF]
Aug 5 22:12:07.919162 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Aug 5 22:12:07.919188 kernel: ACPI: button: Sleep Button [SLPF]
Aug 5 22:12:07.953347 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Aug 5 22:12:07.953446 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1584)
Aug 5 22:12:07.992470 systemd-networkd[1575]: lo: Link UP
Aug 5 22:12:07.992483 systemd-networkd[1575]: lo: Gained carrier
Aug 5 22:12:07.996919 systemd-networkd[1575]: Enumeration completed
Aug 5 22:12:07.997657 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:12:08.005070 systemd-networkd[1575]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:12:08.005086 systemd-networkd[1575]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:12:08.008910 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 22:12:08.017461 systemd-networkd[1575]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:12:08.017532 systemd-networkd[1575]: eth0: Link UP
Aug 5 22:12:08.017825 systemd-networkd[1575]: eth0: Gained carrier
Aug 5 22:12:08.017918 systemd-networkd[1575]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:12:08.047053 systemd-networkd[1575]: eth0: DHCPv4 address 172.31.21.119/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 5 22:12:08.209607 kernel: mousedev: PS/2 mouse device common for all mice
Aug 5 22:12:08.231022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:12:08.468094 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 22:12:08.504153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 5 22:12:08.514945 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 22:12:08.897233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:12:08.968225 lvm[1691]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:12:09.041649 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 22:12:09.047581 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:12:09.073839 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 22:12:09.095666 systemd-networkd[1575]: eth0: Gained IPv6LL
Aug 5 22:12:09.116476 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 22:12:09.121427 lvm[1696]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:12:09.168199 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 22:12:09.174112 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:12:09.182233 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:12:09.184511 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:12:09.189026 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:12:09.199564 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:12:09.241071 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:12:09.279134 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:12:09.289573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:12:09.308746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:12:09.317683 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:12:09.332000 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:12:09.343212 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:12:09.365513 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 22:12:09.388557 kernel: loop0: detected capacity change from 0 to 209816
Aug 5 22:12:09.388748 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:12:09.423908 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:12:09.434264 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:12:09.437699 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:12:09.504770 kernel: loop1: detected capacity change from 0 to 80568
Aug 5 22:12:09.668544 kernel: loop2: detected capacity change from 0 to 139904
Aug 5 22:12:09.816549 kernel: loop3: detected capacity change from 0 to 60984
Aug 5 22:12:09.924568 kernel: loop4: detected capacity change from 0 to 209816
Aug 5 22:12:09.939548 kernel: loop5: detected capacity change from 0 to 80568
Aug 5 22:12:09.959551 kernel: loop6: detected capacity change from 0 to 139904
Aug 5 22:12:10.000547 kernel: loop7: detected capacity change from 0 to 60984
Aug 5 22:12:10.028618 (sd-merge)[1720]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Aug 5 22:12:10.030703 (sd-merge)[1720]: Merged extensions into '/usr'.
Aug 5 22:12:10.042288 systemd[1]: Reloading requested from client PID 1706 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:12:10.042312 systemd[1]: Reloading...
Aug 5 22:12:10.228569 zram_generator::config[1751]: No configuration found.
Aug 5 22:12:10.554860 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:12:10.798181 systemd[1]: Reloading finished in 753 ms.
Aug 5 22:12:10.862732 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:12:10.873766 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:12:10.889561 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:12:10.967498 systemd[1]: Reloading requested from client PID 1799 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:12:10.967542 systemd[1]: Reloading...
Aug 5 22:12:10.990405 systemd-tmpfiles[1800]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:12:10.992836 systemd-tmpfiles[1800]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:12:10.997169 systemd-tmpfiles[1800]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:12:11.005628 systemd-tmpfiles[1800]: ACLs are not supported, ignoring.
Aug 5 22:12:11.006415 systemd-tmpfiles[1800]: ACLs are not supported, ignoring.
Aug 5 22:12:11.020309 systemd-tmpfiles[1800]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:12:11.020322 systemd-tmpfiles[1800]: Skipping /boot
Aug 5 22:12:11.090899 systemd-tmpfiles[1800]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:12:11.090924 systemd-tmpfiles[1800]: Skipping /boot
Aug 5 22:12:11.294639 zram_generator::config[1824]: No configuration found.
Aug 5 22:12:11.294777 ldconfig[1702]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:12:11.780066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:12:11.981887 systemd[1]: Reloading finished in 1013 ms.
Aug 5 22:12:12.008168 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:12:12.041586 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:12:12.079762 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:12:12.093055 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:12:12.097778 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:12:12.122381 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:12:12.131789 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:12:12.154107 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:12.154418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:12:12.166110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:12:12.178119 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:12:12.200856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:12:12.202837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:12:12.203027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:12.226762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:12:12.228332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:12:12.256321 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:12:12.259457 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:12:12.261635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:12:12.265871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:12:12.350758 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 22:12:12.356576 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 22:12:12.369234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:12.371110 augenrules[1921]: No rules
Aug 5 22:12:12.369943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:12:12.379100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:12:12.388400 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:12:12.414808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:12:12.421273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:12:12.428738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:12:12.429083 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 22:12:12.443912 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 22:12:12.448351 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:12:12.465289 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:12:12.472712 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 22:12:12.484582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:12:12.504385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:12:12.510794 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:12:12.515885 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:12:12.517916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:12:12.518651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:12:12.522346 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:12:12.522822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:12:12.532034 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:12:12.541972 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:12:12.542269 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:12:12.542305 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 22:12:12.550504 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 22:12:12.554712 systemd-resolved[1890]: Positive Trust Anchors:
Aug 5 22:12:12.554731 systemd-resolved[1890]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:12:12.554792 systemd-resolved[1890]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:12:12.560474 systemd-resolved[1890]: Defaulting to hostname 'linux'.
Aug 5 22:12:12.562406 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:12:12.563766 systemd[1]: Reached target network.target - Network.
Aug 5 22:12:12.570057 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 22:12:12.572145 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:12:12.573603 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:12:12.575208 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 22:12:12.576997 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 22:12:12.578947 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 22:12:12.582338 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 22:12:12.585590 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 22:12:12.588367 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 22:12:12.588414 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:12:12.592680 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:12:12.595197 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 22:12:12.598435 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 22:12:12.601842 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 22:12:12.605789 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 22:12:12.607697 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:12:12.610055 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:12:12.613042 systemd[1]: System is tainted: cgroupsv1
Aug 5 22:12:12.614229 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:12:12.614438 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:12:12.619031 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 22:12:12.636891 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 5 22:12:12.641708 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 22:12:12.658706 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 22:12:12.742841 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 22:12:12.747626 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 22:12:12.761823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:12:12.786755 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 22:12:12.809336 jq[1958]: false
Aug 5 22:12:12.810122 systemd[1]: Started ntpd.service - Network Time Service.
Aug 5 22:12:12.835975 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 22:12:12.852789 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 22:12:12.861301 systemd[1]: Starting setup-oem.service - Setup OEM...
Aug 5 22:12:12.873919 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 22:12:12.897699 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found loop4
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found loop5
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found loop6
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found loop7
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found nvme0n1
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found nvme0n1p1
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found nvme0n1p2
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found nvme0n1p3
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found usr
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found nvme0n1p4
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found nvme0n1p6
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found nvme0n1p7
Aug 5 22:12:12.906538 extend-filesystems[1959]: Found nvme0n1p9
Aug 5 22:12:12.906538 extend-filesystems[1959]: Checking size of /dev/nvme0n1p9
Aug 5 22:12:13.019859 extend-filesystems[1959]: Resized partition /dev/nvme0n1p9
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:12.984 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:12.988 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:12.995 INFO Fetch successful
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:12.995 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:13.006 INFO Fetch successful
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:13.006 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:13.014 INFO Fetch successful
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:13.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:13.021 INFO Fetch successful
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:13.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:13.029 INFO Fetch failed with 404: resource not found
Aug 5 22:12:13.029839 coreos-metadata[1955]: Aug 05 22:12:13.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Aug 5 22:12:12.944304 dbus-daemon[1956]: [system] SELinux support is enabled
Aug 5 22:12:12.943862 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 22:12:13.044690 extend-filesystems[1992]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:28 UTC 2024 (1): Starting
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: ----------------------------------------------------
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: ntp-4 is maintained by Network Time Foundation,
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: corporation. Support and training for ntp-4 are
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: available at https://www.nwtime.org/support
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: ----------------------------------------------------
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: proto: precision = 0.091 usec (-23)
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: basedate set to 2024-07-24
Aug 5 22:12:13.046178 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: gps base set to 2024-07-28 (week 2325)
Aug 5 22:12:13.047079 coreos-metadata[1955]: Aug 05 22:12:13.034 INFO Fetch successful
Aug 5 22:12:13.047079 coreos-metadata[1955]: Aug 05 22:12:13.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Aug 5 22:12:13.047079 coreos-metadata[1955]: Aug 05 22:12:13.035 INFO Fetch successful
Aug 5 22:12:13.047079 coreos-metadata[1955]: Aug 05 22:12:13.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Aug 5 22:12:13.047079 coreos-metadata[1955]: Aug 05 22:12:13.040 INFO Fetch successful
Aug 5 22:12:13.047079 coreos-metadata[1955]: Aug 05 22:12:13.040 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Aug 5 22:12:13.047079 coreos-metadata[1955]: Aug 05 22:12:13.041 INFO Fetch successful
Aug 5 22:12:13.047079 coreos-metadata[1955]: Aug 05 22:12:13.041 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Aug 5 22:12:13.047079 coreos-metadata[1955]: Aug 05 22:12:13.042 INFO Fetch successful
Aug 5 22:12:12.954959 dbus-daemon[1956]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1575 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 5 22:12:12.953273 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 22:12:13.029630 ntpd[1964]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:28 UTC 2024 (1): Starting
Aug 5 22:12:12.974466 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 22:12:13.029660 ntpd[1964]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 22:12:13.057731 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 22:12:13.057731 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 22:12:13.057731 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 22:12:13.057731 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: Listen normally on 3 eth0 172.31.21.119:123
Aug 5 22:12:13.057731 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: Listen normally on 4 lo [::1]:123
Aug 5 22:12:13.057731 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: Listen normally on 5 eth0 [fe80::474:15ff:fe07:7799%2]:123
Aug 5 22:12:13.057731 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: Listening on routing socket on fd #22 for interface updates
Aug 5 22:12:13.063730 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Aug 5 22:12:13.029673 ntpd[1964]: ----------------------------------------------------
Aug 5 22:12:13.029683 ntpd[1964]: ntp-4 is maintained by Network Time Foundation,
Aug 5 22:12:13.029693 ntpd[1964]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 22:12:13.029703 ntpd[1964]: corporation. Support and training for ntp-4 are
Aug 5 22:12:13.029712 ntpd[1964]: available at https://www.nwtime.org/support
Aug 5 22:12:13.029722 ntpd[1964]: ----------------------------------------------------
Aug 5 22:12:13.035332 ntpd[1964]: proto: precision = 0.091 usec (-23)
Aug 5 22:12:13.041839 ntpd[1964]: basedate set to 2024-07-24
Aug 5 22:12:13.041864 ntpd[1964]: gps base set to 2024-07-28 (week 2325)
Aug 5 22:12:13.090672 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:12:13.090672 ntpd[1964]: 5 Aug 22:12:13 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:12:13.086666 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 22:12:13.049127 ntpd[1964]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 22:12:13.049182 ntpd[1964]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 22:12:13.049384 ntpd[1964]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 22:12:13.049421 ntpd[1964]: Listen normally on 3 eth0 172.31.21.119:123
Aug 5 22:12:13.049459 ntpd[1964]: Listen normally on 4 lo [::1]:123
Aug 5 22:12:13.049582 ntpd[1964]: Listen normally on 5 eth0 [fe80::474:15ff:fe07:7799%2]:123
Aug 5 22:12:13.049670 ntpd[1964]: Listening on routing socket on fd #22 for interface updates
Aug 5 22:12:13.088717 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:12:13.088760 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:12:13.099950 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 22:12:13.126756 update_engine[1989]: I0805 22:12:13.125753 1989 main.cc:92] Flatcar Update Engine starting
Aug 5 22:12:13.130309 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 22:12:13.131506 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 22:12:13.141472 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 22:12:13.141929 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 22:12:13.183144 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 22:12:13.183530 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 22:12:13.185442 update_engine[1989]: I0805 22:12:13.159745 1989 update_check_scheduler.cc:74] Next update check in 6m31s
Aug 5 22:12:13.201363 jq[1998]: true
Aug 5 22:12:13.209551 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Aug 5 22:12:13.240023 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 22:12:13.267013 (ntainerd)[2016]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 22:12:13.332500 jq[2014]: true
Aug 5 22:12:13.345610 extend-filesystems[1992]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Aug 5 22:12:13.345610 extend-filesystems[1992]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 22:12:13.345610 extend-filesystems[1992]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Aug 5 22:12:13.365729 extend-filesystems[1959]: Resized filesystem in /dev/nvme0n1p9
Aug 5 22:12:13.363646 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 22:12:13.364007 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 22:12:13.398932 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 5 22:12:13.556785 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 22:12:13.559407 tar[2004]: linux-amd64/helm
Aug 5 22:12:13.565046 dbus-daemon[1956]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 5 22:12:13.579208 systemd[1]: Finished setup-oem.service - Setup OEM.
Aug 5 22:12:13.616665 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Aug 5 22:12:13.635864 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:12:13.636273 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 22:12:13.636325 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 22:12:13.697289 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 5 22:12:13.698373 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 22:12:13.698408 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 22:12:13.735216 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 22:12:13.810248 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 22:12:13.919222 bash[2070]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:12:13.963297 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 22:12:13.985074 systemd[1]: Starting sshkeys.service...
Aug 5 22:12:14.014921 systemd-logind[1985]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 5 22:12:14.014953 systemd-logind[1985]: Watching system buttons on /dev/input/event2 (Sleep Button)
Aug 5 22:12:14.014977 systemd-logind[1985]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 5 22:12:14.026296 systemd-logind[1985]: New seat seat0.
Aug 5 22:12:14.043081 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 22:12:14.131546 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2072)
Aug 5 22:12:14.157140 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 5 22:12:14.171855 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 5 22:12:14.352821 amazon-ssm-agent[2057]: Initializing new seelog logger
Aug 5 22:12:14.379872 amazon-ssm-agent[2057]: New Seelog Logger Creation Complete
Aug 5 22:12:14.379872 amazon-ssm-agent[2057]: 2024/08/05 22:12:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:14.379872 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:14.379872 amazon-ssm-agent[2057]: 2024/08/05 22:12:14 processing appconfig overrides
Aug 5 22:12:14.400619 amazon-ssm-agent[2057]: 2024/08/05 22:12:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:14.400619 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:14.400619 amazon-ssm-agent[2057]: 2024/08/05 22:12:14 processing appconfig overrides
Aug 5 22:12:14.400619 amazon-ssm-agent[2057]: 2024-08-05 22:12:14 INFO Proxy environment variables:
Aug 5 22:12:14.400619 amazon-ssm-agent[2057]: 2024/08/05 22:12:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:14.400619 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:14.402467 sshd_keygen[1999]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 22:12:14.409195 amazon-ssm-agent[2057]: 2024/08/05 22:12:14 processing appconfig overrides
Aug 5 22:12:14.430740 amazon-ssm-agent[2057]: 2024/08/05 22:12:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:14.430740 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:12:14.430740 amazon-ssm-agent[2057]: 2024/08/05 22:12:14 processing appconfig overrides
Aug 5 22:12:14.503210 amazon-ssm-agent[2057]: 2024-08-05 22:12:14 INFO http_proxy:
Aug 5 22:12:14.557793 coreos-metadata[2108]: Aug 05 22:12:14.557 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 5 22:12:14.573691 coreos-metadata[2108]: Aug 05 22:12:14.573 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Aug 5 22:12:14.574774 coreos-metadata[2108]: Aug 05 22:12:14.574 INFO Fetch successful
Aug 5 22:12:14.575002 coreos-metadata[2108]: Aug 05 22:12:14.574 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Aug 5 22:12:14.575937 coreos-metadata[2108]: Aug 05 22:12:14.575 INFO Fetch successful
Aug 5 22:12:14.580253 unknown[2108]: wrote ssh authorized keys file for user: core
Aug 5 22:12:14.609182 amazon-ssm-agent[2057]: 2024-08-05 22:12:14 INFO no_proxy:
Aug 5 22:12:14.645963 locksmithd[2067]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 22:12:14.649654 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 22:12:14.664380 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 22:12:14.694958 dbus-daemon[1956]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 5 22:12:14.702762 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 5 22:12:14.717124 amazon-ssm-agent[2057]: 2024-08-05 22:12:14 INFO https_proxy:
Aug 5 22:12:14.723932 dbus-daemon[1956]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2065 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 5 22:12:14.741035 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 5 22:12:14.753660 update-ssh-keys[2160]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:12:14.777138 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 5 22:12:14.797770 systemd[1]: Finished sshkeys.service.
Aug 5 22:12:14.816587 amazon-ssm-agent[2057]: 2024-08-05 22:12:14 INFO Checking if agent identity type OnPrem can be assumed
Aug 5 22:12:14.837034 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 22:12:14.841794 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 22:12:14.864538 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 22:12:14.884438 polkitd[2182]: Started polkitd version 121
Aug 5 22:12:14.922396 amazon-ssm-agent[2057]: 2024-08-05 22:12:14 INFO Checking if agent identity type EC2 can be assumed
Aug 5 22:12:14.937226 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 22:12:14.951755 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 22:12:14.960238 polkitd[2182]: Loading rules from directory /etc/polkit-1/rules.d
Aug 5 22:12:14.960338 polkitd[2182]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 5 22:12:14.963985 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 5 22:12:14.968240 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 22:12:14.981867 polkitd[2182]: Finished loading, compiling and executing 2 rules
Aug 5 22:12:14.996053 dbus-daemon[1956]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 5 22:12:14.996429 systemd[1]: Started polkit.service - Authorization Manager.
Aug 5 22:12:14.999077 polkitd[2182]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 5 22:12:15.104595 systemd-hostnamed[2065]: Hostname set to (transient)
Aug 5 22:12:15.105455 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO Agent will take identity from EC2
Aug 5 22:12:15.105617 systemd-resolved[1890]: System hostname changed to 'ip-172-31-21-119'.
Aug 5 22:12:15.206782 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 5 22:12:15.228967 containerd[2016]: time="2024-08-05T22:12:15.226907130Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Aug 5 22:12:15.309632 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 5 22:12:15.396544 containerd[2016]: time="2024-08-05T22:12:15.394368525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 22:12:15.396544 containerd[2016]: time="2024-08-05T22:12:15.394469983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:15.397354 containerd[2016]: time="2024-08-05T22:12:15.397293521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:12:15.397354 containerd[2016]: time="2024-08-05T22:12:15.397351905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:15.397759 containerd[2016]: time="2024-08-05T22:12:15.397729016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:12:15.397974 containerd[2016]: time="2024-08-05T22:12:15.397760631Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 22:12:15.398050 containerd[2016]: time="2024-08-05T22:12:15.398028646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:15.398193 containerd[2016]: time="2024-08-05T22:12:15.398135468Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:12:15.398193 containerd[2016]: time="2024-08-05T22:12:15.398157688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:15.398317 containerd[2016]: time="2024-08-05T22:12:15.398296979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:15.398640 containerd[2016]: time="2024-08-05T22:12:15.398616622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:15.398693 containerd[2016]: time="2024-08-05T22:12:15.398647353Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 22:12:15.398693 containerd[2016]: time="2024-08-05T22:12:15.398663146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:12:15.398929 containerd[2016]: time="2024-08-05T22:12:15.398903059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:12:15.398973 containerd[2016]: time="2024-08-05T22:12:15.398931821Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 22:12:15.399103 containerd[2016]: time="2024-08-05T22:12:15.399003479Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 22:12:15.399752 containerd[2016]: time="2024-08-05T22:12:15.399110553Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 22:12:15.409024 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.425138440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.426158133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.426188569Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.427817334Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.427994224Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.428019924Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.428801386Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.429004514Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.429027106Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.429047404Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.429070165Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.429092118Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.429117854Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 22:12:15.429899 containerd[2016]: time="2024-08-05T22:12:15.429137811Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 22:12:15.430487 containerd[2016]: time="2024-08-05T22:12:15.429162125Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 22:12:15.430487 containerd[2016]: time="2024-08-05T22:12:15.429183863Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 22:12:15.430487 containerd[2016]: time="2024-08-05T22:12:15.429204722Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..."
type=io.containerd.service.v1 Aug 5 22:12:15.430487 containerd[2016]: time="2024-08-05T22:12:15.429224662Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 22:12:15.430487 containerd[2016]: time="2024-08-05T22:12:15.429244322Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 22:12:15.430487 containerd[2016]: time="2024-08-05T22:12:15.429357928Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.457912056Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.458006730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.458044139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.458095342Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.458957426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.461501861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.461581496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.461616079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.461646078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.461674177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.461706290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.461733099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.462018379Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:12:15.462264 containerd[2016]: time="2024-08-05T22:12:15.462255587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.465103 containerd[2016]: time="2024-08-05T22:12:15.462291466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.465103 containerd[2016]: time="2024-08-05T22:12:15.462315748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.465103 containerd[2016]: time="2024-08-05T22:12:15.462337953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.465103 containerd[2016]: time="2024-08-05T22:12:15.462361946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.477373 containerd[2016]: time="2024-08-05T22:12:15.462387746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Aug 5 22:12:15.477373 containerd[2016]: time="2024-08-05T22:12:15.475211594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.477373 containerd[2016]: time="2024-08-05T22:12:15.475389392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 5 22:12:15.477680 containerd[2016]: time="2024-08-05T22:12:15.477430148Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:12:15.477680 containerd[2016]: time="2024-08-05T22:12:15.477608022Z" level=info msg="Connect containerd service" Aug 5 22:12:15.477908 containerd[2016]: time="2024-08-05T22:12:15.477681841Z" level=info msg="using legacy CRI server" Aug 5 22:12:15.477908 containerd[2016]: time="2024-08-05T22:12:15.477700401Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:12:15.477908 containerd[2016]: time="2024-08-05T22:12:15.477893567Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:12:15.488013 containerd[2016]: time="2024-08-05T22:12:15.487717775Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:12:15.488013 containerd[2016]: time="2024-08-05T22:12:15.487801030Z" level=info msg="loading plugin 
\"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:12:15.488013 containerd[2016]: time="2024-08-05T22:12:15.487886071Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:12:15.488013 containerd[2016]: time="2024-08-05T22:12:15.487910967Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:12:15.488013 containerd[2016]: time="2024-08-05T22:12:15.487937903Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:12:15.491932 containerd[2016]: time="2024-08-05T22:12:15.491682023Z" level=info msg="Start subscribing containerd event" Aug 5 22:12:15.491932 containerd[2016]: time="2024-08-05T22:12:15.491774027Z" level=info msg="Start recovering state" Aug 5 22:12:15.491932 containerd[2016]: time="2024-08-05T22:12:15.491884960Z" level=info msg="Start event monitor" Aug 5 22:12:15.491932 containerd[2016]: time="2024-08-05T22:12:15.491907484Z" level=info msg="Start snapshots syncer" Aug 5 22:12:15.491932 containerd[2016]: time="2024-08-05T22:12:15.491922151Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:12:15.491932 containerd[2016]: time="2024-08-05T22:12:15.491939665Z" level=info msg="Start streaming server" Aug 5 22:12:15.492258 containerd[2016]: time="2024-08-05T22:12:15.492023111Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:12:15.502364 containerd[2016]: time="2024-08-05T22:12:15.498746182Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:12:15.500836 systemd[1]: Started containerd.service - containerd container runtime. 
Aug 5 22:12:15.509636 containerd[2016]: time="2024-08-05T22:12:15.503575884Z" level=info msg="containerd successfully booted in 0.280314s" Aug 5 22:12:15.509753 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 5 22:12:15.610486 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Aug 5 22:12:15.701895 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [amazon-ssm-agent] Starting Core Agent Aug 5 22:12:15.702319 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 5 22:12:15.702319 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [Registrar] Starting registrar module Aug 5 22:12:15.702319 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 5 22:12:15.702319 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [EC2Identity] EC2 registration was successful. Aug 5 22:12:15.702319 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [CredentialRefresher] credentialRefresher has started Aug 5 22:12:15.702319 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [CredentialRefresher] Starting credentials refresher loop Aug 5 22:12:15.702319 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 5 22:12:15.712762 amazon-ssm-agent[2057]: 2024-08-05 22:12:15 INFO [CredentialRefresher] Next credential rotation will be in 31.941655867016667 minutes Aug 5 22:12:16.067039 tar[2004]: linux-amd64/LICENSE Aug 5 22:12:16.067039 tar[2004]: linux-amd64/README.md Aug 5 22:12:16.083797 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:12:16.681843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:16.683989 systemd[1]: Reached target multi-user.target - Multi-User System. 
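The containerd entries above all share a logfmt-style console format: `time="…" level=… msg="…"`, sometimes with extra `error=` and `type=` fields. A minimal Python sketch for pulling out the level and message (the regex is an illustrative assumption, not containerd's own tooling, and does not handle full logfmt escaping):

```python
import re

# Minimal parser for containerd's logfmt-style console lines, e.g.:
#   time="2024-08-05T22:12:15.503575884Z" level=info msg="containerd successfully booted in 0.280314s"
# Illustrative assumption: msg is the last field on the line.
LINE_RE = re.compile(
    r'time="(?P<time>[^"]+)"\s+level=(?P<level>\w+)\s+msg="?(?P<msg>.*?)"?$'
)

def parse_containerd_line(line):
    """Return {'time', 'level', 'msg'} for a matching line, else None."""
    m = LINE_RE.search(line)
    return m.groupdict() if m else None

sample = ('time="2024-08-05T22:12:15.503575884Z" level=info '
          'msg="containerd successfully booted in 0.280314s"')
rec = parse_containerd_line(sample)
```

A parser like this is enough to grep the boot log for `level=warning`/`level=error` records, such as the devmapper and CNI warnings above.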
Aug 5 22:12:16.686353 systemd[1]: Startup finished in 16.776s (kernel) + 14.635s (userspace) = 31.411s. Aug 5 22:12:16.745568 amazon-ssm-agent[2057]: 2024-08-05 22:12:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 5 22:12:16.833709 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:16.835692 amazon-ssm-agent[2057]: 2024-08-05 22:12:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2253) started Aug 5 22:12:16.935999 amazon-ssm-agent[2057]: 2024-08-05 22:12:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 5 22:12:18.362464 kubelet[2251]: E0805 22:12:18.362377 2251 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:18.366389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:18.366712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:19.214203 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:12:19.230947 systemd[1]: Started sshd@0-172.31.21.119:22-139.178.89.65:35540.service - OpenSSH per-connection server daemon (139.178.89.65:35540). Aug 5 22:12:19.453589 sshd[2275]: Accepted publickey for core from 139.178.89.65 port 35540 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:19.461421 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:19.487983 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
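systemd's `Startup finished` line above breaks boot time into a kernel phase and a userspace phase that sum to the reported total (16.776s + 14.635s = 31.411s). A small sketch that parses the line and checks that arithmetic (the regex format is an assumption matching this log's shape; `systemd-analyze time` reports the same breakdown on a live system):

```python
import re

# Parse systemd's boot-time summary, e.g.:
#   Startup finished in 16.776s (kernel) + 14.635s (userspace) = 31.411s.
# Format assumption: two phases plus a total, all in seconds.
STARTUP_RE = re.compile(
    r"Startup finished in ([\d.]+)s \(kernel\) \+ ([\d.]+)s \(userspace\) = ([\d.]+)s"
)

def parse_startup(line):
    """Return phase timings in seconds, or None if the line doesn't match."""
    m = STARTUP_RE.search(line)
    if not m:
        return None
    kernel, userspace, total = map(float, m.groups())
    return {"kernel": kernel, "userspace": userspace, "total": total}

info = parse_startup(
    "Startup finished in 16.776s (kernel) + 14.635s (userspace) = 31.411s."
)
```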
Aug 5 22:12:19.501400 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:12:19.513714 systemd-logind[1985]: New session 1 of user core. Aug 5 22:12:19.581820 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 22:12:19.608580 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 22:12:19.628071 (systemd)[2281]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:19.856159 systemd[2281]: Queued start job for default target default.target. Aug 5 22:12:19.856720 systemd[2281]: Created slice app.slice - User Application Slice. Aug 5 22:12:19.856760 systemd[2281]: Reached target paths.target - Paths. Aug 5 22:12:19.856782 systemd[2281]: Reached target timers.target - Timers. Aug 5 22:12:19.861660 systemd[2281]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:12:19.873032 systemd[2281]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:12:19.873131 systemd[2281]: Reached target sockets.target - Sockets. Aug 5 22:12:19.873155 systemd[2281]: Reached target basic.target - Basic System. Aug 5 22:12:19.873220 systemd[2281]: Reached target default.target - Main User Target. Aug 5 22:12:19.873262 systemd[2281]: Startup finished in 217ms. Aug 5 22:12:19.873577 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:12:19.882326 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:12:20.463272 systemd[1]: Started sshd@1-172.31.21.119:22-139.178.89.65:35548.service - OpenSSH per-connection server daemon (139.178.89.65:35548). Aug 5 22:12:20.465728 systemd-resolved[1890]: Clock change detected. Flushing caches. 
Aug 5 22:12:20.670249 sshd[2293]: Accepted publickey for core from 139.178.89.65 port 35548 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:20.670905 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:20.695664 systemd-logind[1985]: New session 2 of user core. Aug 5 22:12:20.703225 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 22:12:20.845178 sshd[2293]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:20.854967 systemd[1]: sshd@1-172.31.21.119:22-139.178.89.65:35548.service: Deactivated successfully. Aug 5 22:12:20.871480 systemd-logind[1985]: Session 2 logged out. Waiting for processes to exit. Aug 5 22:12:20.879007 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 22:12:20.912434 systemd[1]: Started sshd@2-172.31.21.119:22-139.178.89.65:43170.service - OpenSSH per-connection server daemon (139.178.89.65:43170). Aug 5 22:12:20.916986 systemd-logind[1985]: Removed session 2. Aug 5 22:12:21.096735 sshd[2301]: Accepted publickey for core from 139.178.89.65 port 43170 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:21.099736 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:21.121987 systemd-logind[1985]: New session 3 of user core. Aug 5 22:12:21.125861 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 22:12:21.255581 sshd[2301]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:21.261125 systemd[1]: sshd@2-172.31.21.119:22-139.178.89.65:43170.service: Deactivated successfully. Aug 5 22:12:21.266806 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 22:12:21.280835 systemd-logind[1985]: Session 3 logged out. Waiting for processes to exit. Aug 5 22:12:21.317834 systemd[1]: Started sshd@3-172.31.21.119:22-139.178.89.65:43186.service - OpenSSH per-connection server daemon (139.178.89.65:43186). 
Aug 5 22:12:21.319028 systemd-logind[1985]: Removed session 3. Aug 5 22:12:21.537239 sshd[2309]: Accepted publickey for core from 139.178.89.65 port 43186 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:21.542136 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:21.555901 systemd-logind[1985]: New session 4 of user core. Aug 5 22:12:21.568912 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:12:21.709639 sshd[2309]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:21.715129 systemd[1]: sshd@3-172.31.21.119:22-139.178.89.65:43186.service: Deactivated successfully. Aug 5 22:12:21.737279 systemd-logind[1985]: Session 4 logged out. Waiting for processes to exit. Aug 5 22:12:21.741238 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 22:12:21.777929 systemd[1]: Started sshd@4-172.31.21.119:22-139.178.89.65:43190.service - OpenSSH per-connection server daemon (139.178.89.65:43190). Aug 5 22:12:21.781258 systemd-logind[1985]: Removed session 4. Aug 5 22:12:21.978247 sshd[2317]: Accepted publickey for core from 139.178.89.65 port 43190 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:21.980022 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:21.993399 systemd-logind[1985]: New session 5 of user core. Aug 5 22:12:22.006902 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 22:12:22.169536 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 22:12:22.169940 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:22.196945 sudo[2321]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:22.222169 sshd[2317]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:22.226888 systemd[1]: sshd@4-172.31.21.119:22-139.178.89.65:43190.service: Deactivated successfully. 
Aug 5 22:12:22.233146 systemd-logind[1985]: Session 5 logged out. Waiting for processes to exit. Aug 5 22:12:22.233354 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 22:12:22.236277 systemd-logind[1985]: Removed session 5. Aug 5 22:12:22.252215 systemd[1]: Started sshd@5-172.31.21.119:22-139.178.89.65:43198.service - OpenSSH per-connection server daemon (139.178.89.65:43198). Aug 5 22:12:22.439519 sshd[2326]: Accepted publickey for core from 139.178.89.65 port 43198 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:22.441637 sshd[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:22.459766 systemd-logind[1985]: New session 6 of user core. Aug 5 22:12:22.466988 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 22:12:22.585622 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:12:22.586909 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:22.602471 sudo[2331]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:22.616334 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:12:22.621536 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:22.676022 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:12:22.708637 auditctl[2334]: No rules Aug 5 22:12:22.709176 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 22:12:22.709532 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:12:22.732971 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:12:22.884612 augenrules[2353]: No rules Aug 5 22:12:22.888174 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Aug 5 22:12:22.907318 sudo[2330]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:22.947781 sshd[2326]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:22.958090 systemd[1]: sshd@5-172.31.21.119:22-139.178.89.65:43198.service: Deactivated successfully. Aug 5 22:12:23.004170 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:12:23.005641 systemd-logind[1985]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:12:23.021841 systemd[1]: Started sshd@6-172.31.21.119:22-139.178.89.65:43214.service - OpenSSH per-connection server daemon (139.178.89.65:43214). Aug 5 22:12:23.023603 systemd-logind[1985]: Removed session 6. Aug 5 22:12:23.244533 sshd[2362]: Accepted publickey for core from 139.178.89.65 port 43214 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:23.245800 sshd[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:23.276086 systemd-logind[1985]: New session 7 of user core. Aug 5 22:12:23.283615 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 22:12:23.400248 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:12:23.400786 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:23.866511 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 22:12:23.885566 (dockerd)[2375]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:12:24.619322 dockerd[2375]: time="2024-08-05T22:12:24.619257148Z" level=info msg="Starting up" Aug 5 22:12:26.470384 dockerd[2375]: time="2024-08-05T22:12:26.470338030Z" level=info msg="Loading containers: start." Aug 5 22:12:26.738444 kernel: Initializing XFRM netlink socket Aug 5 22:12:26.852052 (udev-worker)[2386]: Network interface NamePolicy= disabled on kernel command line. 
Aug 5 22:12:27.037215 systemd-networkd[1575]: docker0: Link UP Aug 5 22:12:27.102965 dockerd[2375]: time="2024-08-05T22:12:27.102837792Z" level=info msg="Loading containers: done." Aug 5 22:12:27.381952 dockerd[2375]: time="2024-08-05T22:12:27.381824837Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:12:27.382138 dockerd[2375]: time="2024-08-05T22:12:27.382096628Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:12:27.382395 dockerd[2375]: time="2024-08-05T22:12:27.382290831Z" level=info msg="Daemon has completed initialization" Aug 5 22:12:27.472453 dockerd[2375]: time="2024-08-05T22:12:27.470907550Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:12:27.472601 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:12:29.034215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 22:12:29.040109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:29.144328 containerd[2016]: time="2024-08-05T22:12:29.144291992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 5 22:12:30.207887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 22:12:30.233049 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:30.489330 kubelet[2520]: E0805 22:12:30.489085 2520 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:30.509923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:30.511605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:30.541581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882707506.mount: Deactivated successfully. Aug 5 22:12:34.900020 containerd[2016]: time="2024-08-05T22:12:34.899964349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:34.902258 containerd[2016]: time="2024-08-05T22:12:34.902196806Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=34527317" Aug 5 22:12:34.905334 containerd[2016]: time="2024-08-05T22:12:34.904059388Z" level=info msg="ImageCreate event name:\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:34.909523 containerd[2016]: time="2024-08-05T22:12:34.909043641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:34.910536 containerd[2016]: time="2024-08-05T22:12:34.910491069Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id 
\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"34524117\" in 5.766018254s" Aug 5 22:12:34.910649 containerd[2016]: time="2024-08-05T22:12:34.910547158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\"" Aug 5 22:12:34.979270 containerd[2016]: time="2024-08-05T22:12:34.979127661Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 5 22:12:38.663560 containerd[2016]: time="2024-08-05T22:12:38.663502879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:38.665733 containerd[2016]: time="2024-08-05T22:12:38.665678230Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=31847067" Aug 5 22:12:38.668678 containerd[2016]: time="2024-08-05T22:12:38.667315901Z" level=info msg="ImageCreate event name:\"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:38.674872 containerd[2016]: time="2024-08-05T22:12:38.674808080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:38.688717 containerd[2016]: time="2024-08-05T22:12:38.688661786Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest 
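The `Pulled image` record above reports both the payload size and the wall-clock time for kube-apiserver: 34,524,117 bytes in 5.766018254s. The log itself reports no rate, but an effective throughput is a one-liner to derive (illustrative sketch using the two numbers from this log):

```python
# kube-apiserver pull above: 34,524,117 bytes in 5.766018254 s.
def pull_rate_mib_per_s(size_bytes, seconds):
    """Effective pull throughput in MiB/s."""
    return size_bytes / seconds / (1024 * 1024)

rate = pull_rate_mib_per_s(34_524_117, 5.766018254)  # roughly 5.7 MiB/s
```

That works out to roughly 5.7 MiB/s, a plausible registry-pull rate for a t3.small's network share.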
\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"33397013\" in 3.709479535s" Aug 5 22:12:38.688717 containerd[2016]: time="2024-08-05T22:12:38.688717881Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\"" Aug 5 22:12:38.728380 containerd[2016]: time="2024-08-05T22:12:38.728343257Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 5 22:12:40.536101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 22:12:40.561604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:41.660232 containerd[2016]: time="2024-08-05T22:12:41.660173711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:41.681649 containerd[2016]: time="2024-08-05T22:12:41.681550534Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=17097295" Aug 5 22:12:41.776382 containerd[2016]: time="2024-08-05T22:12:41.774626134Z" level=info msg="ImageCreate event name:\"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:41.857731 containerd[2016]: time="2024-08-05T22:12:41.857674939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:41.859686 containerd[2016]: time="2024-08-05T22:12:41.858974557Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"18647259\" in 3.130590325s" Aug 5 22:12:41.859686 containerd[2016]: time="2024-08-05T22:12:41.859021296Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\"" Aug 5 22:12:41.928450 containerd[2016]: time="2024-08-05T22:12:41.928313142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 5 22:12:43.473722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:43.487983 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:43.649907 kubelet[2617]: E0805 22:12:43.649848 2617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:43.652639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:43.652870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:44.889382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount567363451.mount: Deactivated successfully. 
Aug 5 22:12:45.550985 containerd[2016]: time="2024-08-05T22:12:45.550924768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:45.552755 containerd[2016]: time="2024-08-05T22:12:45.552558777Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=28303769"
Aug 5 22:12:45.555025 containerd[2016]: time="2024-08-05T22:12:45.554729772Z" level=info msg="ImageCreate event name:\"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:45.571026 containerd[2016]: time="2024-08-05T22:12:45.570948630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:45.572092 containerd[2016]: time="2024-08-05T22:12:45.571625944Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"28302788\" in 3.643000601s"
Aug 5 22:12:45.572092 containerd[2016]: time="2024-08-05T22:12:45.571669989Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\""
Aug 5 22:12:45.587857 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 5 22:12:45.614994 containerd[2016]: time="2024-08-05T22:12:45.614949253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:12:46.248349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880262274.mount: Deactivated successfully.
Aug 5 22:12:46.255249 containerd[2016]: time="2024-08-05T22:12:46.255201235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:46.257073 containerd[2016]: time="2024-08-05T22:12:46.256892519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Aug 5 22:12:46.260435 containerd[2016]: time="2024-08-05T22:12:46.258692495Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:46.262554 containerd[2016]: time="2024-08-05T22:12:46.262514159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:46.263295 containerd[2016]: time="2024-08-05T22:12:46.263258250Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 648.268769ms"
Aug 5 22:12:46.263398 containerd[2016]: time="2024-08-05T22:12:46.263301303Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Aug 5 22:12:46.292131 containerd[2016]: time="2024-08-05T22:12:46.292096567Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 22:12:46.932065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487273892.mount: Deactivated successfully.
Aug 5 22:12:51.333339 containerd[2016]: time="2024-08-05T22:12:51.333273174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:51.336068 containerd[2016]: time="2024-08-05T22:12:51.336005864Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Aug 5 22:12:51.337068 containerd[2016]: time="2024-08-05T22:12:51.337000971Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:51.345727 containerd[2016]: time="2024-08-05T22:12:51.345649007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:51.347638 containerd[2016]: time="2024-08-05T22:12:51.347226171Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.055089067s"
Aug 5 22:12:51.347638 containerd[2016]: time="2024-08-05T22:12:51.347283074Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Aug 5 22:12:51.388801 containerd[2016]: time="2024-08-05T22:12:51.388398959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Aug 5 22:12:52.053395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399989338.mount: Deactivated successfully.
Aug 5 22:12:53.785026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 5 22:12:53.793131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:12:55.065917 containerd[2016]: time="2024-08-05T22:12:55.065856638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:55.124541 containerd[2016]: time="2024-08-05T22:12:55.124453810Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Aug 5 22:12:55.155161 containerd[2016]: time="2024-08-05T22:12:55.155017443Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:55.196062 containerd[2016]: time="2024-08-05T22:12:55.196007288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:12:55.200242 containerd[2016]: time="2024-08-05T22:12:55.197126731Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 3.808583398s"
Aug 5 22:12:55.200242 containerd[2016]: time="2024-08-05T22:12:55.200243984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Aug 5 22:12:56.084837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:12:56.101616 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:12:56.196222 kubelet[2745]: E0805 22:12:56.196167 2745 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:12:56.199403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:12:56.204707 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:12:58.713462 update_engine[1989]: I0805 22:12:58.712625 1989 update_attempter.cc:509] Updating boot flags...
Aug 5 22:12:58.819450 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2806)
Aug 5 22:12:58.933006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:12:58.945923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:12:59.010593 systemd[1]: Reloading requested from client PID 2890 ('systemctl') (unit session-7.scope)...
Aug 5 22:12:59.010614 systemd[1]: Reloading...
Aug 5 22:12:59.129992 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2810)
Aug 5 22:12:59.188440 zram_generator::config[2952]: No configuration found.
Aug 5 22:12:59.545237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:12:59.781581 systemd[1]: Reloading finished in 770 ms.
Aug 5 22:12:59.912185 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 22:12:59.912363 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 22:12:59.912905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:12:59.925940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:13:00.747683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:13:00.751770 (kubelet)[3086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:13:00.866673 kubelet[3086]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:13:00.867910 kubelet[3086]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:13:00.867910 kubelet[3086]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:13:00.867910 kubelet[3086]: I0805 22:13:00.867276 3086 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:13:01.403744 kubelet[3086]: I0805 22:13:01.403696 3086 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 5 22:13:01.403744 kubelet[3086]: I0805 22:13:01.403744 3086 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:13:01.404433 kubelet[3086]: I0805 22:13:01.404380 3086 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 5 22:13:01.461749 kubelet[3086]: I0805 22:13:01.461713 3086 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:13:01.463430 kubelet[3086]: E0805 22:13:01.462980 3086 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:01.495014 kubelet[3086]: I0805 22:13:01.494050 3086 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:13:01.498155 kubelet[3086]: I0805 22:13:01.498101 3086 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:13:01.500699 kubelet[3086]: I0805 22:13:01.500636 3086 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:13:01.500699 kubelet[3086]: I0805 22:13:01.500687 3086 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:13:01.500699 kubelet[3086]: I0805 22:13:01.500701 3086 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:13:01.502273 kubelet[3086]: I0805 22:13:01.502228 3086 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:13:01.512676 kubelet[3086]: W0805 22:13:01.512511 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.21.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-119&limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:01.512901 kubelet[3086]: E0805 22:13:01.512713 3086 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-119&limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:01.517647 kubelet[3086]: I0805 22:13:01.517586 3086 kubelet.go:393] "Attempting to sync node with API server"
Aug 5 22:13:01.517647 kubelet[3086]: I0805 22:13:01.517657 3086 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:13:01.517987 kubelet[3086]: I0805 22:13:01.517743 3086 kubelet.go:309] "Adding apiserver pod source"
Aug 5 22:13:01.517987 kubelet[3086]: I0805 22:13:01.517857 3086 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:13:01.521919 kubelet[3086]: W0805 22:13:01.520212 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.21.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:01.521919 kubelet[3086]: E0805 22:13:01.520297 3086 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:01.521919 kubelet[3086]: I0805 22:13:01.521003 3086 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 22:13:01.544520 kubelet[3086]: W0805 22:13:01.543551 3086 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:13:01.547185 kubelet[3086]: I0805 22:13:01.547151 3086 server.go:1232] "Started kubelet"
Aug 5 22:13:01.551830 kubelet[3086]: I0805 22:13:01.551798 3086 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:13:01.554154 kubelet[3086]: I0805 22:13:01.553794 3086 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 5 22:13:01.554286 kubelet[3086]: I0805 22:13:01.554219 3086 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:13:01.555664 kubelet[3086]: E0805 22:13:01.555498 3086 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-119.17e8f4c455106d2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-119", UID:"ip-172-31-21-119", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-119"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 13, 1, 547121967, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 13, 1, 547121967, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-21-119"}': 'Post "https://172.31.21.119:6443/api/v1/namespaces/default/events": dial tcp 172.31.21.119:6443: connect: connection refused'(may retry after sleeping)
Aug 5 22:13:01.560487 kubelet[3086]: I0805 22:13:01.559521 3086 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 22:13:01.560487 kubelet[3086]: I0805 22:13:01.560389 3086 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:13:01.562771 kubelet[3086]: E0805 22:13:01.562733 3086 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 22:13:01.564516 kubelet[3086]: E0805 22:13:01.564476 3086 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:13:01.569167 kubelet[3086]: I0805 22:13:01.569137 3086 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:13:01.583838 kubelet[3086]: I0805 22:13:01.583049 3086 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:13:01.587213 kubelet[3086]: I0805 22:13:01.584879 3086 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:13:01.588276 kubelet[3086]: W0805 22:13:01.588222 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.21.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:01.589851 kubelet[3086]: E0805 22:13:01.588471 3086 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:01.590178 kubelet[3086]: E0805 22:13:01.590161 3086 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-119?timeout=10s\": dial tcp 172.31.21.119:6443: connect: connection refused" interval="200ms"
Aug 5 22:13:01.706582 kubelet[3086]: I0805 22:13:01.705904 3086 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-119"
Aug 5 22:13:01.706582 kubelet[3086]: E0805 22:13:01.706341 3086 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.119:6443/api/v1/nodes\": dial tcp 172.31.21.119:6443: connect: connection refused" node="ip-172-31-21-119"
Aug 5 22:13:01.709143 kubelet[3086]: I0805 22:13:01.709111 3086 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:13:01.717676 kubelet[3086]: I0805 22:13:01.715533 3086 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:13:01.717676 kubelet[3086]: I0805 22:13:01.715573 3086 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:13:01.717676 kubelet[3086]: I0805 22:13:01.715599 3086 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 22:13:01.717676 kubelet[3086]: E0805 22:13:01.715657 3086 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:13:01.725374 kubelet[3086]: W0805 22:13:01.725332 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.21.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:01.725583 kubelet[3086]: E0805 22:13:01.725380 3086 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:01.792627 kubelet[3086]: E0805 22:13:01.792586 3086 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-119?timeout=10s\": dial tcp 172.31.21.119:6443: connect: connection refused" interval="400ms"
Aug 5 22:13:01.817744 kubelet[3086]: E0805 22:13:01.816312 3086 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 22:13:01.834509 kubelet[3086]: I0805 22:13:01.834474 3086 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:13:01.834509 kubelet[3086]: I0805 22:13:01.834501 3086 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:13:01.834983 kubelet[3086]: I0805 22:13:01.834526 3086 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:13:01.840755 kubelet[3086]: I0805 22:13:01.840606 3086 policy_none.go:49] "None policy: Start"
Aug 5 22:13:01.842885 kubelet[3086]: I0805 22:13:01.842861 3086 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 22:13:01.843019 kubelet[3086]: I0805 22:13:01.842910 3086 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:13:01.861822 kubelet[3086]: I0805 22:13:01.859502 3086 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:13:01.861822 kubelet[3086]: I0805 22:13:01.860351 3086 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:13:01.864694 kubelet[3086]: E0805 22:13:01.864669 3086 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-119\" not found"
Aug 5 22:13:01.910030 kubelet[3086]: I0805 22:13:01.910002 3086 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-119"
Aug 5 22:13:01.911124 kubelet[3086]: E0805 22:13:01.911102 3086 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.119:6443/api/v1/nodes\": dial tcp 172.31.21.119:6443: connect: connection refused" node="ip-172-31-21-119"
Aug 5 22:13:02.017691 kubelet[3086]: I0805 22:13:02.017541 3086 topology_manager.go:215] "Topology Admit Handler" podUID="80ca3e5887888aed2b981ace1b6dd168" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-119"
Aug 5 22:13:02.020661 kubelet[3086]: I0805 22:13:02.020620 3086 topology_manager.go:215] "Topology Admit Handler" podUID="53b9bd58fc37a9ae7c91d131abc23bb7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:02.024920 kubelet[3086]: I0805 22:13:02.024615 3086 topology_manager.go:215] "Topology Admit Handler" podUID="a7c20303a4ea60faa0d18fa8f263fe2f" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-119"
Aug 5 22:13:02.089223 kubelet[3086]: I0805 22:13:02.089180 3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:02.089596 kubelet[3086]: I0805 22:13:02.089242 3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:02.089596 kubelet[3086]: I0805 22:13:02.089272 3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:02.089596 kubelet[3086]: I0805 22:13:02.089299 3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80ca3e5887888aed2b981ace1b6dd168-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-119\" (UID: \"80ca3e5887888aed2b981ace1b6dd168\") " pod="kube-system/kube-apiserver-ip-172-31-21-119"
Aug 5 22:13:02.089596 kubelet[3086]: I0805 22:13:02.089330 3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80ca3e5887888aed2b981ace1b6dd168-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-119\" (UID: \"80ca3e5887888aed2b981ace1b6dd168\") " pod="kube-system/kube-apiserver-ip-172-31-21-119"
Aug 5 22:13:02.089596 kubelet[3086]: I0805 22:13:02.089356 3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:02.089762 kubelet[3086]: I0805 22:13:02.089382 3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:02.089762 kubelet[3086]: I0805 22:13:02.089427 3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7c20303a4ea60faa0d18fa8f263fe2f-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-119\" (UID: \"a7c20303a4ea60faa0d18fa8f263fe2f\") " pod="kube-system/kube-scheduler-ip-172-31-21-119"
Aug 5 22:13:02.089762 kubelet[3086]: I0805 22:13:02.089464 3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ca3e5887888aed2b981ace1b6dd168-ca-certs\") pod \"kube-apiserver-ip-172-31-21-119\" (UID: \"80ca3e5887888aed2b981ace1b6dd168\") " pod="kube-system/kube-apiserver-ip-172-31-21-119"
Aug 5 22:13:02.199589 kubelet[3086]: E0805 22:13:02.195835 3086 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-119?timeout=10s\": dial tcp 172.31.21.119:6443: connect: connection refused" interval="800ms"
Aug 5 22:13:02.314038 kubelet[3086]: I0805 22:13:02.314002 3086 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-119"
Aug 5 22:13:02.314489 kubelet[3086]: E0805 22:13:02.314461 3086 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.119:6443/api/v1/nodes\": dial tcp 172.31.21.119:6443: connect: connection refused" node="ip-172-31-21-119"
Aug 5 22:13:02.330817 containerd[2016]: time="2024-08-05T22:13:02.330767344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-119,Uid:80ca3e5887888aed2b981ace1b6dd168,Namespace:kube-system,Attempt:0,}"
Aug 5 22:13:02.354112 containerd[2016]: time="2024-08-05T22:13:02.354055054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-119,Uid:53b9bd58fc37a9ae7c91d131abc23bb7,Namespace:kube-system,Attempt:0,}"
Aug 5 22:13:02.358270 containerd[2016]: time="2024-08-05T22:13:02.357796224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-119,Uid:a7c20303a4ea60faa0d18fa8f263fe2f,Namespace:kube-system,Attempt:0,}"
Aug 5 22:13:02.653913 kubelet[3086]: W0805 22:13:02.653780 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.21.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:02.654501 kubelet[3086]: E0805 22:13:02.654461 3086 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:02.714589 kubelet[3086]: W0805 22:13:02.714501 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.21.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:02.714589 kubelet[3086]: E0805 22:13:02.714581 3086 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:02.770836 kubelet[3086]: W0805 22:13:02.770571 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.21.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-119&limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:02.770836 kubelet[3086]: E0805 22:13:02.770767 3086 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-119&limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:02.989025 kubelet[3086]: W0805 22:13:02.988323 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.21.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:02.989025 kubelet[3086]: E0805 22:13:02.988401 3086 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused
Aug 5 22:13:03.001052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386833842.mount: Deactivated successfully. Aug 5 22:13:03.003152 kubelet[3086]: E0805 22:13:03.001666 3086 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-119?timeout=10s\": dial tcp 172.31.21.119:6443: connect: connection refused" interval="1.6s" Aug 5 22:13:03.016579 containerd[2016]: time="2024-08-05T22:13:03.016520795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:03.018494 containerd[2016]: time="2024-08-05T22:13:03.018307753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 5 22:13:03.019931 containerd[2016]: time="2024-08-05T22:13:03.019882429Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:03.021817 containerd[2016]: time="2024-08-05T22:13:03.021719434Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:03.023816 containerd[2016]: time="2024-08-05T22:13:03.023762236Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:13:03.026055 containerd[2016]: time="2024-08-05T22:13:03.025969904Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:03.027428 containerd[2016]: time="2024-08-05T22:13:03.027349406Z" 
level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:13:03.037443 containerd[2016]: time="2024-08-05T22:13:03.034940723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:03.039040 containerd[2016]: time="2024-08-05T22:13:03.038983214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 681.057111ms" Aug 5 22:13:03.044451 containerd[2016]: time="2024-08-05T22:13:03.042010400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 687.750475ms" Aug 5 22:13:03.054275 containerd[2016]: time="2024-08-05T22:13:03.054218565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 723.320346ms" Aug 5 22:13:03.120757 kubelet[3086]: I0805 22:13:03.120041 3086 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-119" Aug 5 22:13:03.157869 kubelet[3086]: E0805 22:13:03.121166 3086 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.119:6443/api/v1/nodes\": 
dial tcp 172.31.21.119:6443: connect: connection refused" node="ip-172-31-21-119" Aug 5 22:13:03.509682 kubelet[3086]: E0805 22:13:03.509639 3086 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.119:6443: connect: connection refused Aug 5 22:13:03.752114 containerd[2016]: time="2024-08-05T22:13:03.751749494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:03.752114 containerd[2016]: time="2024-08-05T22:13:03.751833873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:03.752114 containerd[2016]: time="2024-08-05T22:13:03.751872106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:03.752114 containerd[2016]: time="2024-08-05T22:13:03.751894475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:03.762453 containerd[2016]: time="2024-08-05T22:13:03.758946683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:03.762453 containerd[2016]: time="2024-08-05T22:13:03.759017860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:03.762453 containerd[2016]: time="2024-08-05T22:13:03.759054943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:03.762453 containerd[2016]: time="2024-08-05T22:13:03.759076964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:03.766270 containerd[2016]: time="2024-08-05T22:13:03.766097340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:03.767307 containerd[2016]: time="2024-08-05T22:13:03.766225157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:03.769919 containerd[2016]: time="2024-08-05T22:13:03.767353902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:03.769919 containerd[2016]: time="2024-08-05T22:13:03.769507342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:03.931478 containerd[2016]: time="2024-08-05T22:13:03.931429063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-119,Uid:80ca3e5887888aed2b981ace1b6dd168,Namespace:kube-system,Attempt:0,} returns sandbox id \"593bd372686c88d723446cf7039b660ab19374a5d798f44dd25b2a5d9a0ed656\"" Aug 5 22:13:03.960815 containerd[2016]: time="2024-08-05T22:13:03.960772761Z" level=info msg="CreateContainer within sandbox \"593bd372686c88d723446cf7039b660ab19374a5d798f44dd25b2a5d9a0ed656\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:13:03.961304 containerd[2016]: time="2024-08-05T22:13:03.961263592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-119,Uid:53b9bd58fc37a9ae7c91d131abc23bb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1afd86d4439cda0b41825f2c0d175a6d682f47e36cd50b6d57e65a77fb41078\"" Aug 5 22:13:03.972074 containerd[2016]: time="2024-08-05T22:13:03.971962371Z" level=info msg="CreateContainer within sandbox \"b1afd86d4439cda0b41825f2c0d175a6d682f47e36cd50b6d57e65a77fb41078\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:13:03.992799 containerd[2016]: time="2024-08-05T22:13:03.992736264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-119,Uid:a7c20303a4ea60faa0d18fa8f263fe2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"78f982455b11f9a92ace336a2708a0fc3c12670ab691ec20a5a40a4787118593\"" Aug 5 22:13:04.003284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020765553.mount: Deactivated successfully. 
Aug 5 22:13:04.007150 containerd[2016]: time="2024-08-05T22:13:04.006967710Z" level=info msg="CreateContainer within sandbox \"78f982455b11f9a92ace336a2708a0fc3c12670ab691ec20a5a40a4787118593\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:13:04.010319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752712350.mount: Deactivated successfully. Aug 5 22:13:04.019605 containerd[2016]: time="2024-08-05T22:13:04.019186809Z" level=info msg="CreateContainer within sandbox \"b1afd86d4439cda0b41825f2c0d175a6d682f47e36cd50b6d57e65a77fb41078\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4475fa79679104b4e6c4566dd4ce0414c762719ca24d691c10090df2a773b9cb\"" Aug 5 22:13:04.031368 containerd[2016]: time="2024-08-05T22:13:04.031332372Z" level=info msg="StartContainer for \"4475fa79679104b4e6c4566dd4ce0414c762719ca24d691c10090df2a773b9cb\"" Aug 5 22:13:04.058591 containerd[2016]: time="2024-08-05T22:13:04.058538178Z" level=info msg="CreateContainer within sandbox \"593bd372686c88d723446cf7039b660ab19374a5d798f44dd25b2a5d9a0ed656\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f497bb9ca248709aec0378594f08d01ad4d9188b54978c1d2c6b4e7147327d03\"" Aug 5 22:13:04.067006 containerd[2016]: time="2024-08-05T22:13:04.066968300Z" level=info msg="StartContainer for \"f497bb9ca248709aec0378594f08d01ad4d9188b54978c1d2c6b4e7147327d03\"" Aug 5 22:13:04.097471 containerd[2016]: time="2024-08-05T22:13:04.097425220Z" level=info msg="CreateContainer within sandbox \"78f982455b11f9a92ace336a2708a0fc3c12670ab691ec20a5a40a4787118593\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eba5e770ba9568dab4686257e83919ca1a1327f9dec76521dcfdd47734653aa5\"" Aug 5 22:13:04.098947 containerd[2016]: time="2024-08-05T22:13:04.098915531Z" level=info msg="StartContainer for \"eba5e770ba9568dab4686257e83919ca1a1327f9dec76521dcfdd47734653aa5\"" Aug 5 22:13:04.257195 
containerd[2016]: time="2024-08-05T22:13:04.256615312Z" level=info msg="StartContainer for \"eba5e770ba9568dab4686257e83919ca1a1327f9dec76521dcfdd47734653aa5\" returns successfully" Aug 5 22:13:04.278465 kubelet[3086]: W0805 22:13:04.278151 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.21.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused Aug 5 22:13:04.278465 kubelet[3086]: E0805 22:13:04.278277 3086 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused Aug 5 22:13:04.313792 containerd[2016]: time="2024-08-05T22:13:04.313629738Z" level=info msg="StartContainer for \"f497bb9ca248709aec0378594f08d01ad4d9188b54978c1d2c6b4e7147327d03\" returns successfully" Aug 5 22:13:04.335435 containerd[2016]: time="2024-08-05T22:13:04.334955759Z" level=info msg="StartContainer for \"4475fa79679104b4e6c4566dd4ce0414c762719ca24d691c10090df2a773b9cb\" returns successfully" Aug 5 22:13:04.602701 kubelet[3086]: E0805 22:13:04.602661 3086 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-119?timeout=10s\": dial tcp 172.31.21.119:6443: connect: connection refused" interval="3.2s" Aug 5 22:13:04.667805 kubelet[3086]: W0805 22:13:04.667719 3086 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.21.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused Aug 5 22:13:04.667805 kubelet[3086]: E0805 22:13:04.667770 3086 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.119:6443: connect: connection refused Aug 5 22:13:04.727751 kubelet[3086]: I0805 22:13:04.727209 3086 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-119" Aug 5 22:13:04.727751 kubelet[3086]: E0805 22:13:04.727727 3086 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.119:6443/api/v1/nodes\": dial tcp 172.31.21.119:6443: connect: connection refused" node="ip-172-31-21-119" Aug 5 22:13:07.932626 kubelet[3086]: I0805 22:13:07.932594 3086 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-119" Aug 5 22:13:08.360602 kubelet[3086]: E0805 22:13:08.360551 3086 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-119\" not found" node="ip-172-31-21-119" Aug 5 22:13:08.439535 kubelet[3086]: I0805 22:13:08.436861 3086 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-21-119" Aug 5 22:13:08.452059 kubelet[3086]: E0805 22:13:08.451936 3086 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-119.17e8f4c455106d2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-119", UID:"ip-172-31-21-119", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-119"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 13, 1, 547121967, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 13, 1, 547121967, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-21-119"}': 'namespaces "default" not found' (will not retry!) Aug 5 22:13:08.522253 kubelet[3086]: I0805 22:13:08.522206 3086 apiserver.go:52] "Watching apiserver" Aug 5 22:13:08.533089 kubelet[3086]: E0805 22:13:08.532200 3086 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-119.17e8f4c45618cc2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-119", UID:"ip-172-31-21-119", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-119"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 13, 1, 564447791, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 13, 1, 564447791, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", 
ReportingInstance:"ip-172-31-21-119"}': 'namespaces "default" not found' (will not retry!) Aug 5 22:13:08.586850 kubelet[3086]: I0805 22:13:08.586789 3086 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:13:11.556982 systemd[1]: Reloading requested from client PID 3363 ('systemctl') (unit session-7.scope)... Aug 5 22:13:11.557012 systemd[1]: Reloading... Aug 5 22:13:11.809499 zram_generator::config[3398]: No configuration found. Aug 5 22:13:11.849909 kubelet[3086]: I0805 22:13:11.849458 3086 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-119" podStartSLOduration=1.846571135 podCreationTimestamp="2024-08-05 22:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:11.846207045 +0000 UTC m=+11.072637083" watchObservedRunningTime="2024-08-05 22:13:11.846571135 +0000 UTC m=+11.073001154" Aug 5 22:13:11.866926 kubelet[3086]: I0805 22:13:11.866670 3086 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-119" podStartSLOduration=1.8666223309999999 podCreationTimestamp="2024-08-05 22:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:11.865946814 +0000 UTC m=+11.092376862" watchObservedRunningTime="2024-08-05 22:13:11.866622331 +0000 UTC m=+11.093052373" Aug 5 22:13:12.435896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:13:12.651137 systemd[1]: Reloading finished in 1093 ms. Aug 5 22:13:12.756456 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 5 22:13:12.779630 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:13:12.780329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:12.792608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:13:13.227893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:13.242028 (kubelet)[3468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:13:13.338926 kubelet[3468]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:13:13.338926 kubelet[3468]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:13:13.338926 kubelet[3468]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 5 22:13:13.340729 kubelet[3468]: I0805 22:13:13.338989 3468 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:13:13.350459 kubelet[3468]: I0805 22:13:13.349757 3468 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 22:13:13.350459 kubelet[3468]: I0805 22:13:13.349785 3468 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:13:13.350459 kubelet[3468]: I0805 22:13:13.349984 3468 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 22:13:13.352854 kubelet[3468]: I0805 22:13:13.352821 3468 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 22:13:13.358328 kubelet[3468]: I0805 22:13:13.357498 3468 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:13:13.367975 kubelet[3468]: I0805 22:13:13.367786 3468 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:13:13.368441 kubelet[3468]: I0805 22:13:13.368391 3468 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:13:13.368644 kubelet[3468]: I0805 22:13:13.368621 3468 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:13:13.368787 kubelet[3468]: I0805 22:13:13.368653 3468 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:13:13.368787 kubelet[3468]: I0805 22:13:13.368667 3468 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:13:13.368787 kubelet[3468]: I0805 
22:13:13.368715 3468 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:13:13.370734 kubelet[3468]: I0805 22:13:13.370710 3468 kubelet.go:393] "Attempting to sync node with API server" Aug 5 22:13:13.370734 kubelet[3468]: I0805 22:13:13.370738 3468 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:13:13.371059 kubelet[3468]: I0805 22:13:13.370769 3468 kubelet.go:309] "Adding apiserver pod source" Aug 5 22:13:13.371059 kubelet[3468]: I0805 22:13:13.370786 3468 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:13:13.378439 kubelet[3468]: I0805 22:13:13.376865 3468 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:13:13.378439 kubelet[3468]: I0805 22:13:13.377624 3468 server.go:1232] "Started kubelet" Aug 5 22:13:13.383509 kubelet[3468]: I0805 22:13:13.382480 3468 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:13:13.413403 kubelet[3468]: I0805 22:13:13.413367 3468 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:13:13.414437 kubelet[3468]: I0805 22:13:13.413876 3468 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 22:13:13.417621 kubelet[3468]: I0805 22:13:13.416600 3468 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:13:13.422282 kubelet[3468]: E0805 22:13:13.422257 3468 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 22:13:13.439977 kubelet[3468]: E0805 22:13:13.434621 3468 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:13:13.439977 kubelet[3468]: I0805 22:13:13.428940 3468 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:13:13.439977 kubelet[3468]: I0805 22:13:13.428974 3468 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:13:13.439977 kubelet[3468]: I0805 22:13:13.435065 3468 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:13:13.439977 kubelet[3468]: I0805 22:13:13.429774 3468 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 22:13:13.475732 kubelet[3468]: I0805 22:13:13.475513 3468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:13:13.478823 kubelet[3468]: I0805 22:13:13.478443 3468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:13:13.478823 kubelet[3468]: I0805 22:13:13.478469 3468 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:13:13.478823 kubelet[3468]: I0805 22:13:13.478491 3468 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 22:13:13.478823 kubelet[3468]: E0805 22:13:13.478553 3468 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:13:13.543359 kubelet[3468]: I0805 22:13:13.542611 3468 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-119"
Aug 5 22:13:13.558623 kubelet[3468]: I0805 22:13:13.558595 3468 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-21-119"
Aug 5 22:13:13.558838 kubelet[3468]: I0805 22:13:13.558826 3468 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-21-119"
Aug 5 22:13:13.578760 kubelet[3468]: E0805 22:13:13.578723 3468 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 22:13:13.653599 kubelet[3468]: I0805 22:13:13.653276 3468 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:13:13.653599 kubelet[3468]: I0805 22:13:13.653306 3468 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:13:13.653599 kubelet[3468]: I0805 22:13:13.653360 3468 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:13:13.655272 kubelet[3468]: I0805 22:13:13.654715 3468 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 5 22:13:13.655272 kubelet[3468]: I0805 22:13:13.654808 3468 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 5 22:13:13.655272 kubelet[3468]: I0805 22:13:13.654820 3468 policy_none.go:49] "None policy: Start"
Aug 5 22:13:13.662586 kubelet[3468]: I0805 22:13:13.660409 3468 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 22:13:13.662586 kubelet[3468]: I0805 22:13:13.660460 3468 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:13:13.662586 kubelet[3468]: I0805 22:13:13.660731 3468 state_mem.go:75] "Updated machine memory state"
Aug 5 22:13:13.663529 kubelet[3468]: I0805 22:13:13.663310 3468 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:13:13.670294 kubelet[3468]: I0805 22:13:13.669134 3468 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:13:13.783693 kubelet[3468]: I0805 22:13:13.779430 3468 topology_manager.go:215] "Topology Admit Handler" podUID="80ca3e5887888aed2b981ace1b6dd168" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-119"
Aug 5 22:13:13.783693 kubelet[3468]: I0805 22:13:13.779544 3468 topology_manager.go:215] "Topology Admit Handler" podUID="53b9bd58fc37a9ae7c91d131abc23bb7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:13.783693 kubelet[3468]: I0805 22:13:13.779591 3468 topology_manager.go:215] "Topology Admit Handler" podUID="a7c20303a4ea60faa0d18fa8f263fe2f" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-119"
Aug 5 22:13:13.790972 kubelet[3468]: E0805 22:13:13.790933 3468 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-21-119\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-119"
Aug 5 22:13:13.799924 kubelet[3468]: E0805 22:13:13.799847 3468 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-21-119\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:13.850457 kubelet[3468]: I0805 22:13:13.847654 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:13.850457 kubelet[3468]: I0805 22:13:13.847720 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:13.850457 kubelet[3468]: I0805 22:13:13.847753 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7c20303a4ea60faa0d18fa8f263fe2f-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-119\" (UID: \"a7c20303a4ea60faa0d18fa8f263fe2f\") " pod="kube-system/kube-scheduler-ip-172-31-21-119"
Aug 5 22:13:13.850457 kubelet[3468]: I0805 22:13:13.847787 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:13.850457 kubelet[3468]: I0805 22:13:13.847822 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80ca3e5887888aed2b981ace1b6dd168-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-119\" (UID: \"80ca3e5887888aed2b981ace1b6dd168\") " pod="kube-system/kube-apiserver-ip-172-31-21-119"
Aug 5 22:13:13.851005 kubelet[3468]: I0805 22:13:13.847873 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:13.854691 kubelet[3468]: I0805 22:13:13.854489 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53b9bd58fc37a9ae7c91d131abc23bb7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-119\" (UID: \"53b9bd58fc37a9ae7c91d131abc23bb7\") " pod="kube-system/kube-controller-manager-ip-172-31-21-119"
Aug 5 22:13:13.854691 kubelet[3468]: I0805 22:13:13.854580 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ca3e5887888aed2b981ace1b6dd168-ca-certs\") pod \"kube-apiserver-ip-172-31-21-119\" (UID: \"80ca3e5887888aed2b981ace1b6dd168\") " pod="kube-system/kube-apiserver-ip-172-31-21-119"
Aug 5 22:13:13.854691 kubelet[3468]: I0805 22:13:13.854629 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80ca3e5887888aed2b981ace1b6dd168-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-119\" (UID: \"80ca3e5887888aed2b981ace1b6dd168\") " pod="kube-system/kube-apiserver-ip-172-31-21-119"
Aug 5 22:13:14.373708 kubelet[3468]: I0805 22:13:14.373193 3468 apiserver.go:52] "Watching apiserver"
Aug 5 22:13:14.435297 kubelet[3468]: I0805 22:13:14.435230 3468 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 22:13:14.542892 kubelet[3468]: E0805 22:13:14.542852 3468 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-21-119\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-119"
Aug 5 22:13:14.696234 kubelet[3468]: I0805 22:13:14.696043 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-119" podStartSLOduration=1.695529655 podCreationTimestamp="2024-08-05 22:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:14.695118627 +0000 UTC m=+1.445246790" watchObservedRunningTime="2024-08-05 22:13:14.695529655 +0000 UTC m=+1.445657821"
Aug 5 22:13:19.906645 sudo[2366]: pam_unix(sudo:session): session closed for user root
Aug 5 22:13:19.929995 sshd[2362]: pam_unix(sshd:session): session closed for user core
Aug 5 22:13:19.935237 systemd[1]: sshd@6-172.31.21.119:22-139.178.89.65:43214.service: Deactivated successfully.
Aug 5 22:13:19.941843 systemd-logind[1985]: Session 7 logged out. Waiting for processes to exit.
Aug 5 22:13:19.943669 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 22:13:19.944985 systemd-logind[1985]: Removed session 7.
Aug 5 22:13:25.154237 kubelet[3468]: I0805 22:13:25.154203 3468 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 5 22:13:25.154869 containerd[2016]: time="2024-08-05T22:13:25.154804186Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 5 22:13:25.156021 kubelet[3468]: I0805 22:13:25.155139 3468 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 22:13:25.491182 kubelet[3468]: I0805 22:13:25.491035 3468 topology_manager.go:215] "Topology Admit Handler" podUID="f104acd0-e154-4b53-bfcd-f47763839f6d" podNamespace="kube-system" podName="kube-proxy-lkk5x"
Aug 5 22:13:25.567165 kubelet[3468]: I0805 22:13:25.565641 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f104acd0-e154-4b53-bfcd-f47763839f6d-xtables-lock\") pod \"kube-proxy-lkk5x\" (UID: \"f104acd0-e154-4b53-bfcd-f47763839f6d\") " pod="kube-system/kube-proxy-lkk5x"
Aug 5 22:13:25.567165 kubelet[3468]: I0805 22:13:25.565716 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frzbw\" (UniqueName: \"kubernetes.io/projected/f104acd0-e154-4b53-bfcd-f47763839f6d-kube-api-access-frzbw\") pod \"kube-proxy-lkk5x\" (UID: \"f104acd0-e154-4b53-bfcd-f47763839f6d\") " pod="kube-system/kube-proxy-lkk5x"
Aug 5 22:13:25.567165 kubelet[3468]: I0805 22:13:25.565748 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f104acd0-e154-4b53-bfcd-f47763839f6d-lib-modules\") pod \"kube-proxy-lkk5x\" (UID: \"f104acd0-e154-4b53-bfcd-f47763839f6d\") " pod="kube-system/kube-proxy-lkk5x"
Aug 5 22:13:25.567165 kubelet[3468]: I0805 22:13:25.565778 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f104acd0-e154-4b53-bfcd-f47763839f6d-kube-proxy\") pod \"kube-proxy-lkk5x\" (UID: \"f104acd0-e154-4b53-bfcd-f47763839f6d\") " pod="kube-system/kube-proxy-lkk5x"
Aug 5 22:13:25.723033 kubelet[3468]: E0805 22:13:25.722991 3468 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 5 22:13:25.723033 kubelet[3468]: E0805 22:13:25.723033 3468 projected.go:198] Error preparing data for projected volume kube-api-access-frzbw for pod kube-system/kube-proxy-lkk5x: configmap "kube-root-ca.crt" not found
Aug 5 22:13:25.727433 kubelet[3468]: E0805 22:13:25.724980 3468 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f104acd0-e154-4b53-bfcd-f47763839f6d-kube-api-access-frzbw podName:f104acd0-e154-4b53-bfcd-f47763839f6d nodeName:}" failed. No retries permitted until 2024-08-05 22:13:26.223076474 +0000 UTC m=+12.973204625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-frzbw" (UniqueName: "kubernetes.io/projected/f104acd0-e154-4b53-bfcd-f47763839f6d-kube-api-access-frzbw") pod "kube-proxy-lkk5x" (UID: "f104acd0-e154-4b53-bfcd-f47763839f6d") : configmap "kube-root-ca.crt" not found
Aug 5 22:13:26.080153 kubelet[3468]: I0805 22:13:26.077930 3468 topology_manager.go:215] "Topology Admit Handler" podUID="79502562-874a-4990-a81e-bc6e99e2fe95" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-mrjpl"
Aug 5 22:13:26.171523 kubelet[3468]: I0805 22:13:26.171484 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/79502562-874a-4990-a81e-bc6e99e2fe95-var-lib-calico\") pod \"tigera-operator-76c4974c85-mrjpl\" (UID: \"79502562-874a-4990-a81e-bc6e99e2fe95\") " pod="tigera-operator/tigera-operator-76c4974c85-mrjpl"
Aug 5 22:13:26.172125 kubelet[3468]: I0805 22:13:26.171557 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txnw9\" (UniqueName: \"kubernetes.io/projected/79502562-874a-4990-a81e-bc6e99e2fe95-kube-api-access-txnw9\") pod \"tigera-operator-76c4974c85-mrjpl\" (UID: \"79502562-874a-4990-a81e-bc6e99e2fe95\") " pod="tigera-operator/tigera-operator-76c4974c85-mrjpl"
Aug 5 22:13:26.407903 containerd[2016]: time="2024-08-05T22:13:26.407782967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-mrjpl,Uid:79502562-874a-4990-a81e-bc6e99e2fe95,Namespace:tigera-operator,Attempt:0,}"
Aug 5 22:13:26.425772 containerd[2016]: time="2024-08-05T22:13:26.425718857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lkk5x,Uid:f104acd0-e154-4b53-bfcd-f47763839f6d,Namespace:kube-system,Attempt:0,}"
Aug 5 22:13:26.496793 containerd[2016]: time="2024-08-05T22:13:26.495711905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:13:26.496793 containerd[2016]: time="2024-08-05T22:13:26.495794895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:26.496793 containerd[2016]: time="2024-08-05T22:13:26.495834845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:13:26.496793 containerd[2016]: time="2024-08-05T22:13:26.495855362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:26.522233 containerd[2016]: time="2024-08-05T22:13:26.522072081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:13:26.523008 containerd[2016]: time="2024-08-05T22:13:26.522212492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:26.523008 containerd[2016]: time="2024-08-05T22:13:26.522261351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:13:26.523008 containerd[2016]: time="2024-08-05T22:13:26.522280533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:26.624655 containerd[2016]: time="2024-08-05T22:13:26.623660182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lkk5x,Uid:f104acd0-e154-4b53-bfcd-f47763839f6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"904cd1f709c15498cf74f633bec936da51be5205e9f943c0d26df4a9638af2bb\""
Aug 5 22:13:26.630842 containerd[2016]: time="2024-08-05T22:13:26.630767538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-mrjpl,Uid:79502562-874a-4990-a81e-bc6e99e2fe95,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c2850f0d750a685465e4a4e6c9348b0d6ff655b31bffc62041bab60aa5c13248\""
Aug 5 22:13:26.639562 containerd[2016]: time="2024-08-05T22:13:26.638743821Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 22:13:26.643615 containerd[2016]: time="2024-08-05T22:13:26.641571297Z" level=info msg="CreateContainer within sandbox \"904cd1f709c15498cf74f633bec936da51be5205e9f943c0d26df4a9638af2bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 22:13:26.746963 containerd[2016]: time="2024-08-05T22:13:26.746843772Z" level=info msg="CreateContainer within sandbox \"904cd1f709c15498cf74f633bec936da51be5205e9f943c0d26df4a9638af2bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c38024db43a8cfec2877d97ee290eb1822198f7f98677d369d64594fa8ec1eb2\""
Aug 5 22:13:26.750213 containerd[2016]: time="2024-08-05T22:13:26.748736224Z" level=info msg="StartContainer for \"c38024db43a8cfec2877d97ee290eb1822198f7f98677d369d64594fa8ec1eb2\""
Aug 5 22:13:26.952768 containerd[2016]: time="2024-08-05T22:13:26.952725233Z" level=info msg="StartContainer for \"c38024db43a8cfec2877d97ee290eb1822198f7f98677d369d64594fa8ec1eb2\" returns successfully"
Aug 5 22:13:28.260626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753771034.mount: Deactivated successfully.
Aug 5 22:13:29.446500 containerd[2016]: time="2024-08-05T22:13:29.446442316Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:29.453305 containerd[2016]: time="2024-08-05T22:13:29.453230020Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076060"
Aug 5 22:13:29.457334 containerd[2016]: time="2024-08-05T22:13:29.457249700Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:29.464590 containerd[2016]: time="2024-08-05T22:13:29.464530148Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:13:29.467446 containerd[2016]: time="2024-08-05T22:13:29.466939528Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.828108109s"
Aug 5 22:13:29.467446 containerd[2016]: time="2024-08-05T22:13:29.467021119Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Aug 5 22:13:29.526205 containerd[2016]: time="2024-08-05T22:13:29.525984920Z" level=info msg="CreateContainer within sandbox \"c2850f0d750a685465e4a4e6c9348b0d6ff655b31bffc62041bab60aa5c13248\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 22:13:29.550796 containerd[2016]: time="2024-08-05T22:13:29.550650561Z" level=info msg="CreateContainer within sandbox \"c2850f0d750a685465e4a4e6c9348b0d6ff655b31bffc62041bab60aa5c13248\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"180d7fdecb4ad8633aa7391f8d538ea987a51fe018fb436234790ad34c871559\""
Aug 5 22:13:29.552767 containerd[2016]: time="2024-08-05T22:13:29.552626719Z" level=info msg="StartContainer for \"180d7fdecb4ad8633aa7391f8d538ea987a51fe018fb436234790ad34c871559\""
Aug 5 22:13:29.828100 containerd[2016]: time="2024-08-05T22:13:29.828060544Z" level=info msg="StartContainer for \"180d7fdecb4ad8633aa7391f8d538ea987a51fe018fb436234790ad34c871559\" returns successfully"
Aug 5 22:13:30.746356 kubelet[3468]: I0805 22:13:30.745402 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lkk5x" podStartSLOduration=5.745350136 podCreationTimestamp="2024-08-05 22:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:27.640264888 +0000 UTC m=+14.390393050" watchObservedRunningTime="2024-08-05 22:13:30.745350136 +0000 UTC m=+17.495478297"
Aug 5 22:13:31.119702 systemd-resolved[1890]: Under memory pressure, flushing caches.
Aug 5 22:13:31.119785 systemd-resolved[1890]: Flushed all caches.
Aug 5 22:13:31.121625 systemd-journald[1490]: Under memory pressure, flushing caches.
Aug 5 22:13:33.576810 kubelet[3468]: I0805 22:13:33.576764 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-mrjpl" podStartSLOduration=4.735392212 podCreationTimestamp="2024-08-05 22:13:26 +0000 UTC" firstStartedPulling="2024-08-05 22:13:26.634119178 +0000 UTC m=+13.384247329" lastFinishedPulling="2024-08-05 22:13:29.475430657 +0000 UTC m=+16.225558806" observedRunningTime="2024-08-05 22:13:30.747481977 +0000 UTC m=+17.497610138" watchObservedRunningTime="2024-08-05 22:13:33.576703689 +0000 UTC m=+20.326831852"
Aug 5 22:13:33.677325 kubelet[3468]: I0805 22:13:33.677293 3468 topology_manager.go:215] "Topology Admit Handler" podUID="1baa3152-e243-4913-8e93-5e4b24780a65" podNamespace="calico-system" podName="calico-typha-5665b85d9c-kwb2f"
Aug 5 22:13:33.827731 kubelet[3468]: I0805 22:13:33.819335 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkv4\" (UniqueName: \"kubernetes.io/projected/1baa3152-e243-4913-8e93-5e4b24780a65-kube-api-access-4rkv4\") pod \"calico-typha-5665b85d9c-kwb2f\" (UID: \"1baa3152-e243-4913-8e93-5e4b24780a65\") " pod="calico-system/calico-typha-5665b85d9c-kwb2f"
Aug 5 22:13:33.827731 kubelet[3468]: I0805 22:13:33.819392 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1baa3152-e243-4913-8e93-5e4b24780a65-typha-certs\") pod \"calico-typha-5665b85d9c-kwb2f\" (UID: \"1baa3152-e243-4913-8e93-5e4b24780a65\") " pod="calico-system/calico-typha-5665b85d9c-kwb2f"
Aug 5 22:13:33.827731 kubelet[3468]: I0805 22:13:33.819443 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1baa3152-e243-4913-8e93-5e4b24780a65-tigera-ca-bundle\") pod \"calico-typha-5665b85d9c-kwb2f\" (UID: \"1baa3152-e243-4913-8e93-5e4b24780a65\") " pod="calico-system/calico-typha-5665b85d9c-kwb2f"
Aug 5 22:13:33.896793 kubelet[3468]: I0805 22:13:33.896754 3468 topology_manager.go:215] "Topology Admit Handler" podUID="602f955e-404c-4e38-9081-26550c7fedcd" podNamespace="calico-system" podName="calico-node-pzf2d"
Aug 5 22:13:33.992647 containerd[2016]: time="2024-08-05T22:13:33.992598341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5665b85d9c-kwb2f,Uid:1baa3152-e243-4913-8e93-5e4b24780a65,Namespace:calico-system,Attempt:0,}"
Aug 5 22:13:34.021064 kubelet[3468]: I0805 22:13:34.021021 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/602f955e-404c-4e38-9081-26550c7fedcd-xtables-lock\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.022964 kubelet[3468]: I0805 22:13:34.021159 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/602f955e-404c-4e38-9081-26550c7fedcd-cni-net-dir\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.022964 kubelet[3468]: I0805 22:13:34.021431 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/602f955e-404c-4e38-9081-26550c7fedcd-cni-log-dir\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.022964 kubelet[3468]: I0805 22:13:34.021470 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/602f955e-404c-4e38-9081-26550c7fedcd-lib-modules\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.022964 kubelet[3468]: I0805 22:13:34.021504 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/602f955e-404c-4e38-9081-26550c7fedcd-flexvol-driver-host\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.022964 kubelet[3468]: I0805 22:13:34.021668 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/602f955e-404c-4e38-9081-26550c7fedcd-tigera-ca-bundle\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.023675 kubelet[3468]: I0805 22:13:34.021704 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/602f955e-404c-4e38-9081-26550c7fedcd-var-lib-calico\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.023675 kubelet[3468]: I0805 22:13:34.021738 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xzh\" (UniqueName: \"kubernetes.io/projected/602f955e-404c-4e38-9081-26550c7fedcd-kube-api-access-r8xzh\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.023675 kubelet[3468]: I0805 22:13:34.021766 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/602f955e-404c-4e38-9081-26550c7fedcd-node-certs\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.023675 kubelet[3468]: I0805 22:13:34.021795 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/602f955e-404c-4e38-9081-26550c7fedcd-policysync\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.023675 kubelet[3468]: I0805 22:13:34.021909 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/602f955e-404c-4e38-9081-26550c7fedcd-var-run-calico\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.024346 kubelet[3468]: I0805 22:13:34.022249 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/602f955e-404c-4e38-9081-26550c7fedcd-cni-bin-dir\") pod \"calico-node-pzf2d\" (UID: \"602f955e-404c-4e38-9081-26550c7fedcd\") " pod="calico-system/calico-node-pzf2d"
Aug 5 22:13:34.186996 kubelet[3468]: I0805 22:13:34.186858 3468 topology_manager.go:215] "Topology Admit Handler" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" podNamespace="calico-system" podName="csi-node-driver-pqxtn"
Aug 5 22:13:34.199933 kubelet[3468]: E0805 22:13:34.187171 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.199933 kubelet[3468]: W0805 22:13:34.187193 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.199933 kubelet[3468]: E0805 22:13:34.187229 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.199933 kubelet[3468]: E0805 22:13:34.189865 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc"
Aug 5 22:13:34.213047 kubelet[3468]: E0805 22:13:34.213015 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.217271 kubelet[3468]: W0805 22:13:34.217194 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.218200 kubelet[3468]: E0805 22:13:34.217363 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.222675 kubelet[3468]: E0805 22:13:34.219517 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.222675 kubelet[3468]: W0805 22:13:34.222519 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.222675 kubelet[3468]: E0805 22:13:34.222556 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.236045 kubelet[3468]: E0805 22:13:34.229290 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.236045 kubelet[3468]: W0805 22:13:34.233709 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.236045 kubelet[3468]: E0805 22:13:34.233759 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.245383 containerd[2016]: time="2024-08-05T22:13:34.243460875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:13:34.245383 containerd[2016]: time="2024-08-05T22:13:34.243555176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:34.245383 containerd[2016]: time="2024-08-05T22:13:34.244221690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:13:34.245383 containerd[2016]: time="2024-08-05T22:13:34.244239942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:13:34.257056 kubelet[3468]: E0805 22:13:34.247524 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.257056 kubelet[3468]: W0805 22:13:34.247550 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.258233 kubelet[3468]: E0805 22:13:34.257692 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.268852 kubelet[3468]: E0805 22:13:34.262579 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.268852 kubelet[3468]: W0805 22:13:34.266475 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.269325 kubelet[3468]: E0805 22:13:34.269198 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.298859 kubelet[3468]: E0805 22:13:34.295147 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.298859 kubelet[3468]: W0805 22:13:34.295581 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.298859 kubelet[3468]: E0805 22:13:34.296217 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.307792 kubelet[3468]: E0805 22:13:34.306899 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.307792 kubelet[3468]: W0805 22:13:34.306926 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.307792 kubelet[3468]: E0805 22:13:34.307045 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.311041 kubelet[3468]: E0805 22:13:34.307769 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.311041 kubelet[3468]: W0805 22:13:34.310588 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.311041 kubelet[3468]: E0805 22:13:34.310898 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.311041 kubelet[3468]: W0805 22:13:34.310921 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.311041 kubelet[3468]: E0805 22:13:34.310944 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.321312 kubelet[3468]: E0805 22:13:34.313494 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.321312 kubelet[3468]: W0805 22:13:34.313777 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.321312 kubelet[3468]: E0805 22:13:34.313802 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.321312 kubelet[3468]: E0805 22:13:34.313648 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.323319 kubelet[3468]: E0805 22:13:34.322198 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.323319 kubelet[3468]: W0805 22:13:34.322226 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.323319 kubelet[3468]: E0805 22:13:34.322257 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.351530 kubelet[3468]: E0805 22:13:34.341728 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.351530 kubelet[3468]: W0805 22:13:34.344659 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.351530 kubelet[3468]: E0805 22:13:34.344808 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.352248 kubelet[3468]: E0805 22:13:34.352159 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.352248 kubelet[3468]: W0805 22:13:34.352190 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.357278 kubelet[3468]: E0805 22:13:34.357210 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:13:34.359121 kubelet[3468]: E0805 22:13:34.358715 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:13:34.359121 kubelet[3468]: W0805 22:13:34.358739 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:13:34.359121 kubelet[3468]: E0805 22:13:34.358808 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 5 22:13:34.359491 kubelet[3468]: E0805 22:13:34.359396 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.359491 kubelet[3468]: W0805 22:13:34.359432 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.363907 kubelet[3468]: E0805 22:13:34.359614 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.364098 kubelet[3468]: E0805 22:13:34.364083 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.364181 kubelet[3468]: W0805 22:13:34.364167 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.364455 kubelet[3468]: E0805 22:13:34.364438 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.366881 kubelet[3468]: E0805 22:13:34.366839 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.367239 kubelet[3468]: W0805 22:13:34.367170 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.371383 kubelet[3468]: E0805 22:13:34.367456 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.371956 kubelet[3468]: E0805 22:13:34.371927 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.372161 kubelet[3468]: W0805 22:13:34.372141 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.372323 kubelet[3468]: E0805 22:13:34.372302 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.380548 kubelet[3468]: E0805 22:13:34.380517 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.380548 kubelet[3468]: W0805 22:13:34.380546 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.380548 kubelet[3468]: E0805 22:13:34.380583 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.381692 kubelet[3468]: E0805 22:13:34.381068 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.381692 kubelet[3468]: W0805 22:13:34.381082 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.381692 kubelet[3468]: E0805 22:13:34.381235 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.382942 kubelet[3468]: E0805 22:13:34.382440 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.382942 kubelet[3468]: W0805 22:13:34.382455 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.382942 kubelet[3468]: E0805 22:13:34.382487 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.385975 kubelet[3468]: E0805 22:13:34.383262 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.385975 kubelet[3468]: W0805 22:13:34.383276 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.385975 kubelet[3468]: E0805 22:13:34.383294 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.385975 kubelet[3468]: E0805 22:13:34.385854 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.385975 kubelet[3468]: W0805 22:13:34.385868 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.385975 kubelet[3468]: E0805 22:13:34.385888 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.391921 kubelet[3468]: E0805 22:13:34.391787 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.392900 kubelet[3468]: W0805 22:13:34.392059 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.397252 kubelet[3468]: E0805 22:13:34.395526 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.399781 kubelet[3468]: E0805 22:13:34.399754 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.399781 kubelet[3468]: W0805 22:13:34.399779 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.401559 kubelet[3468]: E0805 22:13:34.401467 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.403788 kubelet[3468]: E0805 22:13:34.403156 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.403788 kubelet[3468]: W0805 22:13:34.403174 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.403788 kubelet[3468]: E0805 22:13:34.403200 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.403788 kubelet[3468]: E0805 22:13:34.403660 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.403788 kubelet[3468]: W0805 22:13:34.403672 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.403788 kubelet[3468]: E0805 22:13:34.403696 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.408567 kubelet[3468]: E0805 22:13:34.408532 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.408567 kubelet[3468]: W0805 22:13:34.408563 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.408838 kubelet[3468]: E0805 22:13:34.408592 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.411901 kubelet[3468]: E0805 22:13:34.411522 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.411901 kubelet[3468]: W0805 22:13:34.411546 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.411901 kubelet[3468]: E0805 22:13:34.411808 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.423280 kubelet[3468]: E0805 22:13:34.423250 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.423280 kubelet[3468]: W0805 22:13:34.423276 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.423513 kubelet[3468]: E0805 22:13:34.423305 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.425104 kubelet[3468]: E0805 22:13:34.424841 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.425104 kubelet[3468]: W0805 22:13:34.424866 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.425104 kubelet[3468]: E0805 22:13:34.424926 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.425980 kubelet[3468]: E0805 22:13:34.425687 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.425980 kubelet[3468]: W0805 22:13:34.425702 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.425980 kubelet[3468]: E0805 22:13:34.425722 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.428824 kubelet[3468]: E0805 22:13:34.428527 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.428824 kubelet[3468]: W0805 22:13:34.428556 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.428824 kubelet[3468]: E0805 22:13:34.428579 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.432187 kubelet[3468]: E0805 22:13:34.432048 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.432187 kubelet[3468]: W0805 22:13:34.432076 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.432187 kubelet[3468]: E0805 22:13:34.432117 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.432187 kubelet[3468]: I0805 22:13:34.432155 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49ea90fb-8427-4eac-8b89-57071ef71ebc-kubelet-dir\") pod \"csi-node-driver-pqxtn\" (UID: \"49ea90fb-8427-4eac-8b89-57071ef71ebc\") " pod="calico-system/csi-node-driver-pqxtn" Aug 5 22:13:34.433810 kubelet[3468]: E0805 22:13:34.433005 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.433810 kubelet[3468]: W0805 22:13:34.433019 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.433810 kubelet[3468]: E0805 22:13:34.433045 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.433810 kubelet[3468]: I0805 22:13:34.433095 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgs8k\" (UniqueName: \"kubernetes.io/projected/49ea90fb-8427-4eac-8b89-57071ef71ebc-kube-api-access-rgs8k\") pod \"csi-node-driver-pqxtn\" (UID: \"49ea90fb-8427-4eac-8b89-57071ef71ebc\") " pod="calico-system/csi-node-driver-pqxtn" Aug 5 22:13:34.434883 kubelet[3468]: E0805 22:13:34.434609 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.434883 kubelet[3468]: W0805 22:13:34.434632 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.434883 kubelet[3468]: E0805 22:13:34.434653 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.434883 kubelet[3468]: I0805 22:13:34.434777 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/49ea90fb-8427-4eac-8b89-57071ef71ebc-registration-dir\") pod \"csi-node-driver-pqxtn\" (UID: \"49ea90fb-8427-4eac-8b89-57071ef71ebc\") " pod="calico-system/csi-node-driver-pqxtn" Aug 5 22:13:34.439012 kubelet[3468]: E0805 22:13:34.438701 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.439012 kubelet[3468]: W0805 22:13:34.438778 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.440380 kubelet[3468]: E0805 22:13:34.440174 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.440380 kubelet[3468]: E0805 22:13:34.440267 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.441326 kubelet[3468]: W0805 22:13:34.440277 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.442080 kubelet[3468]: I0805 22:13:34.441098 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/49ea90fb-8427-4eac-8b89-57071ef71ebc-varrun\") pod \"csi-node-driver-pqxtn\" (UID: \"49ea90fb-8427-4eac-8b89-57071ef71ebc\") " pod="calico-system/csi-node-driver-pqxtn" Aug 5 22:13:34.442080 kubelet[3468]: E0805 22:13:34.441899 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.446492 kubelet[3468]: E0805 22:13:34.445961 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.446492 kubelet[3468]: W0805 22:13:34.446130 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.447723 kubelet[3468]: E0805 22:13:34.447261 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.450771 kubelet[3468]: E0805 22:13:34.450734 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.451045 kubelet[3468]: W0805 22:13:34.451028 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.455970 kubelet[3468]: E0805 22:13:34.455925 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.456907 kubelet[3468]: E0805 22:13:34.456610 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.456907 kubelet[3468]: W0805 22:13:34.456847 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.459274 kubelet[3468]: E0805 22:13:34.458470 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.459274 kubelet[3468]: W0805 22:13:34.458486 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.459942 kubelet[3468]: E0805 22:13:34.459927 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.463716 kubelet[3468]: W0805 22:13:34.463684 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Aug 5 22:13:34.464073 kubelet[3468]: E0805 22:13:34.463960 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.464813 kubelet[3468]: E0805 22:13:34.460744 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.464935 kubelet[3468]: E0805 22:13:34.460761 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.465091 kubelet[3468]: I0805 22:13:34.465064 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/49ea90fb-8427-4eac-8b89-57071ef71ebc-socket-dir\") pod \"csi-node-driver-pqxtn\" (UID: \"49ea90fb-8427-4eac-8b89-57071ef71ebc\") " pod="calico-system/csi-node-driver-pqxtn" Aug 5 22:13:34.465829 kubelet[3468]: E0805 22:13:34.465804 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.465939 kubelet[3468]: W0805 22:13:34.465923 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.466051 kubelet[3468]: E0805 22:13:34.466041 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.466708 kubelet[3468]: E0805 22:13:34.466482 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.466708 kubelet[3468]: W0805 22:13:34.466510 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.466708 kubelet[3468]: E0805 22:13:34.466529 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.469201 kubelet[3468]: E0805 22:13:34.469095 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.469548 kubelet[3468]: W0805 22:13:34.469312 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.469548 kubelet[3468]: E0805 22:13:34.469341 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.470963 kubelet[3468]: E0805 22:13:34.470477 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.470963 kubelet[3468]: W0805 22:13:34.470492 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.470963 kubelet[3468]: E0805 22:13:34.470513 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.473064 kubelet[3468]: E0805 22:13:34.472488 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.473064 kubelet[3468]: W0805 22:13:34.472504 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.473064 kubelet[3468]: E0805 22:13:34.472534 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.516257 containerd[2016]: time="2024-08-05T22:13:34.516062328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pzf2d,Uid:602f955e-404c-4e38-9081-26550c7fedcd,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:34.581290 kubelet[3468]: E0805 22:13:34.581216 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.581290 kubelet[3468]: W0805 22:13:34.581241 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.584194 kubelet[3468]: E0805 22:13:34.582028 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.584194 kubelet[3468]: E0805 22:13:34.582910 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.584194 kubelet[3468]: W0805 22:13:34.582924 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.584194 kubelet[3468]: E0805 22:13:34.582953 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.587484 kubelet[3468]: E0805 22:13:34.586162 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.587484 kubelet[3468]: W0805 22:13:34.586183 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.587484 kubelet[3468]: E0805 22:13:34.586373 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.588947 kubelet[3468]: E0805 22:13:34.588265 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.588947 kubelet[3468]: W0805 22:13:34.588306 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.588947 kubelet[3468]: E0805 22:13:34.588436 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.590565 kubelet[3468]: E0805 22:13:34.590033 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.590565 kubelet[3468]: W0805 22:13:34.590063 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.590565 kubelet[3468]: E0805 22:13:34.590313 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.591138 kubelet[3468]: E0805 22:13:34.590822 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.591138 kubelet[3468]: W0805 22:13:34.590835 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.591138 kubelet[3468]: E0805 22:13:34.591109 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.592343 kubelet[3468]: E0805 22:13:34.591661 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.592343 kubelet[3468]: W0805 22:13:34.591674 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.592343 kubelet[3468]: E0805 22:13:34.592067 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.593082 kubelet[3468]: E0805 22:13:34.592737 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.593082 kubelet[3468]: W0805 22:13:34.592749 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.593375 kubelet[3468]: E0805 22:13:34.593214 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.594268 kubelet[3468]: E0805 22:13:34.593796 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.594268 kubelet[3468]: W0805 22:13:34.593809 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.595093 kubelet[3468]: E0805 22:13:34.594722 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.595093 kubelet[3468]: E0805 22:13:34.594912 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.595093 kubelet[3468]: W0805 22:13:34.594922 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.595866 kubelet[3468]: E0805 22:13:34.595381 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.597884 kubelet[3468]: E0805 22:13:34.596335 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.597884 kubelet[3468]: W0805 22:13:34.596349 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.598839 kubelet[3468]: E0805 22:13:34.598277 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.599285 kubelet[3468]: E0805 22:13:34.599086 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.599285 kubelet[3468]: W0805 22:13:34.599100 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.599728 kubelet[3468]: E0805 22:13:34.599458 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.600921 kubelet[3468]: E0805 22:13:34.600231 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.600921 kubelet[3468]: W0805 22:13:34.600373 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.602052 kubelet[3468]: E0805 22:13:34.601473 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.602052 kubelet[3468]: E0805 22:13:34.601667 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.602052 kubelet[3468]: W0805 22:13:34.601679 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.604430 kubelet[3468]: E0805 22:13:34.602341 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.604430 kubelet[3468]: E0805 22:13:34.604081 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.604430 kubelet[3468]: W0805 22:13:34.604095 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.604430 kubelet[3468]: E0805 22:13:34.604188 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.605084 kubelet[3468]: E0805 22:13:34.604531 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.605084 kubelet[3468]: W0805 22:13:34.604541 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.606955 kubelet[3468]: E0805 22:13:34.606910 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.607393 kubelet[3468]: E0805 22:13:34.607237 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.608900 kubelet[3468]: W0805 22:13:34.608246 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.608900 kubelet[3468]: E0805 22:13:34.608596 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.609157 kubelet[3468]: E0805 22:13:34.609144 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.609483 kubelet[3468]: W0805 22:13:34.609466 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.609813 kubelet[3468]: E0805 22:13:34.609762 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.610744 kubelet[3468]: E0805 22:13:34.610559 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.610744 kubelet[3468]: W0805 22:13:34.610574 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.611268 kubelet[3468]: E0805 22:13:34.610980 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.612932 kubelet[3468]: E0805 22:13:34.611689 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.612932 kubelet[3468]: W0805 22:13:34.611703 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.612932 kubelet[3468]: E0805 22:13:34.612522 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.612932 kubelet[3468]: E0805 22:13:34.612803 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.612932 kubelet[3468]: W0805 22:13:34.612814 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.613395 kubelet[3468]: E0805 22:13:34.613343 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.613650 kubelet[3468]: E0805 22:13:34.613626 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.613743 kubelet[3468]: W0805 22:13:34.613731 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.616236 kubelet[3468]: E0805 22:13:34.614648 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.616236 kubelet[3468]: E0805 22:13:34.616109 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.616236 kubelet[3468]: W0805 22:13:34.616121 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.616577 kubelet[3468]: E0805 22:13:34.616565 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.617056 kubelet[3468]: E0805 22:13:34.616888 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.617056 kubelet[3468]: W0805 22:13:34.616901 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.617056 kubelet[3468]: E0805 22:13:34.617009 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:34.619579 kubelet[3468]: E0805 22:13:34.619510 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.619579 kubelet[3468]: W0805 22:13:34.619525 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.619579 kubelet[3468]: E0805 22:13:34.619544 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.672012 containerd[2016]: time="2024-08-05T22:13:34.671551330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:34.672012 containerd[2016]: time="2024-08-05T22:13:34.671654390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:34.672012 containerd[2016]: time="2024-08-05T22:13:34.671690240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:34.672012 containerd[2016]: time="2024-08-05T22:13:34.671709270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:34.681064 kubelet[3468]: E0805 22:13:34.680710 3468 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:34.681064 kubelet[3468]: W0805 22:13:34.680734 3468 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:34.681064 kubelet[3468]: E0805 22:13:34.680761 3468 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:34.911249 containerd[2016]: time="2024-08-05T22:13:34.910933180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pzf2d,Uid:602f955e-404c-4e38-9081-26550c7fedcd,Namespace:calico-system,Attempt:0,} returns sandbox id \"62759fcc3b85e75be4cef26c13139ec0c884ff03e55e02cb5cdb3cadfd570102\"" Aug 5 22:13:34.917443 containerd[2016]: time="2024-08-05T22:13:34.914279776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5665b85d9c-kwb2f,Uid:1baa3152-e243-4913-8e93-5e4b24780a65,Namespace:calico-system,Attempt:0,} returns sandbox id \"b898f08c6d5f2d00974364114e60b218c5a53117afa47a1eb78887391b9039ce\"" Aug 5 22:13:34.918758 containerd[2016]: time="2024-08-05T22:13:34.918723204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:13:35.510556 kubelet[3468]: E0805 22:13:35.504723 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:37.015266 containerd[2016]: 
time="2024-08-05T22:13:37.014141276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:37.017867 containerd[2016]: time="2024-08-05T22:13:37.017804284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 5 22:13:37.019813 containerd[2016]: time="2024-08-05T22:13:37.019772344Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:37.023237 containerd[2016]: time="2024-08-05T22:13:37.022944090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:37.024816 containerd[2016]: time="2024-08-05T22:13:37.024760742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 2.104405855s" Aug 5 22:13:37.026698 containerd[2016]: time="2024-08-05T22:13:37.025971773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 5 22:13:37.030482 containerd[2016]: time="2024-08-05T22:13:37.030446156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:13:37.034595 containerd[2016]: time="2024-08-05T22:13:37.034554033Z" level=info msg="CreateContainer within sandbox 
\"62759fcc3b85e75be4cef26c13139ec0c884ff03e55e02cb5cdb3cadfd570102\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:13:37.074286 containerd[2016]: time="2024-08-05T22:13:37.071885314Z" level=info msg="CreateContainer within sandbox \"62759fcc3b85e75be4cef26c13139ec0c884ff03e55e02cb5cdb3cadfd570102\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a01a444a7802a1aff41014874a91fc771e95bbaf111d979310f3d21ba051e9a3\"" Aug 5 22:13:37.079980 containerd[2016]: time="2024-08-05T22:13:37.077675269Z" level=info msg="StartContainer for \"a01a444a7802a1aff41014874a91fc771e95bbaf111d979310f3d21ba051e9a3\"" Aug 5 22:13:37.141478 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:13:37.135492 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:13:37.135540 systemd-resolved[1890]: Flushed all caches. Aug 5 22:13:37.313055 containerd[2016]: time="2024-08-05T22:13:37.311858476Z" level=info msg="StartContainer for \"a01a444a7802a1aff41014874a91fc771e95bbaf111d979310f3d21ba051e9a3\" returns successfully" Aug 5 22:13:37.365032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a01a444a7802a1aff41014874a91fc771e95bbaf111d979310f3d21ba051e9a3-rootfs.mount: Deactivated successfully. 
Aug 5 22:13:37.487892 kubelet[3468]: E0805 22:13:37.487849 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:37.732727 containerd[2016]: time="2024-08-05T22:13:37.585472211Z" level=info msg="shim disconnected" id=a01a444a7802a1aff41014874a91fc771e95bbaf111d979310f3d21ba051e9a3 namespace=k8s.io Aug 5 22:13:37.732727 containerd[2016]: time="2024-08-05T22:13:37.732620496Z" level=warning msg="cleaning up after shim disconnected" id=a01a444a7802a1aff41014874a91fc771e95bbaf111d979310f3d21ba051e9a3 namespace=k8s.io Aug 5 22:13:37.732727 containerd[2016]: time="2024-08-05T22:13:37.732643333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:13:37.782476 containerd[2016]: time="2024-08-05T22:13:37.782387448Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:13:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:13:39.480395 kubelet[3468]: E0805 22:13:39.480319 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:41.133514 containerd[2016]: time="2024-08-05T22:13:41.132279860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:41.134989 containerd[2016]: time="2024-08-05T22:13:41.134928115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: 
active requests=0, bytes read=29458030" Aug 5 22:13:41.136925 containerd[2016]: time="2024-08-05T22:13:41.136881327Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:41.142180 containerd[2016]: time="2024-08-05T22:13:41.142067146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:41.143308 containerd[2016]: time="2024-08-05T22:13:41.143152835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.112664971s" Aug 5 22:13:41.143308 containerd[2016]: time="2024-08-05T22:13:41.143198719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 5 22:13:41.144850 containerd[2016]: time="2024-08-05T22:13:41.144625841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 22:13:41.183958 containerd[2016]: time="2024-08-05T22:13:41.183802345Z" level=info msg="CreateContainer within sandbox \"b898f08c6d5f2d00974364114e60b218c5a53117afa47a1eb78887391b9039ce\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:13:41.210621 containerd[2016]: time="2024-08-05T22:13:41.210566401Z" level=info msg="CreateContainer within sandbox \"b898f08c6d5f2d00974364114e60b218c5a53117afa47a1eb78887391b9039ce\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"13b2f0f5faf4cdc7c9d352494281a21ea6491ae8f03ea48d28d7850dcf5d8038\"" Aug 5 22:13:41.211736 containerd[2016]: time="2024-08-05T22:13:41.211568072Z" level=info msg="StartContainer for \"13b2f0f5faf4cdc7c9d352494281a21ea6491ae8f03ea48d28d7850dcf5d8038\"" Aug 5 22:13:41.426317 containerd[2016]: time="2024-08-05T22:13:41.421775617Z" level=info msg="StartContainer for \"13b2f0f5faf4cdc7c9d352494281a21ea6491ae8f03ea48d28d7850dcf5d8038\" returns successfully" Aug 5 22:13:41.479709 kubelet[3468]: E0805 22:13:41.479678 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:41.862950 kubelet[3468]: I0805 22:13:41.862915 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5665b85d9c-kwb2f" podStartSLOduration=2.641030338 podCreationTimestamp="2024-08-05 22:13:33 +0000 UTC" firstStartedPulling="2024-08-05 22:13:34.921801945 +0000 UTC m=+21.671930085" lastFinishedPulling="2024-08-05 22:13:41.143634356 +0000 UTC m=+27.893762497" observedRunningTime="2024-08-05 22:13:41.862553162 +0000 UTC m=+28.612681326" watchObservedRunningTime="2024-08-05 22:13:41.86286275 +0000 UTC m=+28.612990933" Aug 5 22:13:42.855956 kubelet[3468]: I0805 22:13:42.854207 3468 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:13:43.492465 kubelet[3468]: E0805 22:13:43.489938 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:45.480784 kubelet[3468]: E0805 22:13:45.479391 
3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:47.483994 kubelet[3468]: E0805 22:13:47.483941 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:48.770460 containerd[2016]: time="2024-08-05T22:13:48.770389226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:48.771900 containerd[2016]: time="2024-08-05T22:13:48.771703061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Aug 5 22:13:48.802772 containerd[2016]: time="2024-08-05T22:13:48.802630084Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:48.813989 containerd[2016]: time="2024-08-05T22:13:48.813944812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 7.669085106s" Aug 5 22:13:48.814376 containerd[2016]: time="2024-08-05T22:13:48.814351709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference 
\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 5 22:13:48.815126 containerd[2016]: time="2024-08-05T22:13:48.814484923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:48.820798 containerd[2016]: time="2024-08-05T22:13:48.820749974Z" level=info msg="CreateContainer within sandbox \"62759fcc3b85e75be4cef26c13139ec0c884ff03e55e02cb5cdb3cadfd570102\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:13:48.862326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432560651.mount: Deactivated successfully. Aug 5 22:13:48.867096 containerd[2016]: time="2024-08-05T22:13:48.866952084Z" level=info msg="CreateContainer within sandbox \"62759fcc3b85e75be4cef26c13139ec0c884ff03e55e02cb5cdb3cadfd570102\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ecf490280953a338d046f5eea795ea1cd08c7b9ece7cc9cdb970ea1486b50720\"" Aug 5 22:13:48.867828 containerd[2016]: time="2024-08-05T22:13:48.867716264Z" level=info msg="StartContainer for \"ecf490280953a338d046f5eea795ea1cd08c7b9ece7cc9cdb970ea1486b50720\"" Aug 5 22:13:49.267141 containerd[2016]: time="2024-08-05T22:13:49.266983493Z" level=info msg="StartContainer for \"ecf490280953a338d046f5eea795ea1cd08c7b9ece7cc9cdb970ea1486b50720\" returns successfully" Aug 5 22:13:49.480390 kubelet[3468]: E0805 22:13:49.479312 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:51.482914 kubelet[3468]: E0805 22:13:51.479623 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:53.323187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecf490280953a338d046f5eea795ea1cd08c7b9ece7cc9cdb970ea1486b50720-rootfs.mount: Deactivated successfully. Aug 5 22:13:53.325621 containerd[2016]: time="2024-08-05T22:13:53.325555156Z" level=info msg="shim disconnected" id=ecf490280953a338d046f5eea795ea1cd08c7b9ece7cc9cdb970ea1486b50720 namespace=k8s.io Aug 5 22:13:53.325621 containerd[2016]: time="2024-08-05T22:13:53.325617676Z" level=warning msg="cleaning up after shim disconnected" id=ecf490280953a338d046f5eea795ea1cd08c7b9ece7cc9cdb970ea1486b50720 namespace=k8s.io Aug 5 22:13:53.326216 containerd[2016]: time="2024-08-05T22:13:53.325629507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:13:53.332735 kubelet[3468]: I0805 22:13:53.332015 3468 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 5 22:13:53.388279 kubelet[3468]: I0805 22:13:53.388233 3468 topology_manager.go:215] "Topology Admit Handler" podUID="3d90ed18-88fe-4b37-ba5a-a0772610a05d" podNamespace="kube-system" podName="coredns-5dd5756b68-hpqq2" Aug 5 22:13:53.395043 kubelet[3468]: I0805 22:13:53.394963 3468 topology_manager.go:215] "Topology Admit Handler" podUID="02a9cbec-1103-4b28-a2a7-d82304e5ff6d" podNamespace="kube-system" podName="coredns-5dd5756b68-cjhqm" Aug 5 22:13:53.403228 kubelet[3468]: I0805 22:13:53.403178 3468 topology_manager.go:215] "Topology Admit Handler" podUID="e9849bc8-dfda-4d92-a15c-331f7ac59401" podNamespace="calico-system" podName="calico-kube-controllers-75f5fbfb65-fbxpw" Aug 5 22:13:53.471528 kubelet[3468]: I0805 22:13:53.471487 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrsnk\" (UniqueName: 
\"kubernetes.io/projected/3d90ed18-88fe-4b37-ba5a-a0772610a05d-kube-api-access-wrsnk\") pod \"coredns-5dd5756b68-hpqq2\" (UID: \"3d90ed18-88fe-4b37-ba5a-a0772610a05d\") " pod="kube-system/coredns-5dd5756b68-hpqq2" Aug 5 22:13:53.472354 kubelet[3468]: I0805 22:13:53.472332 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d90ed18-88fe-4b37-ba5a-a0772610a05d-config-volume\") pod \"coredns-5dd5756b68-hpqq2\" (UID: \"3d90ed18-88fe-4b37-ba5a-a0772610a05d\") " pod="kube-system/coredns-5dd5756b68-hpqq2" Aug 5 22:13:53.475014 kubelet[3468]: I0805 22:13:53.473106 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgd2v\" (UniqueName: \"kubernetes.io/projected/e9849bc8-dfda-4d92-a15c-331f7ac59401-kube-api-access-vgd2v\") pod \"calico-kube-controllers-75f5fbfb65-fbxpw\" (UID: \"e9849bc8-dfda-4d92-a15c-331f7ac59401\") " pod="calico-system/calico-kube-controllers-75f5fbfb65-fbxpw" Aug 5 22:13:53.475590 kubelet[3468]: I0805 22:13:53.475282 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnsl4\" (UniqueName: \"kubernetes.io/projected/02a9cbec-1103-4b28-a2a7-d82304e5ff6d-kube-api-access-gnsl4\") pod \"coredns-5dd5756b68-cjhqm\" (UID: \"02a9cbec-1103-4b28-a2a7-d82304e5ff6d\") " pod="kube-system/coredns-5dd5756b68-cjhqm" Aug 5 22:13:53.475590 kubelet[3468]: I0805 22:13:53.475534 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9849bc8-dfda-4d92-a15c-331f7ac59401-tigera-ca-bundle\") pod \"calico-kube-controllers-75f5fbfb65-fbxpw\" (UID: \"e9849bc8-dfda-4d92-a15c-331f7ac59401\") " pod="calico-system/calico-kube-controllers-75f5fbfb65-fbxpw" Aug 5 22:13:53.476266 kubelet[3468]: I0805 22:13:53.476019 3468 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02a9cbec-1103-4b28-a2a7-d82304e5ff6d-config-volume\") pod \"coredns-5dd5756b68-cjhqm\" (UID: \"02a9cbec-1103-4b28-a2a7-d82304e5ff6d\") " pod="kube-system/coredns-5dd5756b68-cjhqm" Aug 5 22:13:53.506117 containerd[2016]: time="2024-08-05T22:13:53.506073596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pqxtn,Uid:49ea90fb-8427-4eac-8b89-57071ef71ebc,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:53.734767 containerd[2016]: time="2024-08-05T22:13:53.731669259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cjhqm,Uid:02a9cbec-1103-4b28-a2a7-d82304e5ff6d,Namespace:kube-system,Attempt:0,}" Aug 5 22:13:53.757743 containerd[2016]: time="2024-08-05T22:13:53.753111505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f5fbfb65-fbxpw,Uid:e9849bc8-dfda-4d92-a15c-331f7ac59401,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:53.757743 containerd[2016]: time="2024-08-05T22:13:53.755167529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hpqq2,Uid:3d90ed18-88fe-4b37-ba5a-a0772610a05d,Namespace:kube-system,Attempt:0,}" Aug 5 22:13:53.915490 containerd[2016]: time="2024-08-05T22:13:53.915446890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:13:53.997707 containerd[2016]: time="2024-08-05T22:13:53.997560044Z" level=error msg="Failed to destroy network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.004356 containerd[2016]: time="2024-08-05T22:13:54.004282018Z" level=error msg="encountered an error cleaning up failed sandbox 
\"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.061726 containerd[2016]: time="2024-08-05T22:13:54.061660744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pqxtn,Uid:49ea90fb-8427-4eac-8b89-57071ef71ebc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.062358 kubelet[3468]: E0805 22:13:54.061955 3468 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.062358 kubelet[3468]: E0805 22:13:54.062013 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pqxtn" Aug 5 22:13:54.062358 kubelet[3468]: E0805 22:13:54.062036 3468 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pqxtn" Aug 5 22:13:54.062528 kubelet[3468]: E0805 22:13:54.062090 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pqxtn_calico-system(49ea90fb-8427-4eac-8b89-57071ef71ebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pqxtn_calico-system(49ea90fb-8427-4eac-8b89-57071ef71ebc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:13:54.136687 containerd[2016]: time="2024-08-05T22:13:54.136448180Z" level=error msg="Failed to destroy network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.137327 containerd[2016]: time="2024-08-05T22:13:54.136957541Z" level=error msg="encountered an error cleaning up failed sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.137327 containerd[2016]: time="2024-08-05T22:13:54.137031964Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f5fbfb65-fbxpw,Uid:e9849bc8-dfda-4d92-a15c-331f7ac59401,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.138852 kubelet[3468]: E0805 22:13:54.137857 3468 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.138852 kubelet[3468]: E0805 22:13:54.137911 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f5fbfb65-fbxpw" Aug 5 22:13:54.138852 kubelet[3468]: E0805 22:13:54.137941 3468 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f5fbfb65-fbxpw" Aug 5 22:13:54.139034 kubelet[3468]: E0805 
22:13:54.138002 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75f5fbfb65-fbxpw_calico-system(e9849bc8-dfda-4d92-a15c-331f7ac59401)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75f5fbfb65-fbxpw_calico-system(e9849bc8-dfda-4d92-a15c-331f7ac59401)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75f5fbfb65-fbxpw" podUID="e9849bc8-dfda-4d92-a15c-331f7ac59401" Aug 5 22:13:54.164188 containerd[2016]: time="2024-08-05T22:13:54.163911169Z" level=error msg="Failed to destroy network for sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.165101 containerd[2016]: time="2024-08-05T22:13:54.165048486Z" level=error msg="encountered an error cleaning up failed sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.165317 containerd[2016]: time="2024-08-05T22:13:54.165123498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cjhqm,Uid:02a9cbec-1103-4b28-a2a7-d82304e5ff6d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.166454 kubelet[3468]: E0805 22:13:54.165709 3468 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.166454 kubelet[3468]: E0805 22:13:54.165810 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-cjhqm" Aug 5 22:13:54.166454 kubelet[3468]: E0805 22:13:54.165877 3468 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-cjhqm" Aug 5 22:13:54.169048 kubelet[3468]: E0805 22:13:54.166692 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-cjhqm_kube-system(02a9cbec-1103-4b28-a2a7-d82304e5ff6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-5dd5756b68-cjhqm_kube-system(02a9cbec-1103-4b28-a2a7-d82304e5ff6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-cjhqm" podUID="02a9cbec-1103-4b28-a2a7-d82304e5ff6d" Aug 5 22:13:54.180287 containerd[2016]: time="2024-08-05T22:13:54.180231944Z" level=error msg="Failed to destroy network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.186372 containerd[2016]: time="2024-08-05T22:13:54.180775304Z" level=error msg="encountered an error cleaning up failed sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.186547 containerd[2016]: time="2024-08-05T22:13:54.186452163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hpqq2,Uid:3d90ed18-88fe-4b37-ba5a-a0772610a05d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.187651 kubelet[3468]: E0805 22:13:54.187522 3468 remote_runtime.go:193] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:54.187809 kubelet[3468]: E0805 22:13:54.187669 3468 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-hpqq2" Aug 5 22:13:54.187809 kubelet[3468]: E0805 22:13:54.187715 3468 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-hpqq2" Aug 5 22:13:54.187809 kubelet[3468]: E0805 22:13:54.187796 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-hpqq2_kube-system(3d90ed18-88fe-4b37-ba5a-a0772610a05d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-hpqq2_kube-system(3d90ed18-88fe-4b37-ba5a-a0772610a05d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-hpqq2" podUID="3d90ed18-88fe-4b37-ba5a-a0772610a05d" Aug 5 22:13:54.332170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb-shm.mount: Deactivated successfully. Aug 5 22:13:54.700095 kubelet[3468]: I0805 22:13:54.699599 3468 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:13:54.899101 kubelet[3468]: I0805 22:13:54.899067 3468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Aug 5 22:13:54.903461 kubelet[3468]: I0805 22:13:54.901873 3468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:13:54.906862 kubelet[3468]: I0805 22:13:54.906786 3468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Aug 5 22:13:54.925742 kubelet[3468]: I0805 22:13:54.925697 3468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Aug 5 22:13:54.961019 containerd[2016]: time="2024-08-05T22:13:54.960280050Z" level=info msg="StopPodSandbox for \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\"" Aug 5 22:13:54.961019 containerd[2016]: time="2024-08-05T22:13:54.960570357Z" level=info msg="Ensure that sandbox 23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9 in task-service has been cleanup successfully" Aug 5 22:13:54.967657 containerd[2016]: time="2024-08-05T22:13:54.967592543Z" level=info msg="StopPodSandbox for \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\"" Aug 5 22:13:54.967881 containerd[2016]: time="2024-08-05T22:13:54.967854712Z" level=info msg="Ensure that sandbox 
48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb in task-service has been cleanup successfully" Aug 5 22:13:54.972429 containerd[2016]: time="2024-08-05T22:13:54.972358493Z" level=info msg="StopPodSandbox for \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\"" Aug 5 22:13:54.976493 containerd[2016]: time="2024-08-05T22:13:54.975724344Z" level=info msg="Ensure that sandbox 98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e in task-service has been cleanup successfully" Aug 5 22:13:54.976759 containerd[2016]: time="2024-08-05T22:13:54.973657743Z" level=info msg="StopPodSandbox for \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\"" Aug 5 22:13:54.977852 containerd[2016]: time="2024-08-05T22:13:54.977408653Z" level=info msg="Ensure that sandbox ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072 in task-service has been cleanup successfully" Aug 5 22:13:55.127990 containerd[2016]: time="2024-08-05T22:13:55.127928104Z" level=error msg="StopPodSandbox for \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\" failed" error="failed to destroy network for sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:55.128570 kubelet[3468]: E0805 22:13:55.128542 3468 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:13:55.142451 kubelet[3468]: E0805 
22:13:55.141960 3468 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e"} Aug 5 22:13:55.142451 kubelet[3468]: E0805 22:13:55.142053 3468 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02a9cbec-1103-4b28-a2a7-d82304e5ff6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:13:55.142451 kubelet[3468]: E0805 22:13:55.142113 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02a9cbec-1103-4b28-a2a7-d82304e5ff6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-cjhqm" podUID="02a9cbec-1103-4b28-a2a7-d82304e5ff6d" Aug 5 22:13:55.149158 containerd[2016]: time="2024-08-05T22:13:55.149063903Z" level=error msg="StopPodSandbox for \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\" failed" error="failed to destroy network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:55.153432 kubelet[3468]: E0805 22:13:55.152303 3468 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Aug 5 22:13:55.153432 kubelet[3468]: E0805 22:13:55.152646 3468 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"} Aug 5 22:13:55.153432 kubelet[3468]: E0805 22:13:55.152726 3468 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d90ed18-88fe-4b37-ba5a-a0772610a05d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:13:55.153432 kubelet[3468]: E0805 22:13:55.152770 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d90ed18-88fe-4b37-ba5a-a0772610a05d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-hpqq2" podUID="3d90ed18-88fe-4b37-ba5a-a0772610a05d" Aug 5 22:13:55.155277 containerd[2016]: time="2024-08-05T22:13:55.155161669Z" level=error msg="StopPodSandbox for 
\"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\" failed" error="failed to destroy network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:55.155939 kubelet[3468]: E0805 22:13:55.155918 3468 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Aug 5 22:13:55.156142 kubelet[3468]: E0805 22:13:55.156004 3468 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"} Aug 5 22:13:55.156142 kubelet[3468]: E0805 22:13:55.156140 3468 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9849bc8-dfda-4d92-a15c-331f7ac59401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:13:55.156360 kubelet[3468]: E0805 22:13:55.156183 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9849bc8-dfda-4d92-a15c-331f7ac59401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75f5fbfb65-fbxpw" podUID="e9849bc8-dfda-4d92-a15c-331f7ac59401" Aug 5 22:13:55.157679 containerd[2016]: time="2024-08-05T22:13:55.157639651Z" level=error msg="StopPodSandbox for \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\" failed" error="failed to destroy network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:55.157990 kubelet[3468]: E0805 22:13:55.157957 3468 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Aug 5 22:13:55.158069 kubelet[3468]: E0805 22:13:55.158001 3468 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"} Aug 5 22:13:55.158069 kubelet[3468]: E0805 22:13:55.158053 3468 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49ea90fb-8427-4eac-8b89-57071ef71ebc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:13:55.158909 kubelet[3468]: E0805 22:13:55.158106 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49ea90fb-8427-4eac-8b89-57071ef71ebc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:14:01.312812 systemd[1]: Started sshd@7-172.31.21.119:22-139.178.89.65:43488.service - OpenSSH per-connection server daemon (139.178.89.65:43488). Aug 5 22:14:01.738453 sshd[4402]: Accepted publickey for core from 139.178.89.65 port 43488 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:01.754500 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:01.877728 systemd-logind[1985]: New session 8 of user core. Aug 5 22:14:01.914404 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:14:03.170980 sshd[4402]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:03.209718 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:03.184513 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:03.184578 systemd-resolved[1890]: Flushed all caches. Aug 5 22:14:03.193784 systemd[1]: sshd@7-172.31.21.119:22-139.178.89.65:43488.service: Deactivated successfully. Aug 5 22:14:03.226319 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:14:03.226530 systemd-logind[1985]: Session 8 logged out. 
Waiting for processes to exit. Aug 5 22:14:03.284727 systemd-logind[1985]: Removed session 8. Aug 5 22:14:05.246633 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:05.234770 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:05.235236 systemd-resolved[1890]: Flushed all caches. Aug 5 22:14:06.519040 containerd[2016]: time="2024-08-05T22:14:06.491929498Z" level=info msg="StopPodSandbox for \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\"" Aug 5 22:14:06.544174 containerd[2016]: time="2024-08-05T22:14:06.514143962Z" level=info msg="StopPodSandbox for \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\"" Aug 5 22:14:06.871333 containerd[2016]: time="2024-08-05T22:14:06.870883607Z" level=error msg="StopPodSandbox for \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\" failed" error="failed to destroy network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:06.871517 kubelet[3468]: E0805 22:14:06.871158 3468 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Aug 5 22:14:06.871517 kubelet[3468]: E0805 22:14:06.871208 3468 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"} Aug 5 22:14:06.871517 kubelet[3468]: E0805 
22:14:06.871261 3468 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9849bc8-dfda-4d92-a15c-331f7ac59401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:06.871517 kubelet[3468]: E0805 22:14:06.871301 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9849bc8-dfda-4d92-a15c-331f7ac59401\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75f5fbfb65-fbxpw" podUID="e9849bc8-dfda-4d92-a15c-331f7ac59401" Aug 5 22:14:06.946226 containerd[2016]: time="2024-08-05T22:14:06.944601618Z" level=error msg="StopPodSandbox for \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\" failed" error="failed to destroy network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:06.946380 kubelet[3468]: E0805 22:14:06.946113 3468 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Aug 5 22:14:06.947136 kubelet[3468]: E0805 22:14:06.946599 3468 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"} Aug 5 22:14:06.947136 kubelet[3468]: E0805 22:14:06.946680 3468 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d90ed18-88fe-4b37-ba5a-a0772610a05d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:06.947551 kubelet[3468]: E0805 22:14:06.947374 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d90ed18-88fe-4b37-ba5a-a0772610a05d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-hpqq2" podUID="3d90ed18-88fe-4b37-ba5a-a0772610a05d" Aug 5 22:14:07.486686 containerd[2016]: time="2024-08-05T22:14:07.486096822Z" level=info msg="StopPodSandbox for \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\"" Aug 5 22:14:07.637459 containerd[2016]: time="2024-08-05T22:14:07.636156852Z" level=error msg="StopPodSandbox for 
\"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\" failed" error="failed to destroy network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:07.639009 kubelet[3468]: E0805 22:14:07.638587 3468 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Aug 5 22:14:07.639009 kubelet[3468]: E0805 22:14:07.638639 3468 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"} Aug 5 22:14:07.639009 kubelet[3468]: E0805 22:14:07.638684 3468 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49ea90fb-8427-4eac-8b89-57071ef71ebc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:07.639009 kubelet[3468]: E0805 22:14:07.638720 3468 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49ea90fb-8427-4eac-8b89-57071ef71ebc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pqxtn" podUID="49ea90fb-8427-4eac-8b89-57071ef71ebc" Aug 5 22:14:08.060562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1955253518.mount: Deactivated successfully. Aug 5 22:14:08.207951 systemd[1]: Started sshd@8-172.31.21.119:22-139.178.89.65:43490.service - OpenSSH per-connection server daemon (139.178.89.65:43490). Aug 5 22:14:08.254680 containerd[2016]: time="2024-08-05T22:14:08.254390075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:08.265169 containerd[2016]: time="2024-08-05T22:14:08.265095271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 5 22:14:08.369731 containerd[2016]: time="2024-08-05T22:14:08.369584074Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:08.403271 containerd[2016]: time="2024-08-05T22:14:08.395539312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:08.444047 containerd[2016]: time="2024-08-05T22:14:08.443991016Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 
14.4969598s" Aug 5 22:14:08.444247 containerd[2016]: time="2024-08-05T22:14:08.444227923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 5 22:14:08.554841 sshd[4476]: Accepted publickey for core from 139.178.89.65 port 43490 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:08.608320 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:08.680388 systemd-logind[1985]: New session 9 of user core. Aug 5 22:14:08.689149 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:14:08.737924 containerd[2016]: time="2024-08-05T22:14:08.737648667Z" level=info msg="CreateContainer within sandbox \"62759fcc3b85e75be4cef26c13139ec0c884ff03e55e02cb5cdb3cadfd570102\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:14:08.815482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824862991.mount: Deactivated successfully. Aug 5 22:14:08.850230 containerd[2016]: time="2024-08-05T22:14:08.849796268Z" level=info msg="CreateContainer within sandbox \"62759fcc3b85e75be4cef26c13139ec0c884ff03e55e02cb5cdb3cadfd570102\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ed979259396268972cb7016c17909a1f72bad125c210bc0f3c269e372a9cab7\"" Aug 5 22:14:08.870674 containerd[2016]: time="2024-08-05T22:14:08.870534157Z" level=info msg="StartContainer for \"9ed979259396268972cb7016c17909a1f72bad125c210bc0f3c269e372a9cab7\"" Aug 5 22:14:09.135746 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:09.142561 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:09.135755 systemd-resolved[1890]: Flushed all caches. Aug 5 22:14:09.401194 sshd[4476]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:09.409885 systemd-logind[1985]: Session 9 logged out. Waiting for processes to exit. 
Aug 5 22:14:09.414235 systemd[1]: sshd@8-172.31.21.119:22-139.178.89.65:43490.service: Deactivated successfully. Aug 5 22:14:09.430319 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:14:09.435645 containerd[2016]: time="2024-08-05T22:14:09.435599831Z" level=info msg="StartContainer for \"9ed979259396268972cb7016c17909a1f72bad125c210bc0f3c269e372a9cab7\" returns successfully" Aug 5 22:14:09.437441 systemd-logind[1985]: Removed session 9. Aug 5 22:14:09.770450 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:14:09.772483 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 5 22:14:10.485156 containerd[2016]: time="2024-08-05T22:14:10.482612804Z" level=info msg="StopPodSandbox for \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\"" Aug 5 22:14:10.555446 kubelet[3468]: I0805 22:14:10.553851 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-pzf2d" podStartSLOduration=3.892964316 podCreationTimestamp="2024-08-05 22:13:33 +0000 UTC" firstStartedPulling="2024-08-05 22:13:34.915672435 +0000 UTC m=+21.665800577" lastFinishedPulling="2024-08-05 22:14:08.451335519 +0000 UTC m=+55.201463677" observedRunningTime="2024-08-05 22:14:10.428584583 +0000 UTC m=+57.178712744" watchObservedRunningTime="2024-08-05 22:14:10.428627416 +0000 UTC m=+57.178755577" Aug 5 22:14:11.189697 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:11.188655 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:11.188664 systemd-resolved[1890]: Flushed all caches. Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:10.757 [INFO][4561] k8s.go 608: Cleaning up netns ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:10.758 [INFO][4561] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" iface="eth0" netns="/var/run/netns/cni-7874c959-acba-cb14-98cd-9156b211e722" Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:10.762 [INFO][4561] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" iface="eth0" netns="/var/run/netns/cni-7874c959-acba-cb14-98cd-9156b211e722" Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:10.766 [INFO][4561] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" iface="eth0" netns="/var/run/netns/cni-7874c959-acba-cb14-98cd-9156b211e722" Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:10.766 [INFO][4561] k8s.go 615: Releasing IP address(es) ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:10.766 [INFO][4561] utils.go 188: Calico CNI releasing IP address ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:11.267 [INFO][4595] ipam_plugin.go 411: Releasing address using handleID ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" HandleID="k8s-pod-network.98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:11.269 [INFO][4595] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:11.270 [INFO][4595] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:11.294 [WARNING][4595] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" HandleID="k8s-pod-network.98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:11.294 [INFO][4595] ipam_plugin.go 439: Releasing address using workloadID ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" HandleID="k8s-pod-network.98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:11.299 [INFO][4595] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:11.307781 containerd[2016]: 2024-08-05 22:14:11.304 [INFO][4561] k8s.go 621: Teardown processing complete. ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:11.309396 containerd[2016]: time="2024-08-05T22:14:11.308620940Z" level=info msg="TearDown network for sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\" successfully" Aug 5 22:14:11.309396 containerd[2016]: time="2024-08-05T22:14:11.308666203Z" level=info msg="StopPodSandbox for \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\" returns successfully" Aug 5 22:14:11.310192 containerd[2016]: time="2024-08-05T22:14:11.310136326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cjhqm,Uid:02a9cbec-1103-4b28-a2a7-d82304e5ff6d,Namespace:kube-system,Attempt:1,}" Aug 5 22:14:11.331218 systemd[1]: run-netns-cni\x2d7874c959\x2dacba\x2dcb14\x2d98cd\x2d9156b211e722.mount: Deactivated successfully. Aug 5 22:14:11.764106 (udev-worker)[4534]: Network interface NamePolicy= disabled on kernel command line. 
Aug 5 22:14:11.777787 systemd-networkd[1575]: cali46b133397bc: Link UP Aug 5 22:14:11.782732 systemd-networkd[1575]: cali46b133397bc: Gained carrier Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.533 [INFO][4629] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.561 [INFO][4629] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0 coredns-5dd5756b68- kube-system 02a9cbec-1103-4b28-a2a7-d82304e5ff6d 798 0 2024-08-05 22:13:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-119 coredns-5dd5756b68-cjhqm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46b133397bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Namespace="kube-system" Pod="coredns-5dd5756b68-cjhqm" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.561 [INFO][4629] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Namespace="kube-system" Pod="coredns-5dd5756b68-cjhqm" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.638 [INFO][4641] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" HandleID="k8s-pod-network.a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.663 [INFO][4641] ipam_plugin.go 
264: Auto assigning IP ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" HandleID="k8s-pod-network.a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285cf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-119", "pod":"coredns-5dd5756b68-cjhqm", "timestamp":"2024-08-05 22:14:11.638237709 +0000 UTC"}, Hostname:"ip-172-31-21-119", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.664 [INFO][4641] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.664 [INFO][4641] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.664 [INFO][4641] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-119' Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.668 [INFO][4641] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" host="ip-172-31-21-119" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.699 [INFO][4641] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-119" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.706 [INFO][4641] ipam.go 489: Trying affinity for 192.168.96.64/26 host="ip-172-31-21-119" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.710 [INFO][4641] ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-21-119" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.714 [INFO][4641] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-21-119" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.714 [INFO][4641] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" host="ip-172-31-21-119" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.717 [INFO][4641] ipam.go 1685: Creating new handle: k8s-pod-network.a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.728 [INFO][4641] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" host="ip-172-31-21-119" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.742 [INFO][4641] ipam.go 1216: Successfully claimed IPs: [192.168.96.65/26] block=192.168.96.64/26 
handle="k8s-pod-network.a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" host="ip-172-31-21-119" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.742 [INFO][4641] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.65/26] handle="k8s-pod-network.a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" host="ip-172-31-21-119" Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.742 [INFO][4641] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:11.795926 containerd[2016]: 2024-08-05 22:14:11.743 [INFO][4641] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.96.65/26] IPv6=[] ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" HandleID="k8s-pod-network.a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.817899 containerd[2016]: 2024-08-05 22:14:11.747 [INFO][4629] k8s.go 386: Populated endpoint ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Namespace="kube-system" Pod="coredns-5dd5756b68-cjhqm" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"02a9cbec-1103-4b28-a2a7-d82304e5ff6d", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"", Pod:"coredns-5dd5756b68-cjhqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46b133397bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:11.817899 containerd[2016]: 2024-08-05 22:14:11.747 [INFO][4629] k8s.go 387: Calico CNI using IPs: [192.168.96.65/32] ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Namespace="kube-system" Pod="coredns-5dd5756b68-cjhqm" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.817899 containerd[2016]: 2024-08-05 22:14:11.747 [INFO][4629] dataplane_linux.go 68: Setting the host side veth name to cali46b133397bc ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Namespace="kube-system" Pod="coredns-5dd5756b68-cjhqm" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.817899 containerd[2016]: 2024-08-05 22:14:11.768 [INFO][4629] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Namespace="kube-system" Pod="coredns-5dd5756b68-cjhqm" 
WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.817899 containerd[2016]: 2024-08-05 22:14:11.769 [INFO][4629] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Namespace="kube-system" Pod="coredns-5dd5756b68-cjhqm" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"02a9cbec-1103-4b28-a2a7-d82304e5ff6d", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a", Pod:"coredns-5dd5756b68-cjhqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46b133397bc", MAC:"4e:54:92:e5:bc:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:11.817899 containerd[2016]: 2024-08-05 22:14:11.788 [INFO][4629] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a" Namespace="kube-system" Pod="coredns-5dd5756b68-cjhqm" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:11.900398 containerd[2016]: time="2024-08-05T22:14:11.900192028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:11.900398 containerd[2016]: time="2024-08-05T22:14:11.900259418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:11.900940 containerd[2016]: time="2024-08-05T22:14:11.900288652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:11.900940 containerd[2016]: time="2024-08-05T22:14:11.900705023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:11.975609 systemd[1]: run-containerd-runc-k8s.io-a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a-runc.IrVYXL.mount: Deactivated successfully. 
Aug 5 22:14:12.072931 containerd[2016]: time="2024-08-05T22:14:12.072873191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cjhqm,Uid:02a9cbec-1103-4b28-a2a7-d82304e5ff6d,Namespace:kube-system,Attempt:1,} returns sandbox id \"a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a\"" Aug 5 22:14:12.094827 containerd[2016]: time="2024-08-05T22:14:12.094734543Z" level=info msg="CreateContainer within sandbox \"a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:14:12.153089 containerd[2016]: time="2024-08-05T22:14:12.153031155Z" level=info msg="CreateContainer within sandbox \"a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b36c4e026d196a61854d2d9c89e22b1f75435acf3cf1196d2ebcf2cbdf94ff9f\"" Aug 5 22:14:12.153994 containerd[2016]: time="2024-08-05T22:14:12.153961545Z" level=info msg="StartContainer for \"b36c4e026d196a61854d2d9c89e22b1f75435acf3cf1196d2ebcf2cbdf94ff9f\"" Aug 5 22:14:12.375620 containerd[2016]: time="2024-08-05T22:14:12.373779323Z" level=info msg="StartContainer for \"b36c4e026d196a61854d2d9c89e22b1f75435acf3cf1196d2ebcf2cbdf94ff9f\" returns successfully" Aug 5 22:14:12.695352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976263113.mount: Deactivated successfully. Aug 5 22:14:13.231634 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:13.232766 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:13.231642 systemd-resolved[1890]: Flushed all caches. 
Aug 5 22:14:13.295677 systemd-networkd[1575]: cali46b133397bc: Gained IPv6LL Aug 5 22:14:13.419554 kubelet[3468]: I0805 22:14:13.419499 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cjhqm" podStartSLOduration=47.419227485 podCreationTimestamp="2024-08-05 22:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:14:13.341481509 +0000 UTC m=+60.091609671" watchObservedRunningTime="2024-08-05 22:14:13.419227485 +0000 UTC m=+60.169355645" Aug 5 22:14:13.559579 containerd[2016]: time="2024-08-05T22:14:13.558356321Z" level=info msg="StopPodSandbox for \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\"" Aug 5 22:14:13.905867 systemd-networkd[1575]: vxlan.calico: Link UP Aug 5 22:14:13.906238 systemd-networkd[1575]: vxlan.calico: Gained carrier Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.749 [WARNING][4895] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"02a9cbec-1103-4b28-a2a7-d82304e5ff6d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a", Pod:"coredns-5dd5756b68-cjhqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46b133397bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.751 [INFO][4895] k8s.go 608: Cleaning up 
netns ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.753 [INFO][4895] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" iface="eth0" netns="" Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.753 [INFO][4895] k8s.go 615: Releasing IP address(es) ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.754 [INFO][4895] utils.go 188: Calico CNI releasing IP address ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.877 [INFO][4904] ipam_plugin.go 411: Releasing address using handleID ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" HandleID="k8s-pod-network.98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.877 [INFO][4904] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.877 [INFO][4904] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.908 [WARNING][4904] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" HandleID="k8s-pod-network.98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.908 [INFO][4904] ipam_plugin.go 439: Releasing address using workloadID ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" HandleID="k8s-pod-network.98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.914 [INFO][4904] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:13.923774 containerd[2016]: 2024-08-05 22:14:13.920 [INFO][4895] k8s.go 621: Teardown processing complete. ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:13.923774 containerd[2016]: time="2024-08-05T22:14:13.923649879Z" level=info msg="TearDown network for sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\" successfully" Aug 5 22:14:13.923774 containerd[2016]: time="2024-08-05T22:14:13.923680842Z" level=info msg="StopPodSandbox for \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\" returns successfully" Aug 5 22:14:13.925348 containerd[2016]: time="2024-08-05T22:14:13.924683686Z" level=info msg="RemovePodSandbox for \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\"" Aug 5 22:14:13.929185 containerd[2016]: time="2024-08-05T22:14:13.928900696Z" level=info msg="Forcibly stopping sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\"" Aug 5 22:14:13.942530 (udev-worker)[4673]: Network interface NamePolicy= disabled on kernel command line. 
Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.087 [WARNING][4944] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"02a9cbec-1103-4b28-a2a7-d82304e5ff6d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"a7df912f0414a26b9d8b1a68c14693a1d29c8d4d9ac4183e695cb356796ff77a", Pod:"coredns-5dd5756b68-cjhqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46b133397bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.089 [INFO][4944] k8s.go 608: Cleaning up netns ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.089 [INFO][4944] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" iface="eth0" netns="" Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.089 [INFO][4944] k8s.go 615: Releasing IP address(es) ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.089 [INFO][4944] utils.go 188: Calico CNI releasing IP address ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.150 [INFO][4971] ipam_plugin.go 411: Releasing address using handleID ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" HandleID="k8s-pod-network.98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.150 [INFO][4971] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.150 [INFO][4971] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.174 [WARNING][4971] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" HandleID="k8s-pod-network.98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.174 [INFO][4971] ipam_plugin.go 439: Releasing address using workloadID ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" HandleID="k8s-pod-network.98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--cjhqm-eth0" Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.177 [INFO][4971] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:14.185908 containerd[2016]: 2024-08-05 22:14:14.183 [INFO][4944] k8s.go 621: Teardown processing complete. ContainerID="98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e" Aug 5 22:14:14.185908 containerd[2016]: time="2024-08-05T22:14:14.185878335Z" level=info msg="TearDown network for sandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\" successfully" Aug 5 22:14:14.217468 containerd[2016]: time="2024-08-05T22:14:14.217394168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:14:14.217902 containerd[2016]: time="2024-08-05T22:14:14.217496283Z" level=info msg="RemovePodSandbox \"98cf62893efad9c83e7f3ab051dc10ff4ab4ecd74441e984607430a13422a96e\" returns successfully" Aug 5 22:14:14.428199 systemd[1]: Started sshd@9-172.31.21.119:22-139.178.89.65:43626.service - OpenSSH per-connection server daemon (139.178.89.65:43626). 
Aug 5 22:14:14.683479 sshd[4994]: Accepted publickey for core from 139.178.89.65 port 43626 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:14.685673 sshd[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:14.692693 systemd-logind[1985]: New session 10 of user core. Aug 5 22:14:14.701485 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:14:15.297469 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:15.281041 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:15.281070 systemd-resolved[1890]: Flushed all caches. Aug 5 22:14:15.513757 sshd[4994]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:15.524457 systemd[1]: sshd@9-172.31.21.119:22-139.178.89.65:43626.service: Deactivated successfully. Aug 5 22:14:15.547731 systemd-logind[1985]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:14:15.555270 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:14:15.563862 systemd[1]: Started sshd@10-172.31.21.119:22-139.178.89.65:43628.service - OpenSSH per-connection server daemon (139.178.89.65:43628). Aug 5 22:14:15.567657 systemd-logind[1985]: Removed session 10. Aug 5 22:14:15.599846 systemd-networkd[1575]: vxlan.calico: Gained IPv6LL Aug 5 22:14:15.740376 sshd[5012]: Accepted publickey for core from 139.178.89.65 port 43628 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:15.742668 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:15.749479 systemd-logind[1985]: New session 11 of user core. Aug 5 22:14:15.757021 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:14:16.456537 sshd[5012]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:16.472904 systemd[1]: sshd@10-172.31.21.119:22-139.178.89.65:43628.service: Deactivated successfully. 
Aug 5 22:14:16.482545 systemd-logind[1985]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:14:16.510367 systemd[1]: Started sshd@11-172.31.21.119:22-139.178.89.65:43634.service - OpenSSH per-connection server daemon (139.178.89.65:43634). Aug 5 22:14:16.512343 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:14:16.517386 systemd-logind[1985]: Removed session 11. Aug 5 22:14:16.701589 sshd[5024]: Accepted publickey for core from 139.178.89.65 port 43634 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:16.714761 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:16.755614 systemd-logind[1985]: New session 12 of user core. Aug 5 22:14:16.762508 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:14:17.051526 sshd[5024]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:17.057975 systemd-logind[1985]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:14:17.059203 systemd[1]: sshd@11-172.31.21.119:22-139.178.89.65:43634.service: Deactivated successfully. Aug 5 22:14:17.066473 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:14:17.067906 systemd-logind[1985]: Removed session 12. 
Aug 5 22:14:18.453988 ntpd[1964]: Listen normally on 6 vxlan.calico 192.168.96.64:123 Aug 5 22:14:18.454203 ntpd[1964]: Listen normally on 7 cali46b133397bc [fe80::ecee:eeff:feee:eeee%4]:123 Aug 5 22:14:18.454272 ntpd[1964]: Listen normally on 8 vxlan.calico [fe80::6453:b0ff:fec5:c6e5%5]:123 Aug 5 22:14:18.484140 containerd[2016]: time="2024-08-05T22:14:18.480962499Z" level=info msg="StopPodSandbox for \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\"" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.602 [INFO][5051] k8s.go 608: Cleaning up netns ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.604 [INFO][5051] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" iface="eth0" netns="/var/run/netns/cni-0a77fc12-d590-7528-c385-0df8a9dcb108" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.605 [INFO][5051] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" iface="eth0" netns="/var/run/netns/cni-0a77fc12-d590-7528-c385-0df8a9dcb108" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.606 [INFO][5051] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" iface="eth0" netns="/var/run/netns/cni-0a77fc12-d590-7528-c385-0df8a9dcb108" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.606 [INFO][5051] k8s.go 615: Releasing IP address(es) ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.606 [INFO][5051] utils.go 188: Calico CNI releasing IP address ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.667 [INFO][5057] ipam_plugin.go 411: Releasing address using handleID ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" HandleID="k8s-pod-network.ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.667 [INFO][5057] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.667 [INFO][5057] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.678 [WARNING][5057] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" HandleID="k8s-pod-network.ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.679 [INFO][5057] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" HandleID="k8s-pod-network.ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.681 [INFO][5057] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:18.686827 containerd[2016]: 2024-08-05 22:14:18.684 [INFO][5051] k8s.go 621: Teardown processing complete. ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Aug 5 22:14:18.696593 containerd[2016]: time="2024-08-05T22:14:18.689508702Z" level=info msg="TearDown network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\" successfully" Aug 5 22:14:18.696593 containerd[2016]: time="2024-08-05T22:14:18.689753929Z" level=info msg="StopPodSandbox for \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\" returns successfully" Aug 5 22:14:18.708617 containerd[2016]: time="2024-08-05T22:14:18.700085463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f5fbfb65-fbxpw,Uid:e9849bc8-dfda-4d92-a15c-331f7ac59401,Namespace:calico-system,Attempt:1,}" Aug 5 22:14:18.716619 systemd[1]: run-netns-cni\x2d0a77fc12\x2dd590\x2d7528\x2dc385\x2d0df8a9dcb108.mount: Deactivated successfully. 
Aug 5 22:14:19.014853 systemd-networkd[1575]: cali9cdac68db05: Link UP Aug 5 22:14:19.019511 systemd-networkd[1575]: cali9cdac68db05: Gained carrier Aug 5 22:14:19.021736 (udev-worker)[5082]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.845 [INFO][5063] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0 calico-kube-controllers-75f5fbfb65- calico-system e9849bc8-dfda-4d92-a15c-331f7ac59401 860 0 2024-08-05 22:13:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75f5fbfb65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-119 calico-kube-controllers-75f5fbfb65-fbxpw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9cdac68db05 [] []}} ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Namespace="calico-system" Pod="calico-kube-controllers-75f5fbfb65-fbxpw" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.845 [INFO][5063] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Namespace="calico-system" Pod="calico-kube-controllers-75f5fbfb65-fbxpw" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.905 [INFO][5074] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" HandleID="k8s-pod-network.ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" 
Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.927 [INFO][5074] ipam_plugin.go 264: Auto assigning IP ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" HandleID="k8s-pod-network.ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003192d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-119", "pod":"calico-kube-controllers-75f5fbfb65-fbxpw", "timestamp":"2024-08-05 22:14:18.905026422 +0000 UTC"}, Hostname:"ip-172-31-21-119", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.927 [INFO][5074] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.927 [INFO][5074] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.927 [INFO][5074] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-119' Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.930 [INFO][5074] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" host="ip-172-31-21-119" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.947 [INFO][5074] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-119" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.965 [INFO][5074] ipam.go 489: Trying affinity for 192.168.96.64/26 host="ip-172-31-21-119" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.972 [INFO][5074] ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-21-119" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.978 [INFO][5074] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-21-119" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.978 [INFO][5074] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" host="ip-172-31-21-119" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.983 [INFO][5074] ipam.go 1685: Creating new handle: k8s-pod-network.ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:18.993 [INFO][5074] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" host="ip-172-31-21-119" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:19.003 [INFO][5074] ipam.go 1216: Successfully claimed IPs: [192.168.96.66/26] block=192.168.96.64/26 
handle="k8s-pod-network.ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" host="ip-172-31-21-119" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:19.003 [INFO][5074] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.66/26] handle="k8s-pod-network.ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" host="ip-172-31-21-119" Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:19.003 [INFO][5074] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:19.069836 containerd[2016]: 2024-08-05 22:14:19.003 [INFO][5074] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.96.66/26] IPv6=[] ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" HandleID="k8s-pod-network.ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:19.071565 containerd[2016]: 2024-08-05 22:14:19.007 [INFO][5063] k8s.go 386: Populated endpoint ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Namespace="calico-system" Pod="calico-kube-controllers-75f5fbfb65-fbxpw" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0", GenerateName:"calico-kube-controllers-75f5fbfb65-", Namespace:"calico-system", SelfLink:"", UID:"e9849bc8-dfda-4d92-a15c-331f7ac59401", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f5fbfb65", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"", Pod:"calico-kube-controllers-75f5fbfb65-fbxpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9cdac68db05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:19.071565 containerd[2016]: 2024-08-05 22:14:19.008 [INFO][5063] k8s.go 387: Calico CNI using IPs: [192.168.96.66/32] ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Namespace="calico-system" Pod="calico-kube-controllers-75f5fbfb65-fbxpw" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:19.071565 containerd[2016]: 2024-08-05 22:14:19.008 [INFO][5063] dataplane_linux.go 68: Setting the host side veth name to cali9cdac68db05 ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Namespace="calico-system" Pod="calico-kube-controllers-75f5fbfb65-fbxpw" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:19.071565 containerd[2016]: 2024-08-05 22:14:19.015 [INFO][5063] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Namespace="calico-system" Pod="calico-kube-controllers-75f5fbfb65-fbxpw" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:19.071565 containerd[2016]: 2024-08-05 22:14:19.018 [INFO][5063] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Namespace="calico-system" Pod="calico-kube-controllers-75f5fbfb65-fbxpw" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0", GenerateName:"calico-kube-controllers-75f5fbfb65-", Namespace:"calico-system", SelfLink:"", UID:"e9849bc8-dfda-4d92-a15c-331f7ac59401", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f5fbfb65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d", Pod:"calico-kube-controllers-75f5fbfb65-fbxpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9cdac68db05", MAC:"9e:53:1f:31:4c:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:19.071565 containerd[2016]: 2024-08-05 22:14:19.056 [INFO][5063] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d" Namespace="calico-system" Pod="calico-kube-controllers-75f5fbfb65-fbxpw" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0" Aug 5 22:14:19.172329 containerd[2016]: time="2024-08-05T22:14:19.171984591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:19.173606 containerd[2016]: time="2024-08-05T22:14:19.172381089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:19.173606 containerd[2016]: time="2024-08-05T22:14:19.172501609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:19.176394 containerd[2016]: time="2024-08-05T22:14:19.175097495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:19.282854 containerd[2016]: time="2024-08-05T22:14:19.282722867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f5fbfb65-fbxpw,Uid:e9849bc8-dfda-4d92-a15c-331f7ac59401,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d\"" Aug 5 22:14:19.288518 containerd[2016]: time="2024-08-05T22:14:19.288382853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:14:20.483575 containerd[2016]: time="2024-08-05T22:14:20.480337510Z" level=info msg="StopPodSandbox for \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\"" Aug 5 22:14:20.849296 systemd-networkd[1575]: cali9cdac68db05: Gained IPv6LL Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.782 [INFO][5160] k8s.go 608: Cleaning up netns ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.785 [INFO][5160] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" iface="eth0" netns="/var/run/netns/cni-847339c4-56a4-4a28-ccc3-321fe5637193" Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.785 [INFO][5160] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" iface="eth0" netns="/var/run/netns/cni-847339c4-56a4-4a28-ccc3-321fe5637193" Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.786 [INFO][5160] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" iface="eth0" netns="/var/run/netns/cni-847339c4-56a4-4a28-ccc3-321fe5637193" Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.786 [INFO][5160] k8s.go 615: Releasing IP address(es) ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.786 [INFO][5160] utils.go 188: Calico CNI releasing IP address ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.952 [INFO][5170] ipam_plugin.go 411: Releasing address using handleID ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" HandleID="k8s-pod-network.23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0" Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.952 [INFO][5170] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.952 [INFO][5170] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.971 [WARNING][5170] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" HandleID="k8s-pod-network.23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0" Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.971 [INFO][5170] ipam_plugin.go 439: Releasing address using workloadID ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" HandleID="k8s-pod-network.23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0" Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.975 [INFO][5170] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:21.011503 containerd[2016]: 2024-08-05 22:14:20.989 [INFO][5160] k8s.go 621: Teardown processing complete. ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Aug 5 22:14:21.011503 containerd[2016]: time="2024-08-05T22:14:20.996872245Z" level=info msg="TearDown network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\" successfully" Aug 5 22:14:21.011503 containerd[2016]: time="2024-08-05T22:14:20.997118918Z" level=info msg="StopPodSandbox for \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\" returns successfully" Aug 5 22:14:21.011503 containerd[2016]: time="2024-08-05T22:14:21.007501181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hpqq2,Uid:3d90ed18-88fe-4b37-ba5a-a0772610a05d,Namespace:kube-system,Attempt:1,}" Aug 5 22:14:21.008905 systemd[1]: run-netns-cni\x2d847339c4\x2d56a4\x2d4a28\x2dccc3\x2d321fe5637193.mount: Deactivated successfully. 
Aug 5 22:14:21.328081 systemd-networkd[1575]: calie0974664695: Link UP
Aug 5 22:14:21.330519 systemd-networkd[1575]: calie0974664695: Gained carrier
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.156 [INFO][5177] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0 coredns-5dd5756b68- kube-system 3d90ed18-88fe-4b37-ba5a-a0772610a05d 878 0 2024-08-05 22:13:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-119 coredns-5dd5756b68-hpqq2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0974664695 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Namespace="kube-system" Pod="coredns-5dd5756b68-hpqq2" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.157 [INFO][5177] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Namespace="kube-system" Pod="coredns-5dd5756b68-hpqq2" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.234 [INFO][5189] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" HandleID="k8s-pod-network.a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.254 [INFO][5189] ipam_plugin.go 264: Auto assigning IP ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" HandleID="k8s-pod-network.a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050f60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-119", "pod":"coredns-5dd5756b68-hpqq2", "timestamp":"2024-08-05 22:14:21.234544861 +0000 UTC"}, Hostname:"ip-172-31-21-119", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.254 [INFO][5189] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.254 [INFO][5189] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.255 [INFO][5189] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-119'
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.258 [INFO][5189] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" host="ip-172-31-21-119"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.265 [INFO][5189] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-119"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.277 [INFO][5189] ipam.go 489: Trying affinity for 192.168.96.64/26 host="ip-172-31-21-119"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.284 [INFO][5189] ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-21-119"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.289 [INFO][5189] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-21-119"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.289 [INFO][5189] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" host="ip-172-31-21-119"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.296 [INFO][5189] ipam.go 1685: Creating new handle: k8s-pod-network.a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.307 [INFO][5189] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" host="ip-172-31-21-119"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.319 [INFO][5189] ipam.go 1216: Successfully claimed IPs: [192.168.96.67/26] block=192.168.96.64/26 handle="k8s-pod-network.a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" host="ip-172-31-21-119"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.319 [INFO][5189] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.67/26] handle="k8s-pod-network.a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" host="ip-172-31-21-119"
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.320 [INFO][5189] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:14:21.366776 containerd[2016]: 2024-08-05 22:14:21.320 [INFO][5189] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.96.67/26] IPv6=[] ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" HandleID="k8s-pod-network.a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:14:21.371101 containerd[2016]: 2024-08-05 22:14:21.322 [INFO][5177] k8s.go 386: Populated endpoint ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Namespace="kube-system" Pod="coredns-5dd5756b68-hpqq2" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3d90ed18-88fe-4b37-ba5a-a0772610a05d", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"", Pod:"coredns-5dd5756b68-hpqq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0974664695", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:21.371101 containerd[2016]: 2024-08-05 22:14:21.322 [INFO][5177] k8s.go 387: Calico CNI using IPs: [192.168.96.67/32] ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Namespace="kube-system" Pod="coredns-5dd5756b68-hpqq2" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:14:21.371101 containerd[2016]: 2024-08-05 22:14:21.323 [INFO][5177] dataplane_linux.go 68: Setting the host side veth name to calie0974664695 ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Namespace="kube-system" Pod="coredns-5dd5756b68-hpqq2" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:14:21.371101 containerd[2016]: 2024-08-05 22:14:21.331 [INFO][5177] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Namespace="kube-system" Pod="coredns-5dd5756b68-hpqq2" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:14:21.371101 containerd[2016]: 2024-08-05 22:14:21.333 [INFO][5177] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Namespace="kube-system" Pod="coredns-5dd5756b68-hpqq2" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3d90ed18-88fe-4b37-ba5a-a0772610a05d", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8", Pod:"coredns-5dd5756b68-hpqq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0974664695", MAC:"06:c0:1f:ff:4e:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:21.371101 containerd[2016]: 2024-08-05 22:14:21.362 [INFO][5177] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8" Namespace="kube-system" Pod="coredns-5dd5756b68-hpqq2" WorkloadEndpoint="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:14:21.483358 containerd[2016]: time="2024-08-05T22:14:21.483316504Z" level=info msg="StopPodSandbox for \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\""
Aug 5 22:14:21.492460 containerd[2016]: time="2024-08-05T22:14:21.492224992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:14:21.492460 containerd[2016]: time="2024-08-05T22:14:21.492297340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:14:21.492460 containerd[2016]: time="2024-08-05T22:14:21.492325632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:14:21.492460 containerd[2016]: time="2024-08-05T22:14:21.492345848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:14:21.723271 containerd[2016]: time="2024-08-05T22:14:21.721754205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hpqq2,Uid:3d90ed18-88fe-4b37-ba5a-a0772610a05d,Namespace:kube-system,Attempt:1,} returns sandbox id \"a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8\""
Aug 5 22:14:21.751473 containerd[2016]: time="2024-08-05T22:14:21.751257983Z" level=info msg="CreateContainer within sandbox \"a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 5 22:14:21.788685 containerd[2016]: time="2024-08-05T22:14:21.788392409Z" level=info msg="CreateContainer within sandbox \"a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f53e02efbf5d501589951ed97586978cf53e426d239cf73b337bc94f7b6b684b\""
Aug 5 22:14:21.791309 containerd[2016]: time="2024-08-05T22:14:21.790282031Z" level=info msg="StartContainer for \"f53e02efbf5d501589951ed97586978cf53e426d239cf73b337bc94f7b6b684b\""
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.707 [INFO][5262] k8s.go 608: Cleaning up netns ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.711 [INFO][5262] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" iface="eth0" netns="/var/run/netns/cni-7ee1fabd-cb84-71ed-4e49-b0fe442362c1"
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.711 [INFO][5262] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" iface="eth0" netns="/var/run/netns/cni-7ee1fabd-cb84-71ed-4e49-b0fe442362c1"
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.711 [INFO][5262] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" iface="eth0" netns="/var/run/netns/cni-7ee1fabd-cb84-71ed-4e49-b0fe442362c1"
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.711 [INFO][5262] k8s.go 615: Releasing IP address(es) ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.711 [INFO][5262] utils.go 188: Calico CNI releasing IP address ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.777 [INFO][5274] ipam_plugin.go 411: Releasing address using handleID ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" HandleID="k8s-pod-network.48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.778 [INFO][5274] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.778 [INFO][5274] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.789 [WARNING][5274] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" HandleID="k8s-pod-network.48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.791 [INFO][5274] ipam_plugin.go 439: Releasing address using workloadID ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" HandleID="k8s-pod-network.48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.797 [INFO][5274] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:14:21.808982 containerd[2016]: 2024-08-05 22:14:21.805 [INFO][5262] k8s.go 621: Teardown processing complete. ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"
Aug 5 22:14:21.810124 containerd[2016]: time="2024-08-05T22:14:21.809158179Z" level=info msg="TearDown network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\" successfully"
Aug 5 22:14:21.810124 containerd[2016]: time="2024-08-05T22:14:21.809190043Z" level=info msg="StopPodSandbox for \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\" returns successfully"
Aug 5 22:14:21.811345 containerd[2016]: time="2024-08-05T22:14:21.811289685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pqxtn,Uid:49ea90fb-8427-4eac-8b89-57071ef71ebc,Namespace:calico-system,Attempt:1,}"
Aug 5 22:14:21.923455 containerd[2016]: time="2024-08-05T22:14:21.923317705Z" level=info msg="StartContainer for \"f53e02efbf5d501589951ed97586978cf53e426d239cf73b337bc94f7b6b684b\" returns successfully"
Aug 5 22:14:22.017540 systemd[1]: run-containerd-runc-k8s.io-a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8-runc.wFn8W9.mount: Deactivated successfully.
Aug 5 22:14:22.017755 systemd[1]: run-netns-cni\x2d7ee1fabd\x2dcb84\x2d71ed\x2d4e49\x2db0fe442362c1.mount: Deactivated successfully.
Aug 5 22:14:22.087052 systemd[1]: Started sshd@12-172.31.21.119:22-139.178.89.65:47334.service - OpenSSH per-connection server daemon (139.178.89.65:47334).
Aug 5 22:14:22.257994 systemd-networkd[1575]: cali33d36ae7431: Link UP
Aug 5 22:14:22.262913 systemd-networkd[1575]: cali33d36ae7431: Gained carrier
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.023 [INFO][5309] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0 csi-node-driver- calico-system 49ea90fb-8427-4eac-8b89-57071ef71ebc 892 0 2024-08-05 22:13:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-21-119 csi-node-driver-pqxtn eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali33d36ae7431 [] []}} ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Namespace="calico-system" Pod="csi-node-driver-pqxtn" WorkloadEndpoint="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.023 [INFO][5309] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Namespace="calico-system" Pod="csi-node-driver-pqxtn" WorkloadEndpoint="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.156 [INFO][5334] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" HandleID="k8s-pod-network.0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.173 [INFO][5334] ipam_plugin.go 264: Auto assigning IP ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" HandleID="k8s-pod-network.0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00046a660), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-119", "pod":"csi-node-driver-pqxtn", "timestamp":"2024-08-05 22:14:22.15614831 +0000 UTC"}, Hostname:"ip-172-31-21-119", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.174 [INFO][5334] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.174 [INFO][5334] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.174 [INFO][5334] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-119'
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.178 [INFO][5334] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" host="ip-172-31-21-119"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.190 [INFO][5334] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-119"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.202 [INFO][5334] ipam.go 489: Trying affinity for 192.168.96.64/26 host="ip-172-31-21-119"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.208 [INFO][5334] ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-21-119"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.214 [INFO][5334] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-21-119"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.214 [INFO][5334] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" host="ip-172-31-21-119"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.219 [INFO][5334] ipam.go 1685: Creating new handle: k8s-pod-network.0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.225 [INFO][5334] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" host="ip-172-31-21-119"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.239 [INFO][5334] ipam.go 1216: Successfully claimed IPs: [192.168.96.68/26] block=192.168.96.64/26 handle="k8s-pod-network.0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" host="ip-172-31-21-119"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.239 [INFO][5334] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.68/26] handle="k8s-pod-network.0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" host="ip-172-31-21-119"
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.240 [INFO][5334] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:14:22.317013 containerd[2016]: 2024-08-05 22:14:22.240 [INFO][5334] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.96.68/26] IPv6=[] ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" HandleID="k8s-pod-network.0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:22.318930 containerd[2016]: 2024-08-05 22:14:22.245 [INFO][5309] k8s.go 386: Populated endpoint ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Namespace="calico-system" Pod="csi-node-driver-pqxtn" WorkloadEndpoint="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49ea90fb-8427-4eac-8b89-57071ef71ebc", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"", Pod:"csi-node-driver-pqxtn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali33d36ae7431", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:22.318930 containerd[2016]: 2024-08-05 22:14:22.245 [INFO][5309] k8s.go 387: Calico CNI using IPs: [192.168.96.68/32] ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Namespace="calico-system" Pod="csi-node-driver-pqxtn" WorkloadEndpoint="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:22.318930 containerd[2016]: 2024-08-05 22:14:22.245 [INFO][5309] dataplane_linux.go 68: Setting the host side veth name to cali33d36ae7431 ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Namespace="calico-system" Pod="csi-node-driver-pqxtn" WorkloadEndpoint="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:22.318930 containerd[2016]: 2024-08-05 22:14:22.264 [INFO][5309] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Namespace="calico-system" Pod="csi-node-driver-pqxtn" WorkloadEndpoint="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:22.318930 containerd[2016]: 2024-08-05 22:14:22.266 [INFO][5309] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Namespace="calico-system" Pod="csi-node-driver-pqxtn" WorkloadEndpoint="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49ea90fb-8427-4eac-8b89-57071ef71ebc", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c", Pod:"csi-node-driver-pqxtn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali33d36ae7431", MAC:"36:b4:39:82:5e:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:14:22.318930 containerd[2016]: 2024-08-05 22:14:22.305 [INFO][5309] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c" Namespace="calico-system" Pod="csi-node-driver-pqxtn" WorkloadEndpoint="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:14:22.350110 sshd[5338]: Accepted publickey for core from 139.178.89.65 port 47334 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:14:22.361909 sshd[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:14:22.380039 systemd-logind[1985]: New session 13 of user core.
Aug 5 22:14:22.386257 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 5 22:14:22.407504 containerd[2016]: time="2024-08-05T22:14:22.407177546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:14:22.407504 containerd[2016]: time="2024-08-05T22:14:22.407244813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:14:22.407504 containerd[2016]: time="2024-08-05T22:14:22.407271478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:14:22.407504 containerd[2016]: time="2024-08-05T22:14:22.407291883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:14:22.512185 kubelet[3468]: I0805 22:14:22.512091 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hpqq2" podStartSLOduration=56.51203621 podCreationTimestamp="2024-08-05 22:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:14:22.505793622 +0000 UTC m=+69.255921783" watchObservedRunningTime="2024-08-05 22:14:22.51203621 +0000 UTC m=+69.262164371"
Aug 5 22:14:22.769545 containerd[2016]: time="2024-08-05T22:14:22.769241394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pqxtn,Uid:49ea90fb-8427-4eac-8b89-57071ef71ebc,Namespace:calico-system,Attempt:1,} returns sandbox id \"0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c\""
Aug 5 22:14:23.030563 sshd[5338]: pam_unix(sshd:session): session closed for user core
Aug 5 22:14:23.037283 systemd[1]: sshd@12-172.31.21.119:22-139.178.89.65:47334.service: Deactivated successfully.
Aug 5 22:14:23.050971 systemd[1]: session-13.scope: Deactivated successfully.
Aug 5 22:14:23.055211 systemd-logind[1985]: Session 13 logged out. Waiting for processes to exit.
Aug 5 22:14:23.058376 systemd-logind[1985]: Removed session 13.
Aug 5 22:14:23.218801 systemd-journald[1490]: Under memory pressure, flushing caches.
Aug 5 22:14:23.215926 systemd-networkd[1575]: calie0974664695: Gained IPv6LL
Aug 5 22:14:23.217601 systemd-resolved[1890]: Under memory pressure, flushing caches.
Aug 5 22:14:23.217635 systemd-resolved[1890]: Flushed all caches.
Aug 5 22:14:23.565439 containerd[2016]: time="2024-08-05T22:14:23.564626805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:23.566057 containerd[2016]: time="2024-08-05T22:14:23.566004417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793"
Aug 5 22:14:23.568145 containerd[2016]: time="2024-08-05T22:14:23.568084417Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:23.571224 containerd[2016]: time="2024-08-05T22:14:23.571161748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:23.572661 containerd[2016]: time="2024-08-05T22:14:23.572367689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.283602027s"
Aug 5 22:14:23.572661 containerd[2016]: time="2024-08-05T22:14:23.572428237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\""
Aug 5 22:14:23.574343 containerd[2016]: time="2024-08-05T22:14:23.574222638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\""
Aug 5 22:14:23.646560 containerd[2016]: time="2024-08-05T22:14:23.646376166Z" level=info msg="CreateContainer within sandbox \"ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Aug 5 22:14:23.724908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034618443.mount: Deactivated successfully.
Aug 5 22:14:23.727695 systemd-networkd[1575]: cali33d36ae7431: Gained IPv6LL
Aug 5 22:14:23.730694 containerd[2016]: time="2024-08-05T22:14:23.730634347Z" level=info msg="CreateContainer within sandbox \"ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b9ace5dc62a59e407482d70a9a9112264b2edec74770e6802da72b1c6a29cd40\""
Aug 5 22:14:23.733872 containerd[2016]: time="2024-08-05T22:14:23.733830575Z" level=info msg="StartContainer for \"b9ace5dc62a59e407482d70a9a9112264b2edec74770e6802da72b1c6a29cd40\""
Aug 5 22:14:23.940051 containerd[2016]: time="2024-08-05T22:14:23.939908694Z" level=info msg="StartContainer for \"b9ace5dc62a59e407482d70a9a9112264b2edec74770e6802da72b1c6a29cd40\" returns successfully"
Aug 5 22:14:24.538147 kubelet[3468]: I0805 22:14:24.538016 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75f5fbfb65-fbxpw" podStartSLOduration=46.251285371 podCreationTimestamp="2024-08-05 22:13:34 +0000 UTC" firstStartedPulling="2024-08-05 22:14:19.286442499 +0000 UTC m=+66.036570649" lastFinishedPulling="2024-08-05 22:14:23.573098726 +0000 UTC m=+70.323226866" observedRunningTime="2024-08-05 22:14:24.534928401 +0000 UTC m=+71.285056561" watchObservedRunningTime="2024-08-05 22:14:24.537941588 +0000 UTC m=+71.288069748"
Aug 5 22:14:25.444357 containerd[2016]: time="2024-08-05T22:14:25.444307186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:25.447186 containerd[2016]: time="2024-08-05T22:14:25.447081305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062"
Aug 5 22:14:25.453623 containerd[2016]: time="2024-08-05T22:14:25.453173030Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:25.466017 containerd[2016]: time="2024-08-05T22:14:25.465968764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:25.471220 containerd[2016]: time="2024-08-05T22:14:25.471167234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.896906706s"
Aug 5 22:14:25.471220 containerd[2016]: time="2024-08-05T22:14:25.471209868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\""
Aug 5 22:14:25.499285 containerd[2016]: time="2024-08-05T22:14:25.498570096Z" level=info msg="CreateContainer within sandbox \"0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Aug 5 22:14:25.551652 containerd[2016]: time="2024-08-05T22:14:25.551607117Z" level=info msg="CreateContainer within sandbox \"0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"293182fb44fcf2e834a5c459916ee0c4c86e54edfed14887bc2f213210c894c8\""
Aug 5 22:14:25.552627 containerd[2016]: time="2024-08-05T22:14:25.552589866Z" level=info
msg="StartContainer for \"293182fb44fcf2e834a5c459916ee0c4c86e54edfed14887bc2f213210c894c8\"" Aug 5 22:14:25.681954 containerd[2016]: time="2024-08-05T22:14:25.681905856Z" level=info msg="StartContainer for \"293182fb44fcf2e834a5c459916ee0c4c86e54edfed14887bc2f213210c894c8\" returns successfully" Aug 5 22:14:25.685003 containerd[2016]: time="2024-08-05T22:14:25.684962890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:14:26.454769 ntpd[1964]: Listen normally on 9 cali9cdac68db05 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 22:14:26.454933 ntpd[1964]: Listen normally on 10 calie0974664695 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 22:14:26.455394 ntpd[1964]: 5 Aug 22:14:26 ntpd[1964]: Listen normally on 9 cali9cdac68db05 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 22:14:26.455394 ntpd[1964]: 5 Aug 22:14:26 ntpd[1964]: Listen normally on 10 calie0974664695 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 22:14:26.455394 ntpd[1964]: 5 Aug 22:14:26 ntpd[1964]: Listen normally on 11 cali33d36ae7431 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 22:14:26.454984 ntpd[1964]: Listen normally on 11 cali33d36ae7431 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 22:14:27.613129 containerd[2016]: time="2024-08-05T22:14:27.609787000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:27.625274 containerd[2016]: time="2024-08-05T22:14:27.624998859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 5 22:14:27.641527 containerd[2016]: time="2024-08-05T22:14:27.635362908Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:27.710159 containerd[2016]: time="2024-08-05T22:14:27.709463351Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:27.714859 containerd[2016]: time="2024-08-05T22:14:27.713868493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.028855017s" Aug 5 22:14:27.714859 containerd[2016]: time="2024-08-05T22:14:27.713955225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 5 22:14:27.730819 containerd[2016]: time="2024-08-05T22:14:27.729743274Z" level=info msg="CreateContainer within sandbox \"0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:14:27.764847 containerd[2016]: time="2024-08-05T22:14:27.763150917Z" level=info msg="CreateContainer within sandbox \"0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5b21b78d1f4bfd286a75a67f2368cfad975affd3f46a54d855356983c2dd993e\"" Aug 5 22:14:27.768577 containerd[2016]: time="2024-08-05T22:14:27.768381554Z" level=info msg="StartContainer for \"5b21b78d1f4bfd286a75a67f2368cfad975affd3f46a54d855356983c2dd993e\"" Aug 5 22:14:28.068366 systemd[1]: Started sshd@13-172.31.21.119:22-139.178.89.65:47340.service - OpenSSH per-connection server daemon (139.178.89.65:47340). 
Aug 5 22:14:28.132776 containerd[2016]: time="2024-08-05T22:14:28.129392602Z" level=info msg="StartContainer for \"5b21b78d1f4bfd286a75a67f2368cfad975affd3f46a54d855356983c2dd993e\" returns successfully" Aug 5 22:14:28.344083 sshd[5548]: Accepted publickey for core from 139.178.89.65 port 47340 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:28.346741 sshd[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:28.354812 systemd-logind[1985]: New session 14 of user core. Aug 5 22:14:28.359960 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:14:29.170909 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:29.170040 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:29.170143 systemd-resolved[1890]: Flushed all caches. Aug 5 22:14:29.193506 kubelet[3468]: I0805 22:14:29.192647 3468 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:14:29.193506 kubelet[3468]: I0805 22:14:29.192711 3468 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:14:29.397057 sshd[5548]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:29.404011 systemd[1]: sshd@13-172.31.21.119:22-139.178.89.65:47340.service: Deactivated successfully. Aug 5 22:14:29.411432 systemd-logind[1985]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:14:29.413366 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:14:29.416607 systemd-logind[1985]: Removed session 14. 
Aug 5 22:14:33.836795 kubelet[3468]: I0805 22:14:33.836754 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-pqxtn" podStartSLOduration=54.89174712 podCreationTimestamp="2024-08-05 22:13:34 +0000 UTC" firstStartedPulling="2024-08-05 22:14:22.772751191 +0000 UTC m=+69.522879332" lastFinishedPulling="2024-08-05 22:14:27.717719951 +0000 UTC m=+74.467848105" observedRunningTime="2024-08-05 22:14:28.606720835 +0000 UTC m=+75.356848996" watchObservedRunningTime="2024-08-05 22:14:33.836715893 +0000 UTC m=+80.586844053" Aug 5 22:14:34.430363 systemd[1]: Started sshd@14-172.31.21.119:22-139.178.89.65:35546.service - OpenSSH per-connection server daemon (139.178.89.65:35546). Aug 5 22:14:34.608468 sshd[5605]: Accepted publickey for core from 139.178.89.65 port 35546 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:34.611394 sshd[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:34.617383 systemd-logind[1985]: New session 15 of user core. Aug 5 22:14:34.623340 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:14:34.871995 sshd[5605]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:34.878538 systemd[1]: sshd@14-172.31.21.119:22-139.178.89.65:35546.service: Deactivated successfully. Aug 5 22:14:34.889244 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:14:34.891020 systemd-logind[1985]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:14:34.892665 systemd-logind[1985]: Removed session 15. 
Aug 5 22:14:38.505027 kubelet[3468]: I0805 22:14:38.504918 3468 topology_manager.go:215] "Topology Admit Handler" podUID="9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8" podNamespace="calico-apiserver" podName="calico-apiserver-7dc964cd58-d6svm" Aug 5 22:14:38.595332 kubelet[3468]: I0805 22:14:38.594985 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jx7q\" (UniqueName: \"kubernetes.io/projected/9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8-kube-api-access-2jx7q\") pod \"calico-apiserver-7dc964cd58-d6svm\" (UID: \"9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8\") " pod="calico-apiserver/calico-apiserver-7dc964cd58-d6svm" Aug 5 22:14:38.604205 kubelet[3468]: I0805 22:14:38.604115 3468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8-calico-apiserver-certs\") pod \"calico-apiserver-7dc964cd58-d6svm\" (UID: \"9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8\") " pod="calico-apiserver/calico-apiserver-7dc964cd58-d6svm" Aug 5 22:14:38.709959 kubelet[3468]: E0805 22:14:38.709898 3468 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:14:38.773299 kubelet[3468]: E0805 22:14:38.773104 3468 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8-calico-apiserver-certs podName:9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8 nodeName:}" failed. No retries permitted until 2024-08-05 22:14:39.219894613 +0000 UTC m=+85.970022765 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8-calico-apiserver-certs") pod "calico-apiserver-7dc964cd58-d6svm" (UID: "9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8") : secret "calico-apiserver-certs" not found Aug 5 22:14:39.426888 containerd[2016]: time="2024-08-05T22:14:39.426836905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc964cd58-d6svm,Uid:9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:14:39.875323 systemd-networkd[1575]: cali4a3cec66cc4: Link UP Aug 5 22:14:39.878770 systemd-networkd[1575]: cali4a3cec66cc4: Gained carrier Aug 5 22:14:39.881765 (udev-worker)[5667]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:14:39.931014 systemd[1]: Started sshd@15-172.31.21.119:22-139.178.89.65:35562.service - OpenSSH per-connection server daemon (139.178.89.65:35562). Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.647 [INFO][5650] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0 calico-apiserver-7dc964cd58- calico-apiserver 9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8 1046 0 2024-08-05 22:14:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dc964cd58 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-119 calico-apiserver-7dc964cd58-d6svm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a3cec66cc4 [] []}} ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Namespace="calico-apiserver" Pod="calico-apiserver-7dc964cd58-d6svm" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-" Aug 5 22:14:39.950542 
containerd[2016]: 2024-08-05 22:14:39.647 [INFO][5650] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Namespace="calico-apiserver" Pod="calico-apiserver-7dc964cd58-d6svm" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.737 [INFO][5660] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" HandleID="k8s-pod-network.2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Workload="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.747 [INFO][5660] ipam_plugin.go 264: Auto assigning IP ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" HandleID="k8s-pod-network.2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Workload="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000ef190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-119", "pod":"calico-apiserver-7dc964cd58-d6svm", "timestamp":"2024-08-05 22:14:39.737844363 +0000 UTC"}, Hostname:"ip-172-31-21-119", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.748 [INFO][5660] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.748 [INFO][5660] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.748 [INFO][5660] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-119' Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.750 [INFO][5660] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" host="ip-172-31-21-119" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.755 [INFO][5660] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-119" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.776 [INFO][5660] ipam.go 489: Trying affinity for 192.168.96.64/26 host="ip-172-31-21-119" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.795 [INFO][5660] ipam.go 155: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-21-119" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.811 [INFO][5660] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-21-119" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.811 [INFO][5660] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" host="ip-172-31-21-119" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.815 [INFO][5660] ipam.go 1685: Creating new handle: k8s-pod-network.2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209 Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.823 [INFO][5660] ipam.go 1203: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" host="ip-172-31-21-119" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.838 [INFO][5660] ipam.go 1216: Successfully claimed IPs: [192.168.96.69/26] block=192.168.96.64/26 
handle="k8s-pod-network.2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" host="ip-172-31-21-119" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.838 [INFO][5660] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.69/26] handle="k8s-pod-network.2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" host="ip-172-31-21-119" Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.840 [INFO][5660] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:39.950542 containerd[2016]: 2024-08-05 22:14:39.840 [INFO][5660] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.96.69/26] IPv6=[] ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" HandleID="k8s-pod-network.2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Workload="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" Aug 5 22:14:39.957631 containerd[2016]: 2024-08-05 22:14:39.858 [INFO][5650] k8s.go 386: Populated endpoint ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Namespace="calico-apiserver" Pod="calico-apiserver-7dc964cd58-d6svm" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0", GenerateName:"calico-apiserver-7dc964cd58-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc964cd58", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"", Pod:"calico-apiserver-7dc964cd58-d6svm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a3cec66cc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:39.957631 containerd[2016]: 2024-08-05 22:14:39.858 [INFO][5650] k8s.go 387: Calico CNI using IPs: [192.168.96.69/32] ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Namespace="calico-apiserver" Pod="calico-apiserver-7dc964cd58-d6svm" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" Aug 5 22:14:39.957631 containerd[2016]: 2024-08-05 22:14:39.858 [INFO][5650] dataplane_linux.go 68: Setting the host side veth name to cali4a3cec66cc4 ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Namespace="calico-apiserver" Pod="calico-apiserver-7dc964cd58-d6svm" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" Aug 5 22:14:39.957631 containerd[2016]: 2024-08-05 22:14:39.879 [INFO][5650] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Namespace="calico-apiserver" Pod="calico-apiserver-7dc964cd58-d6svm" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" Aug 5 22:14:39.957631 containerd[2016]: 2024-08-05 22:14:39.885 [INFO][5650] k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Namespace="calico-apiserver" Pod="calico-apiserver-7dc964cd58-d6svm" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0", GenerateName:"calico-apiserver-7dc964cd58-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 14, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc964cd58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209", Pod:"calico-apiserver-7dc964cd58-d6svm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a3cec66cc4", MAC:"3a:4f:cd:15:0f:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:39.957631 containerd[2016]: 2024-08-05 22:14:39.926 [INFO][5650] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209" Namespace="calico-apiserver" Pod="calico-apiserver-7dc964cd58-d6svm" WorkloadEndpoint="ip--172--31--21--119-k8s-calico--apiserver--7dc964cd58--d6svm-eth0" Aug 5 22:14:40.044215 containerd[2016]: time="2024-08-05T22:14:40.042362937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:40.044215 containerd[2016]: time="2024-08-05T22:14:40.043220283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:40.044215 containerd[2016]: time="2024-08-05T22:14:40.043272868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:40.044215 containerd[2016]: time="2024-08-05T22:14:40.043298449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:40.200700 containerd[2016]: time="2024-08-05T22:14:40.200647097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc964cd58-d6svm,Uid:9c7802d9-a4d8-4691-bac2-3e7db7c0b3e8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209\"" Aug 5 22:14:40.203148 containerd[2016]: time="2024-08-05T22:14:40.202819837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 22:14:40.210083 sshd[5670]: Accepted publickey for core from 139.178.89.65 port 35562 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:40.211369 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:40.218791 systemd-logind[1985]: New session 16 of user core. Aug 5 22:14:40.237877 systemd[1]: Started session-16.scope - Session 16 of User core. 
Aug 5 22:14:40.326315 systemd[1]: run-containerd-runc-k8s.io-2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209-runc.hZowxv.mount: Deactivated successfully. Aug 5 22:14:40.748529 sshd[5670]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:40.763362 systemd[1]: sshd@15-172.31.21.119:22-139.178.89.65:35562.service: Deactivated successfully. Aug 5 22:14:40.782476 systemd-logind[1985]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:14:40.788958 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:14:40.816547 systemd[1]: Started sshd@16-172.31.21.119:22-139.178.89.65:51194.service - OpenSSH per-connection server daemon (139.178.89.65:51194). Aug 5 22:14:40.822887 systemd-logind[1985]: Removed session 16. Aug 5 22:14:41.078341 sshd[5737]: Accepted publickey for core from 139.178.89.65 port 51194 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:41.081831 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:41.098526 systemd-logind[1985]: New session 17 of user core. Aug 5 22:14:41.105190 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:14:41.199748 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:41.201688 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:41.199803 systemd-resolved[1890]: Flushed all caches. Aug 5 22:14:41.840234 systemd-networkd[1575]: cali4a3cec66cc4: Gained IPv6LL Aug 5 22:14:42.377282 sshd[5737]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:42.404111 systemd[1]: Started sshd@17-172.31.21.119:22-139.178.89.65:51200.service - OpenSSH per-connection server daemon (139.178.89.65:51200). Aug 5 22:14:42.404986 systemd[1]: sshd@16-172.31.21.119:22-139.178.89.65:51194.service: Deactivated successfully. Aug 5 22:14:42.427698 systemd-logind[1985]: Session 17 logged out. Waiting for processes to exit. 
Aug 5 22:14:42.427988 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:14:42.445503 systemd-logind[1985]: Removed session 17. Aug 5 22:14:42.725136 sshd[5747]: Accepted publickey for core from 139.178.89.65 port 51200 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:42.730804 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:42.760954 systemd-logind[1985]: New session 18 of user core. Aug 5 22:14:42.766354 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:14:43.254094 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:43.252424 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:43.252444 systemd-resolved[1890]: Flushed all caches. Aug 5 22:14:44.454175 ntpd[1964]: Listen normally on 12 cali4a3cec66cc4 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 22:14:44.455632 ntpd[1964]: 5 Aug 22:14:44 ntpd[1964]: Listen normally on 12 cali4a3cec66cc4 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 22:14:45.298242 systemd-resolved[1890]: Under memory pressure, flushing caches. Aug 5 22:14:45.300765 systemd-journald[1490]: Under memory pressure, flushing caches. Aug 5 22:14:45.298251 systemd-resolved[1890]: Flushed all caches. Aug 5 22:14:45.380464 sshd[5747]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:45.392212 systemd[1]: sshd@17-172.31.21.119:22-139.178.89.65:51200.service: Deactivated successfully. Aug 5 22:14:45.403295 systemd-logind[1985]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:14:45.434532 systemd[1]: Started sshd@18-172.31.21.119:22-139.178.89.65:51204.service - OpenSSH per-connection server daemon (139.178.89.65:51204). Aug 5 22:14:45.437678 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:14:45.445611 systemd-logind[1985]: Removed session 18. 
Aug 5 22:14:45.668262 sshd[5778]: Accepted publickey for core from 139.178.89.65 port 51204 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:14:45.668137 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:14:45.699645 systemd-logind[1985]: New session 19 of user core.
Aug 5 22:14:45.705091 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 5 22:14:46.568228 containerd[2016]: time="2024-08-05T22:14:46.568178740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:46.572141 containerd[2016]: time="2024-08-05T22:14:46.570259843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Aug 5 22:14:46.572592 containerd[2016]: time="2024-08-05T22:14:46.572549666Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:46.581593 containerd[2016]: time="2024-08-05T22:14:46.580576990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:14:46.584405 containerd[2016]: time="2024-08-05T22:14:46.584234952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 6.381368217s"
Aug 5 22:14:46.584405 containerd[2016]: time="2024-08-05T22:14:46.584358944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Aug 5 22:14:46.589322 containerd[2016]: time="2024-08-05T22:14:46.589131847Z" level=info msg="CreateContainer within sandbox \"2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 5 22:14:46.649095 containerd[2016]: time="2024-08-05T22:14:46.648196849Z" level=info msg="CreateContainer within sandbox \"2e1a49688930553dd85284daabc31ede5f56f27a820d07d0737af24ee3e6e209\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0c13bd2e93e85cf1ad0716af7cee002c817ad2c0766f0b1936b3342179c527c9\""
Aug 5 22:14:46.657799 containerd[2016]: time="2024-08-05T22:14:46.649884559Z" level=info msg="StartContainer for \"0c13bd2e93e85cf1ad0716af7cee002c817ad2c0766f0b1936b3342179c527c9\""
Aug 5 22:14:46.682964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107755364.mount: Deactivated successfully.
Aug 5 22:14:46.936918 systemd[1]: run-containerd-runc-k8s.io-0c13bd2e93e85cf1ad0716af7cee002c817ad2c0766f0b1936b3342179c527c9-runc.eOZ3BA.mount: Deactivated successfully.
Aug 5 22:14:47.314252 containerd[2016]: time="2024-08-05T22:14:47.314126124Z" level=info msg="StartContainer for \"0c13bd2e93e85cf1ad0716af7cee002c817ad2c0766f0b1936b3342179c527c9\" returns successfully"
Aug 5 22:14:47.347755 systemd-journald[1490]: Under memory pressure, flushing caches.
Aug 5 22:14:47.345110 systemd-resolved[1890]: Under memory pressure, flushing caches.
Aug 5 22:14:47.345142 systemd-resolved[1890]: Flushed all caches.
Aug 5 22:14:47.910207 kubelet[3468]: I0805 22:14:47.901784 3468 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dc964cd58-d6svm" podStartSLOduration=3.5167508549999997 podCreationTimestamp="2024-08-05 22:14:38 +0000 UTC" firstStartedPulling="2024-08-05 22:14:40.20219534 +0000 UTC m=+86.952323489" lastFinishedPulling="2024-08-05 22:14:46.584784684 +0000 UTC m=+93.334912836" observedRunningTime="2024-08-05 22:14:47.890278247 +0000 UTC m=+94.640406429" watchObservedRunningTime="2024-08-05 22:14:47.899340202 +0000 UTC m=+94.649468380"
Aug 5 22:14:48.281879 sshd[5778]: pam_unix(sshd:session): session closed for user core
Aug 5 22:14:48.293867 systemd[1]: sshd@18-172.31.21.119:22-139.178.89.65:51204.service: Deactivated successfully.
Aug 5 22:14:48.306289 systemd[1]: session-19.scope: Deactivated successfully.
Aug 5 22:14:48.309532 systemd-logind[1985]: Session 19 logged out. Waiting for processes to exit.
Aug 5 22:14:48.332429 systemd[1]: Started sshd@19-172.31.21.119:22-139.178.89.65:51212.service - OpenSSH per-connection server daemon (139.178.89.65:51212).
Aug 5 22:14:48.342922 systemd-logind[1985]: Removed session 19.
Aug 5 22:14:48.562786 sshd[5841]: Accepted publickey for core from 139.178.89.65 port 51212 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:14:48.566086 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:14:48.585796 systemd-logind[1985]: New session 20 of user core.
Aug 5 22:14:48.594801 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 5 22:14:49.036615 sshd[5841]: pam_unix(sshd:session): session closed for user core
Aug 5 22:14:49.044771 systemd[1]: sshd@19-172.31.21.119:22-139.178.89.65:51212.service: Deactivated successfully.
Aug 5 22:14:49.057669 systemd[1]: session-20.scope: Deactivated successfully.
Aug 5 22:14:49.059577 systemd-logind[1985]: Session 20 logged out. Waiting for processes to exit.
Aug 5 22:14:49.061372 systemd-logind[1985]: Removed session 20.
Aug 5 22:14:53.833083 systemd[1]: run-containerd-runc-k8s.io-b9ace5dc62a59e407482d70a9a9112264b2edec74770e6802da72b1c6a29cd40-runc.ZbE2sl.mount: Deactivated successfully.
Aug 5 22:14:54.090561 systemd[1]: Started sshd@20-172.31.21.119:22-139.178.89.65:40590.service - OpenSSH per-connection server daemon (139.178.89.65:40590).
Aug 5 22:14:54.298461 sshd[5881]: Accepted publickey for core from 139.178.89.65 port 40590 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:14:54.300338 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:14:54.329723 systemd-logind[1985]: New session 21 of user core.
Aug 5 22:14:54.335244 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 5 22:14:54.792353 sshd[5881]: pam_unix(sshd:session): session closed for user core
Aug 5 22:14:54.807363 systemd[1]: sshd@20-172.31.21.119:22-139.178.89.65:40590.service: Deactivated successfully.
Aug 5 22:14:54.819764 systemd-logind[1985]: Session 21 logged out. Waiting for processes to exit.
Aug 5 22:14:54.825403 systemd[1]: session-21.scope: Deactivated successfully.
Aug 5 22:14:54.835979 systemd-logind[1985]: Removed session 21.
Aug 5 22:14:59.830941 systemd[1]: Started sshd@21-172.31.21.119:22-139.178.89.65:40604.service - OpenSSH per-connection server daemon (139.178.89.65:40604).
Aug 5 22:15:00.117164 sshd[5908]: Accepted publickey for core from 139.178.89.65 port 40604 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:00.122589 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:00.149206 systemd-logind[1985]: New session 22 of user core.
Aug 5 22:15:00.158017 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 5 22:15:00.386310 sshd[5908]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:00.391471 systemd[1]: sshd@21-172.31.21.119:22-139.178.89.65:40604.service: Deactivated successfully.
Aug 5 22:15:00.399013 systemd-logind[1985]: Session 22 logged out. Waiting for processes to exit.
Aug 5 22:15:00.399920 systemd[1]: session-22.scope: Deactivated successfully.
Aug 5 22:15:00.402292 systemd-logind[1985]: Removed session 22.
Aug 5 22:15:05.433749 systemd[1]: Started sshd@22-172.31.21.119:22-139.178.89.65:53666.service - OpenSSH per-connection server daemon (139.178.89.65:53666).
Aug 5 22:15:05.675864 sshd[5946]: Accepted publickey for core from 139.178.89.65 port 53666 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:05.673806 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:05.693956 systemd-logind[1985]: New session 23 of user core.
Aug 5 22:15:05.697876 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 5 22:15:05.968671 sshd[5946]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:05.983109 systemd-logind[1985]: Session 23 logged out. Waiting for processes to exit.
Aug 5 22:15:05.985671 systemd[1]: sshd@22-172.31.21.119:22-139.178.89.65:53666.service: Deactivated successfully.
Aug 5 22:15:06.010818 systemd[1]: session-23.scope: Deactivated successfully.
Aug 5 22:15:06.013476 systemd-logind[1985]: Removed session 23.
Aug 5 22:15:10.995346 systemd[1]: Started sshd@23-172.31.21.119:22-139.178.89.65:42550.service - OpenSSH per-connection server daemon (139.178.89.65:42550).
Aug 5 22:15:11.233221 sshd[5965]: Accepted publickey for core from 139.178.89.65 port 42550 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:11.234234 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:11.244689 systemd-logind[1985]: New session 24 of user core.
Aug 5 22:15:11.254900 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 5 22:15:11.545221 sshd[5965]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:11.554198 systemd[1]: sshd@23-172.31.21.119:22-139.178.89.65:42550.service: Deactivated successfully.
Aug 5 22:15:11.562136 systemd[1]: session-24.scope: Deactivated successfully.
Aug 5 22:15:11.563758 systemd-logind[1985]: Session 24 logged out. Waiting for processes to exit.
Aug 5 22:15:11.565173 systemd-logind[1985]: Removed session 24.
Aug 5 22:15:14.239045 containerd[2016]: time="2024-08-05T22:15:14.239006934Z" level=info msg="StopPodSandbox for \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\""
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.628 [WARNING][5993] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3d90ed18-88fe-4b37-ba5a-a0772610a05d", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8", Pod:"coredns-5dd5756b68-hpqq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0974664695", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.632 [INFO][5993] k8s.go 608: Cleaning up netns ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.632 [INFO][5993] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" iface="eth0" netns=""
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.632 [INFO][5993] k8s.go 615: Releasing IP address(es) ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.632 [INFO][5993] utils.go 188: Calico CNI releasing IP address ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.675 [INFO][5999] ipam_plugin.go 411: Releasing address using handleID ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" HandleID="k8s-pod-network.23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.675 [INFO][5999] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.675 [INFO][5999] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.685 [WARNING][5999] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" HandleID="k8s-pod-network.23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.685 [INFO][5999] ipam_plugin.go 439: Releasing address using workloadID ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" HandleID="k8s-pod-network.23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.687 [INFO][5999] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:15:14.695526 containerd[2016]: 2024-08-05 22:15:14.692 [INFO][5993] k8s.go 621: Teardown processing complete. ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"
Aug 5 22:15:14.696919 containerd[2016]: time="2024-08-05T22:15:14.695557473Z" level=info msg="TearDown network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\" successfully"
Aug 5 22:15:14.696919 containerd[2016]: time="2024-08-05T22:15:14.695590634Z" level=info msg="StopPodSandbox for \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\" returns successfully"
Aug 5 22:15:14.696919 containerd[2016]: time="2024-08-05T22:15:14.696203619Z" level=info msg="RemovePodSandbox for \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\""
Aug 5 22:15:14.704833 containerd[2016]: time="2024-08-05T22:15:14.704500194Z" level=info msg="Forcibly stopping sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\""
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.817 [WARNING][6017] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3d90ed18-88fe-4b37-ba5a-a0772610a05d", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"a8cb69c4b7dd7e2ab3438765acb88fb07611766a118deb715ce90a7b142681c8", Pod:"coredns-5dd5756b68-hpqq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0974664695", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.819 [INFO][6017] k8s.go 608: Cleaning up netns ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.821 [INFO][6017] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" iface="eth0" netns=""
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.822 [INFO][6017] k8s.go 615: Releasing IP address(es) ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.822 [INFO][6017] utils.go 188: Calico CNI releasing IP address ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.857 [INFO][6024] ipam_plugin.go 411: Releasing address using handleID ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" HandleID="k8s-pod-network.23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.857 [INFO][6024] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.857 [INFO][6024] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.865 [WARNING][6024] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" HandleID="k8s-pod-network.23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.865 [INFO][6024] ipam_plugin.go 439: Releasing address using workloadID ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" HandleID="k8s-pod-network.23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9" Workload="ip--172--31--21--119-k8s-coredns--5dd5756b68--hpqq2-eth0"
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.867 [INFO][6024] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:15:14.872984 containerd[2016]: 2024-08-05 22:15:14.869 [INFO][6017] k8s.go 621: Teardown processing complete. ContainerID="23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9"
Aug 5 22:15:14.873692 containerd[2016]: time="2024-08-05T22:15:14.873046248Z" level=info msg="TearDown network for sandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\" successfully"
Aug 5 22:15:14.897157 containerd[2016]: time="2024-08-05T22:15:14.897085982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 5 22:15:14.897321 containerd[2016]: time="2024-08-05T22:15:14.897198588Z" level=info msg="RemovePodSandbox \"23d56c9a476d8b5bfb22049d146a6345f8028304095a8cc54ae7fb9d1bb5fff9\" returns successfully"
Aug 5 22:15:14.899122 containerd[2016]: time="2024-08-05T22:15:14.898799761Z" level=info msg="StopPodSandbox for \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\""
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.022 [WARNING][6043] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0", GenerateName:"calico-kube-controllers-75f5fbfb65-", Namespace:"calico-system", SelfLink:"", UID:"e9849bc8-dfda-4d92-a15c-331f7ac59401", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f5fbfb65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d", Pod:"calico-kube-controllers-75f5fbfb65-fbxpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9cdac68db05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.022 [INFO][6043] k8s.go 608: Cleaning up netns ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.022 [INFO][6043] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" iface="eth0" netns=""
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.022 [INFO][6043] k8s.go 615: Releasing IP address(es) ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.022 [INFO][6043] utils.go 188: Calico CNI releasing IP address ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.054 [INFO][6049] ipam_plugin.go 411: Releasing address using handleID ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" HandleID="k8s-pod-network.ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0"
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.054 [INFO][6049] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.054 [INFO][6049] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.062 [WARNING][6049] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" HandleID="k8s-pod-network.ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0"
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.062 [INFO][6049] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" HandleID="k8s-pod-network.ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0"
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.065 [INFO][6049] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:15:15.076226 containerd[2016]: 2024-08-05 22:15:15.067 [INFO][6043] k8s.go 621: Teardown processing complete. ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"
Aug 5 22:15:15.080984 containerd[2016]: time="2024-08-05T22:15:15.076278593Z" level=info msg="TearDown network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\" successfully"
Aug 5 22:15:15.080984 containerd[2016]: time="2024-08-05T22:15:15.076309588Z" level=info msg="StopPodSandbox for \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\" returns successfully"
Aug 5 22:15:15.080984 containerd[2016]: time="2024-08-05T22:15:15.076907878Z" level=info msg="RemovePodSandbox for \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\""
Aug 5 22:15:15.080984 containerd[2016]: time="2024-08-05T22:15:15.077192433Z" level=info msg="Forcibly stopping sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\""
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.185 [WARNING][6067] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0", GenerateName:"calico-kube-controllers-75f5fbfb65-", Namespace:"calico-system", SelfLink:"", UID:"e9849bc8-dfda-4d92-a15c-331f7ac59401", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f5fbfb65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"ba812734f61ed7685f42121a710f65141f4df4b0ea27ec9c7499f7000fb9742d", Pod:"calico-kube-controllers-75f5fbfb65-fbxpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9cdac68db05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.186 [INFO][6067] k8s.go 608: Cleaning up netns ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.186 [INFO][6067] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" iface="eth0" netns=""
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.186 [INFO][6067] k8s.go 615: Releasing IP address(es) ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.186 [INFO][6067] utils.go 188: Calico CNI releasing IP address ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.237 [INFO][6073] ipam_plugin.go 411: Releasing address using handleID ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" HandleID="k8s-pod-network.ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0"
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.237 [INFO][6073] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.237 [INFO][6073] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.248 [WARNING][6073] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" HandleID="k8s-pod-network.ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0"
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.248 [INFO][6073] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" HandleID="k8s-pod-network.ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072" Workload="ip--172--31--21--119-k8s-calico--kube--controllers--75f5fbfb65--fbxpw-eth0"
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.250 [INFO][6073] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:15:15.254510 containerd[2016]: 2024-08-05 22:15:15.252 [INFO][6067] k8s.go 621: Teardown processing complete. ContainerID="ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072"
Aug 5 22:15:15.254510 containerd[2016]: time="2024-08-05T22:15:15.253865067Z" level=info msg="TearDown network for sandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\" successfully"
Aug 5 22:15:15.262677 containerd[2016]: time="2024-08-05T22:15:15.262188390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 5 22:15:15.262677 containerd[2016]: time="2024-08-05T22:15:15.262376744Z" level=info msg="RemovePodSandbox \"ea0e9455a06e9a66dc69a02dba83e41b7ff5c43f0b86383aaaa479e719d71072\" returns successfully"
Aug 5 22:15:15.265211 containerd[2016]: time="2024-08-05T22:15:15.264628831Z" level=info msg="StopPodSandbox for \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\""
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.392 [WARNING][6091] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49ea90fb-8427-4eac-8b89-57071ef71ebc", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c", Pod:"csi-node-driver-pqxtn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali33d36ae7431", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.392 [INFO][6091] k8s.go 608: Cleaning up netns ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.392 [INFO][6091] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" iface="eth0" netns=""
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.392 [INFO][6091] k8s.go 615: Releasing IP address(es) ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.392 [INFO][6091] utils.go 188: Calico CNI releasing IP address ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.457 [INFO][6097] ipam_plugin.go 411: Releasing address using handleID ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" HandleID="k8s-pod-network.48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.458 [INFO][6097] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.458 [INFO][6097] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.472 [WARNING][6097] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" HandleID="k8s-pod-network.48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.473 [INFO][6097] ipam_plugin.go 439: Releasing address using workloadID ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" HandleID="k8s-pod-network.48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0"
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.475 [INFO][6097] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:15:15.501287 containerd[2016]: 2024-08-05 22:15:15.478 [INFO][6091] k8s.go 621: Teardown processing complete. ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"
Aug 5 22:15:15.501287 containerd[2016]: time="2024-08-05T22:15:15.491677767Z" level=info msg="TearDown network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\" successfully"
Aug 5 22:15:15.501287 containerd[2016]: time="2024-08-05T22:15:15.491705813Z" level=info msg="StopPodSandbox for \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\" returns successfully"
Aug 5 22:15:15.501287 containerd[2016]: time="2024-08-05T22:15:15.493578984Z" level=info msg="RemovePodSandbox for \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\""
Aug 5 22:15:15.501287 containerd[2016]: time="2024-08-05T22:15:15.496723344Z" level=info msg="Forcibly stopping sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\""
Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.642 [WARNING][6121] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49ea90fb-8427-4eac-8b89-57071ef71ebc", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-119", ContainerID:"0cabf6d6ff1706071ddd6a0d0e992338254a36013320bf06de75d91c65c1742c", Pod:"csi-node-driver-pqxtn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali33d36ae7431", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.642 [INFO][6121] k8s.go 608: Cleaning up netns ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb"
Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.642 [INFO][6121] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring.
ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" iface="eth0" netns="" Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.642 [INFO][6121] k8s.go 615: Releasing IP address(es) ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.642 [INFO][6121] utils.go 188: Calico CNI releasing IP address ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.676 [INFO][6127] ipam_plugin.go 411: Releasing address using handleID ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" HandleID="k8s-pod-network.48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0" Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.676 [INFO][6127] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.676 [INFO][6127] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.686 [WARNING][6127] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" HandleID="k8s-pod-network.48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0" Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.686 [INFO][6127] ipam_plugin.go 439: Releasing address using workloadID ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" HandleID="k8s-pod-network.48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Workload="ip--172--31--21--119-k8s-csi--node--driver--pqxtn-eth0" Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.688 [INFO][6127] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:15:15.692374 containerd[2016]: 2024-08-05 22:15:15.690 [INFO][6121] k8s.go 621: Teardown processing complete. ContainerID="48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb" Aug 5 22:15:15.693669 containerd[2016]: time="2024-08-05T22:15:15.692462253Z" level=info msg="TearDown network for sandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\" successfully" Aug 5 22:15:15.697388 containerd[2016]: time="2024-08-05T22:15:15.697342270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:15:15.697644 containerd[2016]: time="2024-08-05T22:15:15.697547138Z" level=info msg="RemovePodSandbox \"48dbc5b11f242060ff194c28e0f8b0365a572fc4081e01eb3f95ed0678c1edfb\" returns successfully" Aug 5 22:15:16.580142 systemd[1]: Started sshd@24-172.31.21.119:22-139.178.89.65:42556.service - OpenSSH per-connection server daemon (139.178.89.65:42556). 
Aug 5 22:15:16.892459 sshd[6134]: Accepted publickey for core from 139.178.89.65 port 42556 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:16.905660 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:16.932059 systemd-logind[1985]: New session 25 of user core.
Aug 5 22:15:16.961603 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 5 22:15:17.306267 sshd[6134]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:17.320713 systemd-logind[1985]: Session 25 logged out. Waiting for processes to exit.
Aug 5 22:15:17.321946 systemd[1]: sshd@24-172.31.21.119:22-139.178.89.65:42556.service: Deactivated successfully.
Aug 5 22:15:17.353459 systemd[1]: session-25.scope: Deactivated successfully.
Aug 5 22:15:17.354791 systemd-logind[1985]: Removed session 25.
Aug 5 22:15:22.340771 systemd[1]: Started sshd@25-172.31.21.119:22-139.178.89.65:55440.service - OpenSSH per-connection server daemon (139.178.89.65:55440).
Aug 5 22:15:22.540809 sshd[6159]: Accepted publickey for core from 139.178.89.65 port 55440 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:15:22.541880 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:22.562338 systemd-logind[1985]: New session 26 of user core.
Aug 5 22:15:22.580875 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 5 22:15:22.864200 sshd[6159]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:22.868374 systemd[1]: sshd@25-172.31.21.119:22-139.178.89.65:55440.service: Deactivated successfully.
Aug 5 22:15:22.875705 systemd-logind[1985]: Session 26 logged out. Waiting for processes to exit.
Aug 5 22:15:22.876991 systemd[1]: session-26.scope: Deactivated successfully.
Aug 5 22:15:22.879166 systemd-logind[1985]: Removed session 26.
Aug 5 22:15:37.373029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4475fa79679104b4e6c4566dd4ce0414c762719ca24d691c10090df2a773b9cb-rootfs.mount: Deactivated successfully.
Aug 5 22:15:37.384733 containerd[2016]: time="2024-08-05T22:15:37.368135972Z" level=info msg="shim disconnected" id=4475fa79679104b4e6c4566dd4ce0414c762719ca24d691c10090df2a773b9cb namespace=k8s.io
Aug 5 22:15:37.384733 containerd[2016]: time="2024-08-05T22:15:37.384734532Z" level=warning msg="cleaning up after shim disconnected" id=4475fa79679104b4e6c4566dd4ce0414c762719ca24d691c10090df2a773b9cb namespace=k8s.io
Aug 5 22:15:37.385298 containerd[2016]: time="2024-08-05T22:15:37.384753718Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:15:37.406663 containerd[2016]: time="2024-08-05T22:15:37.405912760Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:15:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 5 22:15:37.933650 containerd[2016]: time="2024-08-05T22:15:37.933580928Z" level=info msg="shim disconnected" id=180d7fdecb4ad8633aa7391f8d538ea987a51fe018fb436234790ad34c871559 namespace=k8s.io
Aug 5 22:15:37.933650 containerd[2016]: time="2024-08-05T22:15:37.933650246Z" level=warning msg="cleaning up after shim disconnected" id=180d7fdecb4ad8633aa7391f8d538ea987a51fe018fb436234790ad34c871559 namespace=k8s.io
Aug 5 22:15:37.937396 containerd[2016]: time="2024-08-05T22:15:37.933660839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:15:37.939718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-180d7fdecb4ad8633aa7391f8d538ea987a51fe018fb436234790ad34c871559-rootfs.mount: Deactivated successfully.
Aug 5 22:15:38.161744 kubelet[3468]: I0805 22:15:38.161632 3468 scope.go:117] "RemoveContainer" containerID="4475fa79679104b4e6c4566dd4ce0414c762719ca24d691c10090df2a773b9cb"
Aug 5 22:15:38.168765 kubelet[3468]: I0805 22:15:38.168189 3468 scope.go:117] "RemoveContainer" containerID="180d7fdecb4ad8633aa7391f8d538ea987a51fe018fb436234790ad34c871559"
Aug 5 22:15:38.197732 containerd[2016]: time="2024-08-05T22:15:38.197315290Z" level=info msg="CreateContainer within sandbox \"b1afd86d4439cda0b41825f2c0d175a6d682f47e36cd50b6d57e65a77fb41078\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 5 22:15:38.199578 containerd[2016]: time="2024-08-05T22:15:38.199265214Z" level=info msg="CreateContainer within sandbox \"c2850f0d750a685465e4a4e6c9348b0d6ff655b31bffc62041bab60aa5c13248\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 5 22:15:38.308458 containerd[2016]: time="2024-08-05T22:15:38.305821539Z" level=info msg="CreateContainer within sandbox \"c2850f0d750a685465e4a4e6c9348b0d6ff655b31bffc62041bab60aa5c13248\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ee253510e9c4488d8d9be51063bead6847f8ca278e1922ea1383cc5b215c4b41\""
Aug 5 22:15:38.316478 containerd[2016]: time="2024-08-05T22:15:38.315754794Z" level=info msg="StartContainer for \"ee253510e9c4488d8d9be51063bead6847f8ca278e1922ea1383cc5b215c4b41\""
Aug 5 22:15:38.318319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356671684.mount: Deactivated successfully.
Aug 5 22:15:38.319461 containerd[2016]: time="2024-08-05T22:15:38.319154471Z" level=info msg="CreateContainer within sandbox \"b1afd86d4439cda0b41825f2c0d175a6d682f47e36cd50b6d57e65a77fb41078\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"99f41a631dfc6ff5f56ae23713d7e11ce29ab62ad0f9b5cadf385060f2e32280\""
Aug 5 22:15:38.319970 containerd[2016]: time="2024-08-05T22:15:38.319861319Z" level=info msg="StartContainer for \"99f41a631dfc6ff5f56ae23713d7e11ce29ab62ad0f9b5cadf385060f2e32280\""
Aug 5 22:15:38.638107 containerd[2016]: time="2024-08-05T22:15:38.638026481Z" level=info msg="StartContainer for \"ee253510e9c4488d8d9be51063bead6847f8ca278e1922ea1383cc5b215c4b41\" returns successfully"
Aug 5 22:15:38.641447 containerd[2016]: time="2024-08-05T22:15:38.638363746Z" level=info msg="StartContainer for \"99f41a631dfc6ff5f56ae23713d7e11ce29ab62ad0f9b5cadf385060f2e32280\" returns successfully"
Aug 5 22:15:42.761569 containerd[2016]: time="2024-08-05T22:15:42.761287176Z" level=info msg="shim disconnected" id=eba5e770ba9568dab4686257e83919ca1a1327f9dec76521dcfdd47734653aa5 namespace=k8s.io
Aug 5 22:15:42.761569 containerd[2016]: time="2024-08-05T22:15:42.761358446Z" level=warning msg="cleaning up after shim disconnected" id=eba5e770ba9568dab4686257e83919ca1a1327f9dec76521dcfdd47734653aa5 namespace=k8s.io
Aug 5 22:15:42.761569 containerd[2016]: time="2024-08-05T22:15:42.761383441Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:15:42.772778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eba5e770ba9568dab4686257e83919ca1a1327f9dec76521dcfdd47734653aa5-rootfs.mount: Deactivated successfully.
Aug 5 22:15:43.234044 kubelet[3468]: I0805 22:15:43.234010 3468 scope.go:117] "RemoveContainer" containerID="eba5e770ba9568dab4686257e83919ca1a1327f9dec76521dcfdd47734653aa5"
Aug 5 22:15:43.237322 containerd[2016]: time="2024-08-05T22:15:43.237284334Z" level=info msg="CreateContainer within sandbox \"78f982455b11f9a92ace336a2708a0fc3c12670ab691ec20a5a40a4787118593\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 5 22:15:43.265330 containerd[2016]: time="2024-08-05T22:15:43.265279364Z" level=info msg="CreateContainer within sandbox \"78f982455b11f9a92ace336a2708a0fc3c12670ab691ec20a5a40a4787118593\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"06cacd2b01e41298286132a1e8893587ed844c6722f19f5114a8045325ba65dc\""
Aug 5 22:15:43.267437 containerd[2016]: time="2024-08-05T22:15:43.266047529Z" level=info msg="StartContainer for \"06cacd2b01e41298286132a1e8893587ed844c6722f19f5114a8045325ba65dc\""
Aug 5 22:15:43.269156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034474456.mount: Deactivated successfully.
Aug 5 22:15:43.438225 containerd[2016]: time="2024-08-05T22:15:43.438174836Z" level=info msg="StartContainer for \"06cacd2b01e41298286132a1e8893587ed844c6722f19f5114a8045325ba65dc\" returns successfully"
Aug 5 22:15:43.770186 systemd[1]: run-containerd-runc-k8s.io-06cacd2b01e41298286132a1e8893587ed844c6722f19f5114a8045325ba65dc-runc.UgSqaD.mount: Deactivated successfully.
Aug 5 22:15:46.748136 kubelet[3468]: E0805 22:15:46.748057 3468 controller.go:193] "Failed to update lease" err="Put \"https://172.31.21.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-119?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 5 22:15:56.749315 kubelet[3468]: E0805 22:15:56.749204 3468 controller.go:193] "Failed to update lease" err="Put \"https://172.31.21.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-119?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"