Sep 4 17:22:04.968127 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024 Sep 4 17:22:04.968167 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:22:04.968184 kernel: BIOS-provided physical RAM map: Sep 4 17:22:04.968193 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 4 17:22:04.968203 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 4 17:22:04.968213 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 4 17:22:04.968229 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Sep 4 17:22:04.968241 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Sep 4 17:22:04.968251 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Sep 4 17:22:04.968261 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 4 17:22:04.968274 kernel: NX (Execute Disable) protection: active Sep 4 17:22:04.968285 kernel: APIC: Static calls initialized Sep 4 17:22:04.968296 kernel: SMBIOS 2.7 present. 
Sep 4 17:22:04.968307 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 4 17:22:04.968324 kernel: Hypervisor detected: KVM Sep 4 17:22:04.968337 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 17:22:04.968349 kernel: kvm-clock: using sched offset of 6740984218 cycles Sep 4 17:22:04.968363 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 17:22:04.968376 kernel: tsc: Detected 2500.004 MHz processor Sep 4 17:22:04.968389 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:22:04.968402 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:22:04.968418 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Sep 4 17:22:04.968431 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 4 17:22:04.968446 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:22:04.968459 kernel: Using GB pages for direct mapping Sep 4 17:22:04.968474 kernel: ACPI: Early table checksum verification disabled Sep 4 17:22:04.968487 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Sep 4 17:22:04.968502 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Sep 4 17:22:04.968516 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 4 17:22:04.968550 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 4 17:22:04.968567 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Sep 4 17:22:04.968579 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 4 17:22:04.968589 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 4 17:22:04.968601 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 4 17:22:04.968612 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 
00000001) Sep 4 17:22:04.968622 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 4 17:22:04.968634 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 4 17:22:04.968645 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 4 17:22:04.968660 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Sep 4 17:22:04.968672 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Sep 4 17:22:04.968690 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Sep 4 17:22:04.970134 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Sep 4 17:22:04.970149 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Sep 4 17:22:04.970162 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Sep 4 17:22:04.970180 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Sep 4 17:22:04.970192 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Sep 4 17:22:04.970210 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Sep 4 17:22:04.970223 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Sep 4 17:22:04.970236 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:22:04.970249 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:22:04.970263 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 4 17:22:04.970277 kernel: NUMA: Initialized distance table, cnt=1 Sep 4 17:22:04.970289 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Sep 4 17:22:04.970306 kernel: Zone ranges: Sep 4 17:22:04.970318 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:22:04.970331 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Sep 4 17:22:04.970345 kernel: Normal empty Sep 4 17:22:04.970358 kernel: Movable zone start for each node Sep 4 17:22:04.970371 kernel: Early memory 
node ranges Sep 4 17:22:04.970384 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 4 17:22:04.970397 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Sep 4 17:22:04.970410 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Sep 4 17:22:04.970425 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:22:04.970438 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 4 17:22:04.970450 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Sep 4 17:22:04.970462 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 17:22:04.970474 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 17:22:04.970487 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 4 17:22:04.970500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 17:22:04.970513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:22:04.970525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 17:22:04.970557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 17:22:04.970569 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:22:04.970582 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 17:22:04.970594 kernel: TSC deadline timer available Sep 4 17:22:04.970607 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:22:04.970620 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 17:22:04.970632 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Sep 4 17:22:04.970645 kernel: Booting paravirtualized kernel on KVM Sep 4 17:22:04.970658 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:22:04.970671 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:22:04.970687 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 
17:22:04.970699 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:22:04.970711 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:22:04.970724 kernel: kvm-guest: PV spinlocks enabled Sep 4 17:22:04.970737 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:22:04.970751 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:22:04.970764 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:22:04.970778 kernel: random: crng init done Sep 4 17:22:04.970791 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:22:04.970804 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:22:04.970816 kernel: Fallback order for Node 0: 0 Sep 4 17:22:04.970829 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 506242 Sep 4 17:22:04.970841 kernel: Policy zone: DMA32 Sep 4 17:22:04.970854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:22:04.970867 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 131296K reserved, 0K cma-reserved) Sep 4 17:22:04.970880 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:22:04.970895 kernel: Kernel/User page tables isolation: enabled Sep 4 17:22:04.970908 kernel: ftrace: allocating 37670 entries in 148 pages Sep 4 17:22:04.970920 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:22:04.970933 kernel: Dynamic Preempt: voluntary Sep 4 17:22:04.970945 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:22:04.970959 kernel: rcu: RCU event tracing is enabled. Sep 4 17:22:04.970972 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:22:04.970986 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:22:04.970999 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:22:04.971017 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:22:04.971031 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 17:22:04.971044 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:22:04.971059 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 4 17:22:04.971074 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 4 17:22:04.971088 kernel: Console: colour VGA+ 80x25 Sep 4 17:22:04.971103 kernel: printk: console [ttyS0] enabled Sep 4 17:22:04.971118 kernel: ACPI: Core revision 20230628 Sep 4 17:22:04.971133 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 4 17:22:04.971148 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:22:04.971167 kernel: x2apic enabled Sep 4 17:22:04.971183 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 17:22:04.971211 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 4 17:22:04.971231 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004) Sep 4 17:22:04.971248 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 4 17:22:04.971264 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 4 17:22:04.971281 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:22:04.971297 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:22:04.971313 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:22:04.971329 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:22:04.971345 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 4 17:22:04.971361 kernel: RETBleed: Vulnerable Sep 4 17:22:04.971381 kernel: Speculative Store Bypass: Vulnerable Sep 4 17:22:04.971397 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:22:04.971412 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:22:04.971428 kernel: GDS: Unknown: Dependent on hypervisor status Sep 4 17:22:04.971444 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:22:04.971459 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:22:04.971480 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:22:04.971496 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 4 17:22:04.971512 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 4 17:22:04.971528 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 4 17:22:04.971565 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 4 17:22:04.971578 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 4 17:22:04.971592 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 4 17:22:04.971606 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:22:04.971621 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 4 17:22:04.971637 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 4 17:22:04.971652 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 4 17:22:04.971673 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 4 17:22:04.971688 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 4 17:22:04.971704 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 4 17:22:04.971721 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Sep 4 17:22:04.971737 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:22:04.971753 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:22:04.971770 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:22:04.971786 kernel: SELinux: Initializing. Sep 4 17:22:04.971802 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 4 17:22:04.971819 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 4 17:22:04.971835 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 4 17:22:04.971851 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:22:04.971872 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:22:04.971888 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:22:04.971905 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 4 17:22:04.971922 kernel: signal: max sigframe size: 3632 Sep 4 17:22:04.971939 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:22:04.971956 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:22:04.971973 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:22:04.971989 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:22:04.972005 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:22:04.972027 kernel: .... node #0, CPUs: #1 Sep 4 17:22:04.972044 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 4 17:22:04.972061 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 4 17:22:04.972079 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:22:04.972095 kernel: smpboot: Max logical packages: 1 Sep 4 17:22:04.972112 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) Sep 4 17:22:04.972128 kernel: devtmpfs: initialized Sep 4 17:22:04.972145 kernel: x86/mm: Memory block size: 128MB Sep 4 17:22:04.972165 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:22:04.972182 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:22:04.972198 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:22:04.972215 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:22:04.972231 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:22:04.972249 kernel: audit: type=2000 audit(1725470523.777:1): state=initialized audit_enabled=0 res=1 Sep 4 17:22:04.972265 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:22:04.972281 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:22:04.972298 kernel: cpuidle: using governor menu Sep 4 17:22:04.972318 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:22:04.972334 kernel: dca service started, version 1.12.1 Sep 4 17:22:04.972350 kernel: PCI: Using configuration type 1 for base access Sep 4 17:22:04.972367 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 4 17:22:04.972384 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:22:04.972401 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:22:04.972417 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:22:04.972434 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:22:04.972450 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:22:04.972470 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:22:04.972487 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:22:04.972503 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:22:04.972520 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 4 17:22:04.972552 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:22:04.972565 kernel: ACPI: Interpreter enabled Sep 4 17:22:04.972577 kernel: ACPI: PM: (supports S0 S5) Sep 4 17:22:04.972589 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:22:04.972602 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:22:04.972619 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 17:22:04.972632 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 4 17:22:04.972645 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:22:04.974208 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:22:04.974715 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 4 17:22:04.974883 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 4 17:22:04.974907 kernel: acpiphp: Slot [3] registered Sep 4 17:22:04.974930 kernel: acpiphp: Slot [4] registered Sep 4 17:22:04.974947 kernel: acpiphp: Slot [5] registered Sep 4 17:22:04.974964 kernel: acpiphp: Slot [6] registered Sep 4 17:22:04.974981 kernel: acpiphp: Slot [7] 
registered Sep 4 17:22:04.974997 kernel: acpiphp: Slot [8] registered Sep 4 17:22:04.975547 kernel: acpiphp: Slot [9] registered Sep 4 17:22:04.975566 kernel: acpiphp: Slot [10] registered Sep 4 17:22:04.975579 kernel: acpiphp: Slot [11] registered Sep 4 17:22:04.975592 kernel: acpiphp: Slot [12] registered Sep 4 17:22:04.975648 kernel: acpiphp: Slot [13] registered Sep 4 17:22:04.975735 kernel: acpiphp: Slot [14] registered Sep 4 17:22:04.975748 kernel: acpiphp: Slot [15] registered Sep 4 17:22:04.975761 kernel: acpiphp: Slot [16] registered Sep 4 17:22:04.975773 kernel: acpiphp: Slot [17] registered Sep 4 17:22:04.975811 kernel: acpiphp: Slot [18] registered Sep 4 17:22:04.975823 kernel: acpiphp: Slot [19] registered Sep 4 17:22:04.975836 kernel: acpiphp: Slot [20] registered Sep 4 17:22:04.975849 kernel: acpiphp: Slot [21] registered Sep 4 17:22:04.975862 kernel: acpiphp: Slot [22] registered Sep 4 17:22:04.975879 kernel: acpiphp: Slot [23] registered Sep 4 17:22:04.975891 kernel: acpiphp: Slot [24] registered Sep 4 17:22:04.975904 kernel: acpiphp: Slot [25] registered Sep 4 17:22:04.975967 kernel: acpiphp: Slot [26] registered Sep 4 17:22:04.975980 kernel: acpiphp: Slot [27] registered Sep 4 17:22:04.975993 kernel: acpiphp: Slot [28] registered Sep 4 17:22:04.976009 kernel: acpiphp: Slot [29] registered Sep 4 17:22:04.976024 kernel: acpiphp: Slot [30] registered Sep 4 17:22:04.976037 kernel: acpiphp: Slot [31] registered Sep 4 17:22:04.976050 kernel: PCI host bridge to bus 0000:00 Sep 4 17:22:04.976698 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 17:22:04.976987 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 17:22:04.977172 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 17:22:04.977370 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 4 17:22:04.977561 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:22:04.977751 kernel: 
pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 17:22:04.977923 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 4 17:22:04.978078 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Sep 4 17:22:04.978374 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 17:22:04.978664 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Sep 4 17:22:04.978841 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 4 17:22:04.978979 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 4 17:22:04.979108 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 4 17:22:04.979322 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 4 17:22:04.979453 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 4 17:22:04.979605 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 4 17:22:04.979840 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Sep 4 17:22:04.979984 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Sep 4 17:22:04.980118 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Sep 4 17:22:04.980263 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 17:22:04.980562 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 4 17:22:04.980711 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Sep 4 17:22:04.980858 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 4 17:22:04.980995 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Sep 4 17:22:04.981016 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 17:22:04.981033 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 17:22:04.981050 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 17:22:04.981070 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 17:22:04.981086 kernel: ACPI: PCI: 
Interrupt link LNKS configured for IRQ 9 Sep 4 17:22:04.981163 kernel: iommu: Default domain type: Translated Sep 4 17:22:04.981181 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:22:04.981253 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:22:04.981272 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 17:22:04.981288 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 4 17:22:04.981304 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Sep 4 17:22:04.981470 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 4 17:22:04.981708 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 4 17:22:04.981843 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 17:22:04.981861 kernel: vgaarb: loaded Sep 4 17:22:04.981877 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 4 17:22:04.981892 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 4 17:22:04.981907 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 17:22:04.981988 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:22:04.982005 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:22:04.982025 kernel: pnp: PnP ACPI init Sep 4 17:22:04.982040 kernel: pnp: PnP ACPI: found 5 devices Sep 4 17:22:04.982055 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:22:04.982070 kernel: NET: Registered PF_INET protocol family Sep 4 17:22:04.982086 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:22:04.982140 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 4 17:22:04.982159 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:22:04.982176 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:22:04.982291 kernel: TCP bind hash table entries: 16384 (order: 7, 
524288 bytes, linear) Sep 4 17:22:04.982313 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 4 17:22:04.982328 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 4 17:22:04.982378 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 4 17:22:04.982396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:22:04.982411 kernel: NET: Registered PF_XDP protocol family Sep 4 17:22:04.982816 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 17:22:04.983366 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 17:22:04.983697 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 17:22:04.984203 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 4 17:22:04.984581 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 17:22:04.984670 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:22:04.985135 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:22:04.985155 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 4 17:22:04.985170 kernel: clocksource: Switched to clocksource tsc Sep 4 17:22:04.985185 kernel: Initialise system trusted keyrings Sep 4 17:22:04.985246 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 4 17:22:04.985269 kernel: Key type asymmetric registered Sep 4 17:22:04.985284 kernel: Asymmetric key parser 'x509' registered Sep 4 17:22:04.985298 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:22:04.985313 kernel: io scheduler mq-deadline registered Sep 4 17:22:04.985327 kernel: io scheduler kyber registered Sep 4 17:22:04.985342 kernel: io scheduler bfq registered Sep 4 17:22:04.985357 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:22:04.985371 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 
17:22:04.985387 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:22:04.985402 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 17:22:04.985420 kernel: i8042: Warning: Keylock active Sep 4 17:22:04.985435 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 17:22:04.985451 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 17:22:04.985650 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 4 17:22:04.985778 kernel: rtc_cmos 00:00: registered as rtc0 Sep 4 17:22:04.985903 kernel: rtc_cmos 00:00: setting system clock to 2024-09-04T17:22:04 UTC (1725470524) Sep 4 17:22:04.986159 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 4 17:22:04.986190 kernel: intel_pstate: CPU model not supported Sep 4 17:22:04.986210 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:22:04.986224 kernel: Segment Routing with IPv6 Sep 4 17:22:04.986239 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:22:04.986252 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:22:04.986267 kernel: Key type dns_resolver registered Sep 4 17:22:04.986282 kernel: IPI shorthand broadcast: enabled Sep 4 17:22:04.986297 kernel: sched_clock: Marking stable (699002165, 259928457)->(1037837896, -78907274) Sep 4 17:22:04.986312 kernel: registered taskstats version 1 Sep 4 17:22:04.986327 kernel: Loading compiled-in X.509 certificates Sep 4 17:22:04.986346 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02' Sep 4 17:22:04.986361 kernel: Key type .fscrypt registered Sep 4 17:22:04.986376 kernel: Key type fscrypt-provisioning registered Sep 4 17:22:04.986391 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 4 17:22:04.986406 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:22:04.986421 kernel: ima: No architecture policies found Sep 4 17:22:04.986436 kernel: clk: Disabling unused clocks Sep 4 17:22:04.986450 kernel: Freeing unused kernel image (initmem) memory: 49336K Sep 4 17:22:04.986467 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:22:04.986481 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Sep 4 17:22:04.986496 kernel: Run /init as init process Sep 4 17:22:04.986511 kernel: with arguments: Sep 4 17:22:04.986526 kernel: /init Sep 4 17:22:04.986572 kernel: with environment: Sep 4 17:22:04.986585 kernel: HOME=/ Sep 4 17:22:04.986598 kernel: TERM=linux Sep 4 17:22:04.986611 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:22:04.986632 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:22:04.986650 systemd[1]: Detected virtualization amazon. Sep 4 17:22:04.986687 systemd[1]: Detected architecture x86-64. Sep 4 17:22:04.986703 systemd[1]: Running in initrd. Sep 4 17:22:04.986720 systemd[1]: No hostname configured, using default hostname. Sep 4 17:22:04.986742 systemd[1]: Hostname set to . Sep 4 17:22:04.986760 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:22:04.986776 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:22:04.986793 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:22:04.986809 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:22:04.986827 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 4 17:22:04.986844 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:22:04.986861 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:22:04.986881 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:22:04.986898 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:22:04.986913 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:22:04.986928 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:22:04.986944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:22:04.986960 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:22:04.986975 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:22:04.986995 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:22:04.987011 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:22:04.987029 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:22:04.987045 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:22:04.987062 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:22:04.987077 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:22:04.987094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:22:04.987110 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:22:04.987130 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:22:04.987266 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:22:04.987285 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Sep 4 17:22:04.987303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:22:04.987317 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:22:04.987333 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:22:04.987349 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:22:04.987371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:22:04.987390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:22:04.987405 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:22:04.987455 systemd-journald[178]: Collecting audit messages is disabled.
Sep 4 17:22:04.987497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:22:04.987513 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:22:04.987572 systemd-journald[178]: Journal started
Sep 4 17:22:04.987606 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2a205d6bb801f5a4e3d242218c386c) is 4.8M, max 38.6M, 33.7M free.
Sep 4 17:22:04.991257 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:22:04.998035 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:22:05.006559 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 17:22:05.018279 systemd-modules-load[179]: Inserted module 'overlay'
Sep 4 17:22:05.164547 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:22:05.164636 kernel: Bridge firewalling registered
Sep 4 17:22:05.029168 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:22:05.072900 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 4 17:22:05.172190 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:22:05.186750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:22:05.198848 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:22:05.209869 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:22:05.212400 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:22:05.221864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:22:05.228278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:22:05.247674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:22:05.259314 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:22:05.259854 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:22:05.266492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:22:05.275702 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:22:05.298600 dracut-cmdline[214]: dracut-dracut-053
Sep 4 17:22:05.308355 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:22:05.333486 systemd-resolved[207]: Positive Trust Anchors:
Sep 4 17:22:05.333505 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:22:05.333588 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:22:05.350136 systemd-resolved[207]: Defaulting to hostname 'linux'.
Sep 4 17:22:05.353073 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:22:05.356222 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:22:05.417569 kernel: SCSI subsystem initialized
Sep 4 17:22:05.430568 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:22:05.446558 kernel: iscsi: registered transport (tcp)
Sep 4 17:22:05.476565 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:22:05.476634 kernel: QLogic iSCSI HBA Driver
Sep 4 17:22:05.538464 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:22:05.547754 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:22:05.593057 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:22:05.593176 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:22:05.593266 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:22:05.697885 kernel: raid6: avx512x4 gen() 5091 MB/s
Sep 4 17:22:05.714615 kernel: raid6: avx512x2 gen() 8121 MB/s
Sep 4 17:22:05.731612 kernel: raid6: avx512x1 gen() 9905 MB/s
Sep 4 17:22:05.748593 kernel: raid6: avx2x4 gen() 9742 MB/s
Sep 4 17:22:05.766921 kernel: raid6: avx2x2 gen() 7571 MB/s
Sep 4 17:22:05.783831 kernel: raid6: avx2x1 gen() 5447 MB/s
Sep 4 17:22:05.784604 kernel: raid6: using algorithm avx512x1 gen() 9905 MB/s
Sep 4 17:22:05.802130 kernel: raid6: .... xor() 18222 MB/s, rmw enabled
Sep 4 17:22:05.802239 kernel: raid6: using avx512x2 recovery algorithm
Sep 4 17:22:05.833685 kernel: xor: automatically using best checksumming function avx
Sep 4 17:22:06.061598 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:22:06.074343 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:22:06.081800 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:22:06.109764 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Sep 4 17:22:06.115371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:22:06.126783 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:22:06.152244 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Sep 4 17:22:06.185433 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:22:06.195736 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:22:06.285917 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:22:06.301953 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:22:06.350490 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:22:06.358833 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:22:06.361355 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:22:06.366472 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:22:06.376920 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:22:06.423354 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:22:06.423663 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:22:06.425584 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 4 17:22:06.433839 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:22:06.439675 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:af:3d:67:9f:a5
Sep 4 17:22:06.451676 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:22:06.471823 (udev-worker)[446]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:22:06.483062 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:22:06.483141 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:22:06.502873 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:22:06.503119 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:22:06.507697 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:22:06.510251 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:22:06.510336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:22:06.511773 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:22:06.524860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:22:06.537041 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:22:06.537320 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 17:22:06.548554 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:22:06.554073 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:22:06.554135 kernel: GPT:9289727 != 16777215
Sep 4 17:22:06.554154 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:22:06.554172 kernel: GPT:9289727 != 16777215
Sep 4 17:22:06.554188 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:22:06.554284 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:22:06.645581 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (446)
Sep 4 17:22:06.692979 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Sep 4 17:22:06.714031 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:22:06.733869 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:22:06.740748 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:22:06.801836 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:22:06.802212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:22:06.812696 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:22:06.828321 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:22:06.829947 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 17:22:06.840789 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:22:06.847000 disk-uuid[630]: Primary Header is updated.
Sep 4 17:22:06.847000 disk-uuid[630]: Secondary Entries is updated.
Sep 4 17:22:06.847000 disk-uuid[630]: Secondary Header is updated.
Sep 4 17:22:06.854563 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:22:06.859644 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:22:07.866582 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:22:07.869741 disk-uuid[631]: The operation has completed successfully.
Sep 4 17:22:08.053496 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:22:08.053745 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:22:08.072821 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:22:08.081613 sh[974]: Success
Sep 4 17:22:08.107582 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 4 17:22:08.212739 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:22:08.222698 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:22:08.225978 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:22:08.260164 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep 4 17:22:08.260234 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:22:08.260263 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:22:08.261989 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:22:08.262045 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:22:08.350558 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:22:08.362464 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:22:08.363445 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:22:08.369699 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:22:08.372157 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:22:08.410079 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:22:08.410142 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:22:08.410155 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:22:08.414562 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:22:08.425928 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:22:08.427632 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:22:08.432729 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:22:08.439763 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:22:08.539208 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:22:08.547733 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:22:08.590860 systemd-networkd[1166]: lo: Link UP
Sep 4 17:22:08.590870 systemd-networkd[1166]: lo: Gained carrier
Sep 4 17:22:08.592710 systemd-networkd[1166]: Enumeration completed
Sep 4 17:22:08.593519 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:22:08.593524 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:22:08.595365 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:22:08.597904 systemd[1]: Reached target network.target - Network.
Sep 4 17:22:08.606163 systemd-networkd[1166]: eth0: Link UP
Sep 4 17:22:08.606169 systemd-networkd[1166]: eth0: Gained carrier
Sep 4 17:22:08.606202 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:22:08.635727 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.27.203/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:22:08.837994 ignition[1097]: Ignition 2.18.0
Sep 4 17:22:08.838012 ignition[1097]: Stage: fetch-offline
Sep 4 17:22:08.838317 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:22:08.838332 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:22:08.839295 ignition[1097]: Ignition finished successfully
Sep 4 17:22:08.844197 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:22:08.851809 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:22:08.873569 ignition[1176]: Ignition 2.18.0
Sep 4 17:22:08.873583 ignition[1176]: Stage: fetch
Sep 4 17:22:08.874084 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:22:08.874099 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:22:08.874312 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:22:08.924040 ignition[1176]: PUT result: OK
Sep 4 17:22:08.950442 ignition[1176]: parsed url from cmdline: ""
Sep 4 17:22:08.950455 ignition[1176]: no config URL provided
Sep 4 17:22:08.950465 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:22:08.950481 ignition[1176]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:22:08.950505 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:22:08.955791 ignition[1176]: PUT result: OK
Sep 4 17:22:08.955869 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:22:08.960405 ignition[1176]: GET result: OK
Sep 4 17:22:08.960521 ignition[1176]: parsing config with SHA512: 57566fcece66002b8b826d10ff04bbfbc695671e9501e269b447174c25c4a43eedb77d195cd8cf6d9e72cf5b2978f957adf8ee15fc6e899796c40c873b03be41
Sep 4 17:22:08.967792 unknown[1176]: fetched base config from "system"
Sep 4 17:22:08.967810 unknown[1176]: fetched base config from "system"
Sep 4 17:22:08.969760 ignition[1176]: fetch: fetch complete
Sep 4 17:22:08.967817 unknown[1176]: fetched user config from "aws"
Sep 4 17:22:08.969766 ignition[1176]: fetch: fetch passed
Sep 4 17:22:08.969822 ignition[1176]: Ignition finished successfully
Sep 4 17:22:08.976076 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:22:08.982752 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:22:09.013192 ignition[1183]: Ignition 2.18.0
Sep 4 17:22:09.013210 ignition[1183]: Stage: kargs
Sep 4 17:22:09.014287 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:22:09.014302 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:22:09.015014 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:22:09.019435 ignition[1183]: PUT result: OK
Sep 4 17:22:09.023727 ignition[1183]: kargs: kargs passed
Sep 4 17:22:09.023792 ignition[1183]: Ignition finished successfully
Sep 4 17:22:09.027317 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:22:09.033853 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:22:09.071370 ignition[1190]: Ignition 2.18.0
Sep 4 17:22:09.071385 ignition[1190]: Stage: disks
Sep 4 17:22:09.071941 ignition[1190]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:22:09.071954 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:22:09.072138 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:22:09.076871 ignition[1190]: PUT result: OK
Sep 4 17:22:09.086647 ignition[1190]: disks: disks passed
Sep 4 17:22:09.086741 ignition[1190]: Ignition finished successfully
Sep 4 17:22:09.088776 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:22:09.092455 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:22:09.094452 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:22:09.096186 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:22:09.110116 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:22:09.120004 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:22:09.137773 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:22:09.189674 systemd-fsck[1199]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:22:09.196371 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:22:09.208702 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:22:09.382562 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep 4 17:22:09.383519 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:22:09.384577 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:22:09.394674 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:22:09.405892 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:22:09.412266 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:22:09.412349 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:22:09.412389 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:22:09.422406 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:22:09.441803 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:22:09.458204 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1218)
Sep 4 17:22:09.465825 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:22:09.465915 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:22:09.465938 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:22:09.475074 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:22:09.476948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:22:09.836663 systemd-networkd[1166]: eth0: Gained IPv6LL
Sep 4 17:22:09.880000 initrd-setup-root[1242]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:22:09.895248 initrd-setup-root[1249]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:22:09.902050 initrd-setup-root[1256]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:22:09.908634 initrd-setup-root[1263]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:22:10.154654 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:22:10.168774 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:22:10.171735 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:22:10.208245 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:22:10.209628 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:22:10.242571 ignition[1336]: INFO : Ignition 2.18.0
Sep 4 17:22:10.242571 ignition[1336]: INFO : Stage: mount
Sep 4 17:22:10.242571 ignition[1336]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:22:10.242571 ignition[1336]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:22:10.242571 ignition[1336]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:22:10.250655 ignition[1336]: INFO : PUT result: OK
Sep 4 17:22:10.257837 ignition[1336]: INFO : mount: mount passed
Sep 4 17:22:10.260224 ignition[1336]: INFO : Ignition finished successfully
Sep 4 17:22:10.264675 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:22:10.274833 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:22:10.277772 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:22:10.388780 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:22:10.423565 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1348)
Sep 4 17:22:10.427144 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:22:10.427276 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:22:10.427305 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:22:10.431559 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:22:10.435510 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:22:10.468061 ignition[1365]: INFO : Ignition 2.18.0
Sep 4 17:22:10.469407 ignition[1365]: INFO : Stage: files
Sep 4 17:22:10.470972 ignition[1365]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:22:10.472371 ignition[1365]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:22:10.473757 ignition[1365]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:22:10.475827 ignition[1365]: INFO : PUT result: OK
Sep 4 17:22:10.481062 ignition[1365]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:22:10.485341 ignition[1365]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:22:10.485341 ignition[1365]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:22:10.522116 ignition[1365]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:22:10.523888 ignition[1365]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:22:10.534670 unknown[1365]: wrote ssh authorized keys file for user: core
Sep 4 17:22:10.536102 ignition[1365]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:22:10.540141 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 4 17:22:10.540141 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 4 17:22:10.540141 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:22:10.540141 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:22:10.602819 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 17:22:10.696961 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:22:10.696961 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:22:10.701430 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:22:10.701430 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:22:10.705320 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:22:10.707435 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:22:10.709515 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:22:10.711628 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:22:10.713582 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:22:10.717272 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:22:10.726487 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:22:10.737944 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:22:10.743786 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:22:10.743786 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:22:10.760175 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Sep 4 17:22:11.103233 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 17:22:11.467233 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:22:11.467233 ignition[1365]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:22:11.471679 ignition[1365]: INFO : files: files passed
Sep 4 17:22:11.471679 ignition[1365]: INFO : Ignition finished successfully
Sep 4 17:22:11.495962 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:22:11.514790 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:22:11.517848 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:22:11.553150 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:22:11.557353 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:22:11.582736 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:22:11.582736 initrd-setup-root-after-ignition[1395]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:22:11.592372 initrd-setup-root-after-ignition[1399]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:22:11.596814 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:22:11.600897 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:22:11.610814 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:22:11.653935 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:22:11.654090 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:22:11.664517 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:22:11.676197 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:22:11.676381 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:22:11.689448 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:22:11.734699 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:22:11.743191 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:22:11.782840 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:22:11.785827 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:22:11.787755 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:22:11.790318 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:22:11.790488 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:22:11.793653 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:22:11.797653 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:22:11.799013 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:22:11.801356 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:22:11.804017 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:22:11.805513 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:22:11.805658 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:22:11.810224 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:22:11.815834 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:22:11.817161 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:22:11.818575 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:22:11.818702 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:22:11.820502 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:22:11.820959 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:22:11.821260 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:22:11.823404 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:22:11.827149 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:22:11.827272 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:22:11.834097 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:22:11.834400 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:22:11.849284 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:22:11.849576 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:22:11.868621 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:22:11.883858 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:22:11.886438 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:22:11.886686 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:22:11.891485 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:22:11.893064 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:22:11.896653 ignition[1419]: INFO : Ignition 2.18.0
Sep 4 17:22:11.896653 ignition[1419]: INFO : Stage: umount
Sep 4 17:22:11.899335 ignition[1419]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:22:11.899335 ignition[1419]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:22:11.899335 ignition[1419]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:22:11.903360 ignition[1419]: INFO : PUT result: OK
Sep 4 17:22:11.906466 ignition[1419]: INFO : umount: umount passed
Sep 4 17:22:11.907430 ignition[1419]: INFO : Ignition finished successfully
Sep 4 17:22:11.908988 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:22:11.909130 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:22:11.913891 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:22:11.914025 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:22:11.920363 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:22:11.920570 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:22:11.921991 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:22:11.922057 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:22:11.926624 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:22:11.926698 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:22:11.927763 systemd[1]: Stopped target network.target - Network.
Sep 4 17:22:11.927909 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:22:11.927967 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:22:11.928443 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:22:11.944727 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:22:11.945008 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:22:11.946584 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:22:11.947845 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:22:11.956657 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:22:11.956714 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:22:11.959099 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:22:11.959149 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:22:11.964865 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:22:11.964940 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:22:11.966343 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:22:11.966392 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:22:11.967803 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:22:11.970769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:22:11.978228 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:22:11.978689 systemd-networkd[1166]: eth0: DHCPv6 lease lost
Sep 4 17:22:11.984619 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:22:11.984754 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:22:11.989968 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:22:11.990163 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:22:11.993018 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:22:11.993132 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:22:11.997685 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:22:11.997752 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:22:12.000130 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:22:12.000187 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:22:12.010831 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:22:12.013949 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:22:12.014039 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:22:12.016626 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:22:12.016681 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:22:12.017918 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:22:12.017963 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:22:12.020827 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:22:12.020879 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:22:12.024507 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:22:12.051042 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:22:12.051343 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:22:12.054239 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:22:12.054354 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:22:12.057939 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:22:12.058017 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:22:12.059795 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:22:12.060042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:22:12.063212 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:22:12.063354 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:22:12.065843 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:22:12.065891 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:22:12.068187 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:22:12.068250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:22:12.080088 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:22:12.082507 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:22:12.082608 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:22:12.085485 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:22:12.085579 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:22:12.093933 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:22:12.093991 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:22:12.095522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:22:12.095592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:22:12.099388 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:22:12.099479 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:22:12.101438 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:22:12.110769 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:22:12.130602 systemd[1]: Switching root.
Sep 4 17:22:12.157098 systemd-journald[178]: Journal stopped
Sep 4 17:22:14.751107 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:22:14.751191 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:22:14.751221 kernel: SELinux: policy capability open_perms=1
Sep 4 17:22:14.751238 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:22:14.751259 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:22:14.751275 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:22:14.751292 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:22:14.751313 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:22:14.751331 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:22:14.751347 kernel: audit: type=1403 audit(1725470532.948:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:22:14.751364 systemd[1]: Successfully loaded SELinux policy in 74.872ms.
Sep 4 17:22:14.751391 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.381ms.
Sep 4 17:22:14.751417 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:22:14.751436 systemd[1]: Detected virtualization amazon.
Sep 4 17:22:14.751454 systemd[1]: Detected architecture x86-64.
Sep 4 17:22:14.751478 systemd[1]: Detected first boot.
Sep 4 17:22:14.751496 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:22:14.751515 zram_generator::config[1480]: No configuration found.
Sep 4 17:22:14.753581 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:22:14.753622 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:22:14.753642 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 17:22:14.753662 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:22:14.753680 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:22:14.753698 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:22:14.753722 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:22:14.753741 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:22:14.753758 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:22:14.753782 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:22:14.753800 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:22:14.753818 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:22:14.753837 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:22:14.753855 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:22:14.753877 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:22:14.753895 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:22:14.753913 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:22:14.753930 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:22:14.753951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:22:14.753969 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:22:14.753987 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:22:14.754010 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:22:14.754028 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:22:14.754049 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:22:14.754066 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:22:14.754084 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:22:14.754102 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:22:14.754120 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:22:14.754147 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:22:14.754165 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:22:14.754183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:22:14.754202 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:22:14.754222 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:22:14.754239 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:22:14.754257 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:22:14.754325 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:22:14.754343 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:22:14.754362 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:22:14.754380 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:22:14.754399 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:22:14.754420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:22:14.754438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:22:14.754456 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:22:14.754473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:22:14.754491 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:22:14.754691 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:22:14.754716 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:22:14.754735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:22:14.754753 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:22:14.754777 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 4 17:22:14.754796 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 4 17:22:14.754814 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:22:14.754831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:22:14.754848 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:22:14.754866 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:22:14.754884 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:22:14.754903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:22:14.754924 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:22:14.754941 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:22:14.754959 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:22:14.754978 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:22:14.755033 systemd-journald[1575]: Collecting audit messages is disabled.
Sep 4 17:22:14.755074 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:22:14.755092 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:22:14.755109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:22:14.755131 systemd-journald[1575]: Journal started
Sep 4 17:22:14.755165 systemd-journald[1575]: Runtime Journal (/run/log/journal/ec2a205d6bb801f5a4e3d242218c386c) is 4.8M, max 38.6M, 33.7M free.
Sep 4 17:22:14.760552 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:22:14.759259 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:22:14.760678 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:22:14.775313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:22:14.775826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:22:14.784279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:22:14.784505 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:22:14.790091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:22:14.797037 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:22:14.813746 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:22:14.826871 kernel: loop: module loaded
Sep 4 17:22:14.827757 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:22:14.844779 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:22:14.857611 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:22:14.870175 kernel: fuse: init (API version 7.39)
Sep 4 17:22:14.871075 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:22:14.871371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:22:14.879076 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:22:14.881133 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:22:14.886951 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:22:14.889013 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:22:14.892582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:22:14.924969 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:22:14.926374 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:22:14.963832 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:22:14.967014 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:22:14.970708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:22:14.973593 kernel: ACPI: bus type drm_connector registered
Sep 4 17:22:14.986899 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:22:14.988867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:22:15.006066 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:22:15.012595 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:22:15.014631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:22:15.020207 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:22:15.042924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:22:15.049152 systemd-journald[1575]: Time spent on flushing to /var/log/journal/ec2a205d6bb801f5a4e3d242218c386c is 52.039ms for 949 entries.
Sep 4 17:22:15.049152 systemd-journald[1575]: System Journal (/var/log/journal/ec2a205d6bb801f5a4e3d242218c386c) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:22:15.127804 systemd-journald[1575]: Received client request to flush runtime journal.
Sep 4 17:22:15.057770 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:22:15.071252 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:22:15.073050 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:22:15.076126 systemd-tmpfiles[1612]: ACLs are not supported, ignoring.
Sep 4 17:22:15.076148 systemd-tmpfiles[1612]: ACLs are not supported, ignoring.
Sep 4 17:22:15.094129 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:22:15.108962 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:22:15.114609 udevadm[1639]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 17:22:15.134278 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:22:15.178421 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:22:15.189831 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:22:15.218044 systemd-tmpfiles[1651]: ACLs are not supported, ignoring.
Sep 4 17:22:15.218513 systemd-tmpfiles[1651]: ACLs are not supported, ignoring.
Sep 4 17:22:15.227365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:22:16.031316 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:22:16.051767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:22:16.156723 systemd-udevd[1657]: Using default interface naming scheme 'v255'.
Sep 4 17:22:16.222498 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:22:16.245381 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:22:16.319294 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:22:16.348891 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 4 17:22:16.391003 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1666)
Sep 4 17:22:16.392682 (udev-worker)[1665]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:22:16.469405 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:22:16.511556 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Sep 4 17:22:16.534711 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 17:22:16.550556 kernel: ACPI: button: Power Button [PWRF]
Sep 4 17:22:16.550644 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Sep 4 17:22:16.551980 kernel: ACPI: button: Sleep Button [SLPF]
Sep 4 17:22:16.579720 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Sep 4 17:22:16.667110 systemd-networkd[1663]: lo: Link UP
Sep 4 17:22:16.668031 systemd-networkd[1663]: lo: Gained carrier
Sep 4 17:22:16.678689 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1668)
Sep 4 17:22:16.679288 systemd-networkd[1663]: Enumeration completed
Sep 4 17:22:16.680093 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:22:16.680806 systemd-networkd[1663]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:22:16.680994 systemd-networkd[1663]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:22:16.710766 systemd-networkd[1663]: eth0: Link UP
Sep 4 17:22:16.711130 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:22:16.715268 systemd-networkd[1663]: eth0: Gained carrier
Sep 4 17:22:16.715299 systemd-networkd[1663]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:22:16.738289 systemd-networkd[1663]: eth0: DHCPv4 address 172.31.27.203/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:22:16.826036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:22:16.862444 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 17:22:17.008372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:22:17.025242 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:22:17.041282 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:22:17.083550 lvm[1778]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:22:17.119445 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:22:17.237216 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:22:17.254094 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:22:17.258758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:22:17.276840 lvm[1783]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:22:17.318060 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:22:17.320678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:22:17.322140 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:22:17.322185 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:22:17.325502 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:22:17.331745 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:22:17.345688 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:22:17.359842 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:22:17.361528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:22:17.363742 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:22:17.373884 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:22:17.380733 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:22:17.384806 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:22:17.421562 kernel: loop0: detected capacity change from 0 to 139904
Sep 4 17:22:17.421659 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:22:17.422334 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:22:17.471510 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:22:17.475648 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:22:17.532567 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:22:17.555587 kernel: loop1: detected capacity change from 0 to 209816
Sep 4 17:22:17.642559 kernel: loop2: detected capacity change from 0 to 80568
Sep 4 17:22:17.740558 kernel: loop3: detected capacity change from 0 to 60984
Sep 4 17:22:17.784562 kernel: loop4: detected capacity change from 0 to 139904
Sep 4 17:22:17.807563 kernel: loop5: detected capacity change from 0 to 209816
Sep 4 17:22:17.858605 kernel: loop6: detected capacity change from 0 to 80568
Sep 4 17:22:17.885558 kernel: loop7: detected capacity change from 0 to 60984
Sep 4 17:22:17.922355 (sd-merge)[1811]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 17:22:17.923136 (sd-merge)[1811]: Merged extensions into '/usr'.
Sep 4 17:22:17.956327 systemd[1]: Reloading requested from client PID 1795 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:22:17.956351 systemd[1]: Reloading...
Sep 4 17:22:18.066136 zram_generator::config[1837]: No configuration found.
Sep 4 17:22:18.249409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:22:18.360352 systemd[1]: Reloading finished in 403 ms.
Sep 4 17:22:18.386483 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:22:18.400863 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:22:18.406286 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:22:18.412720 systemd-networkd[1663]: eth0: Gained IPv6LL
Sep 4 17:22:18.420016 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:22:18.426987 systemd[1]: Reloading requested from client PID 1891 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:22:18.427015 systemd[1]: Reloading...
Sep 4 17:22:18.463089 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:22:18.463705 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:22:18.465219 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:22:18.465668 systemd-tmpfiles[1892]: ACLs are not supported, ignoring.
Sep 4 17:22:18.465756 systemd-tmpfiles[1892]: ACLs are not supported, ignoring.
Sep 4 17:22:18.474615 systemd-tmpfiles[1892]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:22:18.474630 systemd-tmpfiles[1892]: Skipping /boot
Sep 4 17:22:18.492822 systemd-tmpfiles[1892]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:22:18.492841 systemd-tmpfiles[1892]: Skipping /boot
Sep 4 17:22:18.530564 zram_generator::config[1918]: No configuration found.
Sep 4 17:22:18.734945 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:22:18.802486 ldconfig[1791]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:22:18.823576 systemd[1]: Reloading finished in 395 ms.
Sep 4 17:22:18.842792 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:22:18.850253 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:22:18.864729 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:22:18.878730 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:22:18.883441 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:22:18.895742 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:22:18.902092 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:22:18.923256 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:22:18.925771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:22:18.935862 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:22:18.942522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:22:18.954797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:22:18.959057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:22:18.960319 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:22:18.966708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:22:18.967304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:22:18.969590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:22:18.969812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:22:18.986783 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:22:18.987071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:22:18.992236 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:22:18.992724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:22:19.001955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:22:19.015006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:22:19.018897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:22:19.019079 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:22:19.028982 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:22:19.032351 augenrules[2015]: No rules
Sep 4 17:22:19.034697 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:22:19.037148 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:22:19.044062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:22:19.044234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:22:19.064168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:22:19.065409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:22:19.068350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:22:19.068712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:22:19.075937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:22:19.078589 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:22:19.124987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:22:19.127902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:22:19.128126 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:22:19.128362 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:22:19.146145 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:22:19.148165 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:22:19.155127 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:22:19.157917 systemd-resolved[1990]: Positive Trust Anchors:
Sep 4 17:22:19.158588 systemd-resolved[1990]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:22:19.158655 systemd-resolved[1990]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:22:19.171786 systemd-resolved[1990]: Defaulting to hostname 'linux'.
Sep 4 17:22:19.173372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:22:19.173867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:22:19.179759 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:22:19.182144 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:22:19.182363 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:22:19.185436 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:22:19.186363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:22:19.195929 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:22:19.210056 systemd[1]: Reached target network.target - Network.
Sep 4 17:22:19.213154 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:22:19.214764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:22:19.216313 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:22:19.216347 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:22:19.217493 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:22:19.220587 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:22:19.223207 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:22:19.226201 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:22:19.228164 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:22:19.229777 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:22:19.231426 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:22:19.232794 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:22:19.232847 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:22:19.235454 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:22:19.238826 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:22:19.244721 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:22:19.248263 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:22:19.253007 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:22:19.254492 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:22:19.255679 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:22:19.260368 systemd[1]: System is tainted: cgroupsv1
Sep 4 17:22:19.260440 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:22:19.260472 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:22:19.263651 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:22:19.268737 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 17:22:19.277759 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:22:19.282139 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:22:19.295738 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:22:19.296959 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:22:19.300018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:22:19.305492 jq[2051]: false
Sep 4 17:22:19.328223 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:22:19.332737 systemd[1]: Started ntpd.service - Network Time Service.
Sep 4 17:22:19.337749 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:22:19.359804 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:22:19.372662 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 4 17:22:19.387115 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:22:19.394527 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:22:19.394763 extend-filesystems[2053]: Found loop4
Sep 4 17:22:19.396396 extend-filesystems[2053]: Found loop5
Sep 4 17:22:19.396396 extend-filesystems[2053]: Found loop6
Sep 4 17:22:19.396396 extend-filesystems[2053]: Found loop7
Sep 4 17:22:19.396396 extend-filesystems[2053]: Found nvme0n1
Sep 4 17:22:19.396396 extend-filesystems[2053]: Found nvme0n1p1
Sep 4 17:22:19.396396 extend-filesystems[2053]: Found nvme0n1p2
Sep 4 17:22:19.396396 extend-filesystems[2053]: Found nvme0n1p3
Sep 4 17:22:19.396396 extend-filesystems[2053]: Found usr
Sep 4 17:22:19.396396 extend-filesystems[2053]: Found nvme0n1p4
Sep 4 17:22:19.434189 extend-filesystems[2053]: Found nvme0n1p6
Sep 4 17:22:19.434189 extend-filesystems[2053]: Found nvme0n1p7
Sep 4 17:22:19.434189 extend-filesystems[2053]: Found nvme0n1p9
Sep 4 17:22:19.434189 extend-filesystems[2053]: Checking size of /dev/nvme0n1p9
Sep 4 17:22:19.410899 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:22:19.413257 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:22:19.446896 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:22:19.453114 dbus-daemon[2050]: [system] SELinux support is enabled
Sep 4 17:22:19.459649 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:22:19.461420 dbus-daemon[2050]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1663 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 4 17:22:19.465498 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:22:19.490981 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:22:19.491305 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:22:19.506757 ntpd[2058]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:12:45 UTC 2024 (1): Starting
Sep 4 17:22:19.515561 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:12:45 UTC 2024 (1): Starting
Sep 4 17:22:19.515561 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 17:22:19.515561 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: ----------------------------------------------------
Sep 4 17:22:19.515561 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: ntp-4 is maintained by Network Time Foundation,
Sep 4 17:22:19.515561 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 17:22:19.515561 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: corporation. Support and training for ntp-4 are
Sep 4 17:22:19.515561 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: available at https://www.nwtime.org/support
Sep 4 17:22:19.515561 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: ----------------------------------------------------
Sep 4 17:22:19.515561 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: proto: precision = 0.086 usec (-23)
Sep 4 17:22:19.506788 ntpd[2058]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 17:22:19.506799 ntpd[2058]: ----------------------------------------------------
Sep 4 17:22:19.506810 ntpd[2058]: ntp-4 is maintained by Network Time Foundation,
Sep 4 17:22:19.506820 ntpd[2058]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 17:22:19.506829 ntpd[2058]: corporation. Support and training for ntp-4 are
Sep 4 17:22:19.506838 ntpd[2058]: available at https://www.nwtime.org/support
Sep 4 17:22:19.506847 ntpd[2058]: ----------------------------------------------------
Sep 4 17:22:19.509625 ntpd[2058]: proto: precision = 0.086 usec (-23)
Sep 4 17:22:19.519979 ntpd[2058]: basedate set to 2024-08-23
Sep 4 17:22:19.529847 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: basedate set to 2024-08-23
Sep 4 17:22:19.529847 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: gps base set to 2024-08-25 (week 2329)
Sep 4 17:22:19.520008 ntpd[2058]: gps base set to 2024-08-25 (week 2329)
Sep 4 17:22:19.531262 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:22:19.534688 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 17:22:19.534688 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 17:22:19.534688 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 17:22:19.534688 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: Listen normally on 3 eth0 172.31.27.203:123
Sep 4 17:22:19.534688 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: Listen normally on 4 lo [::1]:123
Sep 4 17:22:19.534688 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: Listen normally on 5 eth0 [fe80::4af:3dff:fe67:9fa5%2]:123
Sep 4 17:22:19.534688 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: Listening on routing socket on fd #22 for interface updates
Sep 4 17:22:19.533520 ntpd[2058]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 17:22:19.531638 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:22:19.535066 jq[2078]: true
Sep 4 17:22:19.533598 ntpd[2058]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 17:22:19.533809 ntpd[2058]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 17:22:19.533849 ntpd[2058]: Listen normally on 3 eth0 172.31.27.203:123
Sep 4 17:22:19.533893 ntpd[2058]: Listen normally on 4 lo [::1]:123
Sep 4 17:22:19.533938 ntpd[2058]: Listen normally on 5 eth0 [fe80::4af:3dff:fe67:9fa5%2]:123
Sep 4 17:22:19.534034 ntpd[2058]: Listening on routing socket on fd #22 for interface updates
Sep 4 17:22:19.557956 extend-filesystems[2053]: Resized partition /dev/nvme0n1p9
Sep 4 17:22:19.560686 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:22:19.560686 ntpd[2058]: 4 Sep 17:22:19 ntpd[2058]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:22:19.557764 ntpd[2058]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:22:19.557801 ntpd[2058]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:22:19.581721 extend-filesystems[2098]: resize2fs 1.47.0 (5-Feb-2023)
Sep 4 17:22:19.597398 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:22:19.597892 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:22:19.628345 jq[2097]: true
Sep 4 17:22:19.643899 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 4 17:22:19.687194 (ntainerd)[2104]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:22:19.734012 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:22:19.734960 update_engine[2076]: I0904 17:22:19.725265 2076 main.cc:92] Flatcar Update Engine starting
Sep 4 17:22:19.757143 systemd-logind[2069]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 4 17:22:19.757169 systemd-logind[2069]: Watching system buttons on /dev/input/event2 (Sleep Button)
Sep 4 17:22:19.757194 systemd-logind[2069]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 17:22:19.759768 systemd-logind[2069]: New seat seat0.
Sep 4 17:22:19.765959 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:22:19.768965 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:22:19.769037 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:22:19.772776 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:22:19.772810 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:22:19.778408 tar[2090]: linux-amd64/helm
Sep 4 17:22:19.790490 dbus-daemon[2050]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 17:22:19.808055 coreos-metadata[2049]: Sep 04 17:22:19.807 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 17:22:19.812962 update_engine[2076]: I0904 17:22:19.812448 2076 update_check_scheduler.cc:74] Next update check in 6m34s
Sep 4 17:22:19.819252 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 4 17:22:19.821558 coreos-metadata[2049]: Sep 04 17:22:19.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 4 17:22:19.829820 coreos-metadata[2049]: Sep 04 17:22:19.825 INFO Fetch successful
Sep 4 17:22:19.829820 coreos-metadata[2049]: Sep 04 17:22:19.825 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 4 17:22:19.830352 coreos-metadata[2049]: Sep 04 17:22:19.830 INFO Fetch successful
Sep 4 17:22:19.832334 coreos-metadata[2049]: Sep 04 17:22:19.832 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 4 17:22:19.835621 coreos-metadata[2049]: Sep 04 17:22:19.835 INFO Fetch successful
Sep 4 17:22:19.835621 coreos-metadata[2049]: Sep 04 17:22:19.835 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 4 17:22:19.847415 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 4 17:22:19.850344 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:22:19.852909 coreos-metadata[2049]: Sep 04 17:22:19.852 INFO Fetch successful
Sep 4 17:22:19.853057 coreos-metadata[2049]: Sep 04 17:22:19.853 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 4 17:22:19.862936 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 4 17:22:19.872106 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:22:19.875339 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 4 17:22:19.883559 coreos-metadata[2049]: Sep 04 17:22:19.882 INFO Fetch failed with 404: resource not found
Sep 4 17:22:19.883559 coreos-metadata[2049]: Sep 04 17:22:19.882 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 4 17:22:19.913125 coreos-metadata[2049]: Sep 04 17:22:19.885 INFO Fetch successful
Sep 4 17:22:19.913125 coreos-metadata[2049]: Sep 04 17:22:19.887 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 4 17:22:19.913125 coreos-metadata[2049]: Sep 04 17:22:19.892 INFO Fetch successful
Sep 4 17:22:19.913125 coreos-metadata[2049]: Sep 04 17:22:19.896 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 4 17:22:19.913125 coreos-metadata[2049]: Sep 04 17:22:19.904 INFO Fetch successful
Sep 4 17:22:19.913125 coreos-metadata[2049]: Sep 04 17:22:19.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 4 17:22:19.913125 coreos-metadata[2049]: Sep 04 17:22:19.906 INFO Fetch successful
Sep 4 17:22:19.913125 coreos-metadata[2049]: Sep 04 17:22:19.906 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 4 17:22:19.883657 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:22:19.915566 coreos-metadata[2049]: Sep 04 17:22:19.913 INFO Fetch successful
Sep 4 17:22:19.921788 extend-filesystems[2098]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 4 17:22:19.921788 extend-filesystems[2098]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 17:22:19.921788 extend-filesystems[2098]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 4 17:22:19.933779 extend-filesystems[2053]: Resized filesystem in /dev/nvme0n1p9
Sep 4 17:22:19.979195 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:22:19.979646 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:22:19.994655 bash[2150]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:22:20.004737 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:22:20.037944 systemd[1]: Starting sshkeys.service...
Sep 4 17:22:20.103842 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 17:22:20.118012 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 17:22:20.120350 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 17:22:20.132009 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:22:20.337575 amazon-ssm-agent[2132]: Initializing new seelog logger
Sep 4 17:22:20.337575 amazon-ssm-agent[2132]: New Seelog Logger Creation Complete
Sep 4 17:22:20.337575 amazon-ssm-agent[2132]: 2024/09/04 17:22:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:22:20.337575 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:22:20.337575 amazon-ssm-agent[2132]: 2024/09/04 17:22:20 processing appconfig overrides
Sep 4 17:22:20.338316 amazon-ssm-agent[2132]: 2024/09/04 17:22:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:22:20.338316 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:22:20.338316 amazon-ssm-agent[2132]: 2024/09/04 17:22:20 processing appconfig overrides
Sep 4 17:22:20.342819 amazon-ssm-agent[2132]: 2024/09/04 17:22:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:22:20.342819 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:22:20.345756 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO Proxy environment variables:
Sep 4 17:22:20.346325 amazon-ssm-agent[2132]: 2024/09/04 17:22:20 processing appconfig overrides
Sep 4 17:22:20.360563 amazon-ssm-agent[2132]: 2024/09/04 17:22:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:22:20.360563 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:22:20.360563 amazon-ssm-agent[2132]: 2024/09/04 17:22:20 processing appconfig overrides
Sep 4 17:22:20.389594 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2149)
Sep 4 17:22:20.447971 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO no_proxy:
Sep 4 17:22:20.510992 dbus-daemon[2050]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 4 17:22:20.511297 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 4 17:22:20.519417 dbus-daemon[2050]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2126 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 4 17:22:20.534719 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 4 17:22:20.552277 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO https_proxy:
Sep 4 17:22:20.589502 coreos-metadata[2170]: Sep 04 17:22:20.589 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 17:22:20.596381 coreos-metadata[2170]: Sep 04 17:22:20.596 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 4 17:22:20.599465 coreos-metadata[2170]: Sep 04 17:22:20.598 INFO Fetch successful
Sep 4 17:22:20.599465 coreos-metadata[2170]: Sep 04 17:22:20.598 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 17:22:20.601469 coreos-metadata[2170]: Sep 04 17:22:20.601 INFO Fetch successful
Sep 4 17:22:20.608956 unknown[2170]: wrote ssh authorized keys file for user: core
Sep 4 17:22:20.613282 polkitd[2211]: Started polkitd version 121
Sep 4 17:22:20.653582 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO http_proxy:
Sep 4 17:22:20.687355 polkitd[2211]: Loading rules from directory /etc/polkit-1/rules.d
Sep 4 17:22:20.687454 polkitd[2211]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 4 17:22:20.699665 polkitd[2211]: Finished loading, compiling and executing 2 rules
Sep 4 17:22:20.706812 dbus-daemon[2050]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 4 17:22:20.707112 systemd[1]: Started polkit.service - Authorization Manager.
Sep 4 17:22:20.708357 update-ssh-keys[2227]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:22:20.710718 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 17:22:20.735341 polkitd[2211]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 4 17:22:20.739435 systemd[1]: Finished sshkeys.service.
Sep 4 17:22:20.753059 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 17:22:20.817092 locksmithd[2137]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:22:20.841805 systemd-hostnamed[2126]: Hostname set to (transient)
Sep 4 17:22:20.845316 systemd-resolved[1990]: System hostname changed to 'ip-172-31-27-203'.
Sep 4 17:22:20.859040 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO Checking if agent identity type EC2 can be assumed
Sep 4 17:22:20.969808 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO Agent will take identity from EC2
Sep 4 17:22:21.060835 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:22:21.160664 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:22:21.201603 containerd[2104]: time="2024-09-04T17:22:21.201325614Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 17:22:21.261391 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:22:21.360546 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 4 17:22:21.436661 containerd[2104]: time="2024-09-04T17:22:21.436590088Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:22:21.439465 containerd[2104]: time="2024-09-04T17:22:21.436843626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:22:21.444655 containerd[2104]: time="2024-09-04T17:22:21.443865105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:22:21.444655 containerd[2104]: time="2024-09-04T17:22:21.443921090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:22:21.444655 containerd[2104]: time="2024-09-04T17:22:21.444271756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:22:21.444655 containerd[2104]: time="2024-09-04T17:22:21.444295790Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:22:21.444655 containerd[2104]: time="2024-09-04T17:22:21.444414209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:22:21.444655 containerd[2104]: time="2024-09-04T17:22:21.444477675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:22:21.445154 containerd[2104]: time="2024-09-04T17:22:21.445126422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:22:21.445440 containerd[2104]: time="2024-09-04T17:22:21.445416819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:22:21.449138 containerd[2104]: time="2024-09-04T17:22:21.448618174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:22:21.449138 containerd[2104]: time="2024-09-04T17:22:21.448655833Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 17:22:21.449138 containerd[2104]: time="2024-09-04T17:22:21.448673930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:22:21.449138 containerd[2104]: time="2024-09-04T17:22:21.448931757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:22:21.449138 containerd[2104]: time="2024-09-04T17:22:21.448951957Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:22:21.449138 containerd[2104]: time="2024-09-04T17:22:21.449029532Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 17:22:21.449138 containerd[2104]: time="2024-09-04T17:22:21.449103923Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:22:21.460130 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Sep 4 17:22:21.460258 containerd[2104]: time="2024-09-04T17:22:21.459962591Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:22:21.460258 containerd[2104]: time="2024-09-04T17:22:21.460017030Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:22:21.460258 containerd[2104]: time="2024-09-04T17:22:21.460036931Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:22:21.460258 containerd[2104]: time="2024-09-04T17:22:21.460083390Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:22:21.460258 containerd[2104]: time="2024-09-04T17:22:21.460104487Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462452846Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462510660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462718852Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462743341Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462762803Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462785453Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462807970Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462835789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462855917Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..."
type=io.containerd.service.v1 Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462876165Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462897535Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462917927Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462937632Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:22:21.463721 containerd[2104]: time="2024-09-04T17:22:21.462958364Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:22:21.464992 containerd[2104]: time="2024-09-04T17:22:21.463080121Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470614111Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470705012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470740120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470782493Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470859430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470885144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470910262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470933478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470958286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.470983445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.471007072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.471030188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.471055582Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:22:21.473553 containerd[2104]: time="2024-09-04T17:22:21.471248792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.475386 containerd[2104]: time="2024-09-04T17:22:21.471278028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 4 17:22:21.475386 containerd[2104]: time="2024-09-04T17:22:21.471302841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.475386 containerd[2104]: time="2024-09-04T17:22:21.471392975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.475386 containerd[2104]: time="2024-09-04T17:22:21.471421385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.475386 containerd[2104]: time="2024-09-04T17:22:21.471449478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.475386 containerd[2104]: time="2024-09-04T17:22:21.471474345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:22:21.475386 containerd[2104]: time="2024-09-04T17:22:21.471498922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:22:21.475679 containerd[2104]: time="2024-09-04T17:22:21.472887895Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:22:21.475679 containerd[2104]: time="2024-09-04T17:22:21.472996166Z" level=info msg="Connect containerd service" Sep 4 17:22:21.475679 containerd[2104]: time="2024-09-04T17:22:21.473054936Z" level=info msg="using legacy CRI server" Sep 4 17:22:21.475679 containerd[2104]: time="2024-09-04T17:22:21.473073373Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:22:21.475679 containerd[2104]: time="2024-09-04T17:22:21.473311440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:22:21.482361 containerd[2104]: time="2024-09-04T17:22:21.482316328Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:22:21.482777 containerd[2104]: time="2024-09-04T17:22:21.482654686Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:22:21.482777 containerd[2104]: time="2024-09-04T17:22:21.482737478Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:22:21.482777 containerd[2104]: time="2024-09-04T17:22:21.482756063Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:22:21.483504 containerd[2104]: time="2024-09-04T17:22:21.482967194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:22:21.483504 containerd[2104]: time="2024-09-04T17:22:21.482699239Z" level=info msg="Start subscribing containerd event" Sep 4 17:22:21.483504 containerd[2104]: time="2024-09-04T17:22:21.483065610Z" level=info msg="Start recovering state" Sep 4 17:22:21.483504 containerd[2104]: time="2024-09-04T17:22:21.483151167Z" level=info msg="Start event monitor" Sep 4 17:22:21.483504 containerd[2104]: time="2024-09-04T17:22:21.483171155Z" level=info msg="Start snapshots syncer" Sep 4 17:22:21.483504 containerd[2104]: time="2024-09-04T17:22:21.483183135Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:22:21.483504 containerd[2104]: time="2024-09-04T17:22:21.483193263Z" level=info msg="Start streaming server" Sep 4 17:22:21.485001 containerd[2104]: time="2024-09-04T17:22:21.484730749Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:22:21.485001 containerd[2104]: time="2024-09-04T17:22:21.484790038Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:22:21.487113 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:22:21.488179 containerd[2104]: time="2024-09-04T17:22:21.487480747Z" level=info msg="containerd successfully booted in 0.307474s" Sep 4 17:22:21.560967 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 17:22:21.563199 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Sep 4 17:22:21.563199 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO [Registrar] Starting registrar module Sep 4 17:22:21.563199 amazon-ssm-agent[2132]: 2024-09-04 17:22:20 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 4 17:22:21.563199 amazon-ssm-agent[2132]: 2024-09-04 17:22:21 INFO [EC2Identity] EC2 registration was successful. Sep 4 17:22:21.563199 amazon-ssm-agent[2132]: 2024-09-04 17:22:21 INFO [CredentialRefresher] credentialRefresher has started Sep 4 17:22:21.563199 amazon-ssm-agent[2132]: 2024-09-04 17:22:21 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 17:22:21.565680 amazon-ssm-agent[2132]: 2024-09-04 17:22:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 17:22:21.588033 sshd_keygen[2096]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:22:21.654935 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:22:21.663881 amazon-ssm-agent[2132]: 2024-09-04 17:22:21 INFO [CredentialRefresher] Next credential rotation will be in 30.874931219383335 minutes Sep 4 17:22:21.669920 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:22:21.688993 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:22:21.689433 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:22:21.704976 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:22:21.745221 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:22:21.764546 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:22:21.778958 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:22:21.781955 systemd[1]: Reached target getty.target - Login Prompts. 
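The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a node where no CNI plugin has been installed yet; the CRI plugin's conf syncer retries once a config file appears in that directory. For reference, a minimal bridge conflist of the shape the loader accepts looks like the following sketch (the name, bridge device, and subnet here are hypothetical illustration values, not taken from this host):

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Dropped into /etc/cni/net.d (e.g. as 10-examplenet.conflist), a file like this would satisfy the loader; on a Kubernetes node the CNI plugin's own installer normally writes it.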
Sep 4 17:22:21.971023 tar[2090]: linux-amd64/LICENSE
Sep 4 17:22:21.971420 tar[2090]: linux-amd64/README.md
Sep 4 17:22:21.989181 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:22:22.611464 amazon-ssm-agent[2132]: 2024-09-04 17:22:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 17:22:22.711816 amazon-ssm-agent[2132]: 2024-09-04 17:22:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2332) started
Sep 4 17:22:22.813705 amazon-ssm-agent[2132]: 2024-09-04 17:22:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 17:22:22.824672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:22:22.835593 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:22:22.843988 systemd[1]: Startup finished in 9.051s (kernel) + 9.969s (userspace) = 19.020s.
Sep 4 17:22:23.009197 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:22:24.137733 kubelet[2346]: E0904 17:22:24.137559 2346 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:22:24.140640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:22:24.140914 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:22:26.749230 systemd-resolved[1990]: Clock change detected. Flushing caches.
Sep 4 17:22:27.399789 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:22:27.412201 systemd[1]: Started sshd@0-172.31.27.203:22-139.178.68.195:53270.service - OpenSSH per-connection server daemon (139.178.68.195:53270).
Sep 4 17:22:27.600230 sshd[2363]: Accepted publickey for core from 139.178.68.195 port 53270 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:22:27.603096 sshd[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:22:27.613551 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:22:27.623147 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:22:27.627716 systemd-logind[2069]: New session 1 of user core.
Sep 4 17:22:27.667276 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:22:27.676561 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:22:27.685673 (systemd)[2369]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:22:27.879991 systemd[2369]: Queued start job for default target default.target.
Sep 4 17:22:27.880551 systemd[2369]: Created slice app.slice - User Application Slice.
Sep 4 17:22:27.880590 systemd[2369]: Reached target paths.target - Paths.
Sep 4 17:22:27.880609 systemd[2369]: Reached target timers.target - Timers.
Sep 4 17:22:27.893905 systemd[2369]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:22:27.929241 systemd[2369]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:22:27.929326 systemd[2369]: Reached target sockets.target - Sockets.
Sep 4 17:22:27.929346 systemd[2369]: Reached target basic.target - Basic System.
Sep 4 17:22:27.929398 systemd[2369]: Reached target default.target - Main User Target.
Sep 4 17:22:27.929435 systemd[2369]: Startup finished in 215ms.
Sep 4 17:22:27.930190 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:22:27.938976 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:22:28.104045 systemd[1]: Started sshd@1-172.31.27.203:22-139.178.68.195:53272.service - OpenSSH per-connection server daemon (139.178.68.195:53272).
Sep 4 17:22:28.268546 sshd[2381]: Accepted publickey for core from 139.178.68.195 port 53272 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:22:28.270489 sshd[2381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:22:28.275842 systemd-logind[2069]: New session 2 of user core.
Sep 4 17:22:28.283188 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:22:28.403166 sshd[2381]: pam_unix(sshd:session): session closed for user core
Sep 4 17:22:28.407052 systemd[1]: sshd@1-172.31.27.203:22-139.178.68.195:53272.service: Deactivated successfully.
Sep 4 17:22:28.413430 systemd-logind[2069]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:22:28.414455 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:22:28.416029 systemd-logind[2069]: Removed session 2.
Sep 4 17:22:28.431156 systemd[1]: Started sshd@2-172.31.27.203:22-139.178.68.195:53278.service - OpenSSH per-connection server daemon (139.178.68.195:53278).
Sep 4 17:22:28.590166 sshd[2389]: Accepted publickey for core from 139.178.68.195 port 53278 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:22:28.592307 sshd[2389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:22:28.598136 systemd-logind[2069]: New session 3 of user core.
Sep 4 17:22:28.607153 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:22:28.724146 sshd[2389]: pam_unix(sshd:session): session closed for user core
Sep 4 17:22:28.728693 systemd[1]: sshd@2-172.31.27.203:22-139.178.68.195:53278.service: Deactivated successfully.
Sep 4 17:22:28.733359 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:22:28.734139 systemd-logind[2069]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:22:28.735491 systemd-logind[2069]: Removed session 3.
Sep 4 17:22:28.756381 systemd[1]: Started sshd@3-172.31.27.203:22-139.178.68.195:53288.service - OpenSSH per-connection server daemon (139.178.68.195:53288).
Sep 4 17:22:28.941648 sshd[2397]: Accepted publickey for core from 139.178.68.195 port 53288 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:22:28.943292 sshd[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:22:28.950689 systemd-logind[2069]: New session 4 of user core.
Sep 4 17:22:28.957090 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:22:29.092673 sshd[2397]: pam_unix(sshd:session): session closed for user core
Sep 4 17:22:29.098906 systemd[1]: sshd@3-172.31.27.203:22-139.178.68.195:53288.service: Deactivated successfully.
Sep 4 17:22:29.105391 systemd-logind[2069]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:22:29.106729 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:22:29.110825 systemd-logind[2069]: Removed session 4.
Sep 4 17:22:29.119178 systemd[1]: Started sshd@4-172.31.27.203:22-139.178.68.195:53298.service - OpenSSH per-connection server daemon (139.178.68.195:53298).
Sep 4 17:22:29.287928 sshd[2405]: Accepted publickey for core from 139.178.68.195 port 53298 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:22:29.289974 sshd[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:22:29.297653 systemd-logind[2069]: New session 5 of user core.
Sep 4 17:22:29.304683 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:22:29.445496 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:22:29.446451 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:22:29.466818 sudo[2409]: pam_unix(sudo:session): session closed for user root
Sep 4 17:22:29.490543 sshd[2405]: pam_unix(sshd:session): session closed for user core
Sep 4 17:22:29.496829 systemd-logind[2069]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:22:29.497851 systemd[1]: sshd@4-172.31.27.203:22-139.178.68.195:53298.service: Deactivated successfully.
Sep 4 17:22:29.502727 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:22:29.503958 systemd-logind[2069]: Removed session 5.
Sep 4 17:22:29.525566 systemd[1]: Started sshd@5-172.31.27.203:22-139.178.68.195:53306.service - OpenSSH per-connection server daemon (139.178.68.195:53306).
Sep 4 17:22:29.716679 sshd[2414]: Accepted publickey for core from 139.178.68.195 port 53306 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:22:29.721562 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:22:29.736377 systemd-logind[2069]: New session 6 of user core.
Sep 4 17:22:29.745273 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:22:29.871945 sudo[2419]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:22:29.872321 sudo[2419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:22:29.876311 sudo[2419]: pam_unix(sudo:session): session closed for user root
Sep 4 17:22:29.883062 sudo[2418]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:22:29.883428 sudo[2418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:22:29.901336 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:22:29.904144 auditctl[2422]: No rules
Sep 4 17:22:29.904695 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:22:29.904979 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:22:29.913251 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:22:29.944202 augenrules[2441]: No rules
Sep 4 17:22:29.951086 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:22:29.956509 sudo[2418]: pam_unix(sudo:session): session closed for user root
Sep 4 17:22:29.980840 sshd[2414]: pam_unix(sshd:session): session closed for user core
Sep 4 17:22:29.985790 systemd-logind[2069]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:22:29.987274 systemd[1]: sshd@5-172.31.27.203:22-139.178.68.195:53306.service: Deactivated successfully.
Sep 4 17:22:29.991586 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:22:29.992690 systemd-logind[2069]: Removed session 6.
Sep 4 17:22:30.007140 systemd[1]: Started sshd@6-172.31.27.203:22-139.178.68.195:53312.service - OpenSSH per-connection server daemon (139.178.68.195:53312).
Sep 4 17:22:30.166814 sshd[2450]: Accepted publickey for core from 139.178.68.195 port 53312 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:22:30.167494 sshd[2450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:22:30.184870 systemd-logind[2069]: New session 7 of user core.
Sep 4 17:22:30.194005 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:22:30.309207 sudo[2454]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:22:30.309595 sudo[2454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:22:30.658179 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:22:30.658425 (dockerd)[2463]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:22:31.358956 dockerd[2463]: time="2024-09-04T17:22:31.358891321Z" level=info msg="Starting up"
Sep 4 17:22:32.182076 dockerd[2463]: time="2024-09-04T17:22:32.182023265Z" level=info msg="Loading containers: start."
Sep 4 17:22:32.454155 kernel: Initializing XFRM netlink socket
Sep 4 17:22:32.489244 (udev-worker)[2475]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:22:32.553941 systemd-networkd[1663]: docker0: Link UP
Sep 4 17:22:32.570458 dockerd[2463]: time="2024-09-04T17:22:32.570407193Z" level=info msg="Loading containers: done."
Sep 4 17:22:32.712415 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck792795964-merged.mount: Deactivated successfully.
Sep 4 17:22:32.715155 dockerd[2463]: time="2024-09-04T17:22:32.715103928Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:22:32.715447 dockerd[2463]: time="2024-09-04T17:22:32.715418240Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:22:32.715693 dockerd[2463]: time="2024-09-04T17:22:32.715658481Z" level=info msg="Daemon has completed initialization"
Sep 4 17:22:32.755212 dockerd[2463]: time="2024-09-04T17:22:32.755068550Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:22:32.756261 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:22:34.067643 containerd[2104]: time="2024-09-04T17:22:34.067146485Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\""
Sep 4 17:22:34.516787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:22:34.523660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:22:34.916798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:22:34.933869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092624240.mount: Deactivated successfully.
Sep 4 17:22:34.936362 (kubelet)[2616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:22:35.052691 kubelet[2616]: E0904 17:22:35.052612 2616 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:22:35.057535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:22:35.057796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
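Both kubelet start attempts above exit the same way: `/var/lib/kubelet/config.yaml` does not exist yet. That file is normally written during node bootstrap (for example by `kubeadm init` or `kubeadm join`), so these failures are expected until bootstrap completes. For reference, a minimal KubeletConfiguration of the shape kubelet loads from that path looks like the following sketch (field values here are illustrative assumptions, not this node's eventual config):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
```

Once a valid file exists at /var/lib/kubelet/config.yaml, the same unit start that fails above succeeds without any change to the service definition.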
Sep 4 17:22:38.560464 containerd[2104]: time="2024-09-04T17:22:38.560407052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:38.561995 containerd[2104]: time="2024-09-04T17:22:38.561787636Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530735"
Sep 4 17:22:38.565471 containerd[2104]: time="2024-09-04T17:22:38.564001543Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:38.567789 containerd[2104]: time="2024-09-04T17:22:38.567330661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:38.569702 containerd[2104]: time="2024-09-04T17:22:38.568899065Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 4.501706949s"
Sep 4 17:22:38.569702 containerd[2104]: time="2024-09-04T17:22:38.568949691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\""
Sep 4 17:22:38.603484 containerd[2104]: time="2024-09-04T17:22:38.603445801Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\""
Sep 4 17:22:41.976395 containerd[2104]: time="2024-09-04T17:22:41.976347873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:41.977939 containerd[2104]: time="2024-09-04T17:22:41.977865619Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849709"
Sep 4 17:22:41.980534 containerd[2104]: time="2024-09-04T17:22:41.978846822Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:41.982294 containerd[2104]: time="2024-09-04T17:22:41.982210860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:41.984163 containerd[2104]: time="2024-09-04T17:22:41.984117278Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 3.380633224s"
Sep 4 17:22:41.984253 containerd[2104]: time="2024-09-04T17:22:41.984161053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\""
Sep 4 17:22:42.011655 containerd[2104]: time="2024-09-04T17:22:42.011618862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\""
Sep 4 17:22:44.374434 containerd[2104]: time="2024-09-04T17:22:44.374380279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:44.375914 containerd[2104]: time="2024-09-04T17:22:44.375864621Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097777"
Sep 4 17:22:44.376888 containerd[2104]: time="2024-09-04T17:22:44.376853433Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:44.381207 containerd[2104]: time="2024-09-04T17:22:44.381137552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:44.382501 containerd[2104]: time="2024-09-04T17:22:44.382309667Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 2.370643875s"
Sep 4 17:22:44.382501 containerd[2104]: time="2024-09-04T17:22:44.382355560Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\""
Sep 4 17:22:44.410911 containerd[2104]: time="2024-09-04T17:22:44.410872518Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\""
Sep 4 17:22:45.153336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:22:45.164072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:22:45.458990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:22:45.471574 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:22:45.575566 kubelet[2712]: E0904 17:22:45.575253 2712 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:22:45.579650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:22:45.579940 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:22:45.841887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719975985.mount: Deactivated successfully.
Sep 4 17:22:46.354557 containerd[2104]: time="2024-09-04T17:22:46.354503175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:46.355739 containerd[2104]: time="2024-09-04T17:22:46.355608826Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303449"
Sep 4 17:22:46.356875 containerd[2104]: time="2024-09-04T17:22:46.356840608Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:46.359844 containerd[2104]: time="2024-09-04T17:22:46.359787009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:46.360915 containerd[2104]: time="2024-09-04T17:22:46.360456646Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 1.949534308s"
Sep 4 17:22:46.360915 containerd[2104]: time="2024-09-04T17:22:46.360557553Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\""
Sep 4 17:22:46.395629 containerd[2104]: time="2024-09-04T17:22:46.395540192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:22:46.982414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977945826.mount: Deactivated successfully.
Sep 4 17:22:46.987686 containerd[2104]: time="2024-09-04T17:22:46.987637379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:46.988745 containerd[2104]: time="2024-09-04T17:22:46.988697901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Sep 4 17:22:46.992219 containerd[2104]: time="2024-09-04T17:22:46.992159304Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:46.995427 containerd[2104]: time="2024-09-04T17:22:46.995329212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:46.996530 containerd[2104]: time="2024-09-04T17:22:46.996242504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 600.601885ms"
Sep 4 17:22:46.996530 containerd[2104]: time="2024-09-04T17:22:46.996288115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep 4 17:22:47.030133 containerd[2104]: time="2024-09-04T17:22:47.030096317Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep 4 17:22:47.540787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3125671119.mount: Deactivated successfully.
Sep 4 17:22:50.954333 containerd[2104]: time="2024-09-04T17:22:50.954190361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:50.955207 containerd[2104]: time="2024-09-04T17:22:50.955021122Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Sep 4 17:22:50.957147 containerd[2104]: time="2024-09-04T17:22:50.956930960Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:50.962693 containerd[2104]: time="2024-09-04T17:22:50.962274146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:50.963810 containerd[2104]: time="2024-09-04T17:22:50.963748694Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.933610362s"
Sep 4 17:22:50.963928 containerd[2104]: time="2024-09-04T17:22:50.963819849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Sep 4 17:22:51.058990 containerd[2104]: time="2024-09-04T17:22:51.058943432Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep 4 17:22:51.116806 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 4 17:22:51.711570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382567604.mount: Deactivated successfully.
Sep 4 17:22:52.642965 containerd[2104]: time="2024-09-04T17:22:52.642913333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:52.645255 containerd[2104]: time="2024-09-04T17:22:52.644930634Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Sep 4 17:22:52.648097 containerd[2104]: time="2024-09-04T17:22:52.646549411Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:52.649786 containerd[2104]: time="2024-09-04T17:22:52.649308619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:22:52.650397 containerd[2104]: time="2024-09-04T17:22:52.650212143Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.59121981s"
Sep 4 17:22:52.650397 containerd[2104]: time="2024-09-04T17:22:52.650260338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Sep 4 17:22:55.766888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 17:22:55.777686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:22:56.044002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:22:56.057323 (kubelet)[2874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:22:56.134168 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:22:56.137972 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 17:22:56.138397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:22:56.159281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:22:56.184990 systemd[1]: Reloading requested from client PID 2890 ('systemctl') (unit session-7.scope)...
Sep 4 17:22:56.185161 systemd[1]: Reloading...
Sep 4 17:22:56.353188 zram_generator::config[2928]: No configuration found.
Sep 4 17:22:56.549257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:22:56.718476 systemd[1]: Reloading finished in 532 ms.
Sep 4 17:22:56.780983 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 17:22:56.781113 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 17:22:56.781694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:22:56.789119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:22:56.976528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:22:56.992269 (kubelet)[3000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:22:57.053930 kubelet[3000]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:22:57.053930 kubelet[3000]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:22:57.053930 kubelet[3000]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:22:57.054437 kubelet[3000]: I0904 17:22:57.054002 3000 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:22:57.765968 kubelet[3000]: I0904 17:22:57.765931 3000 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep 4 17:22:57.765968 kubelet[3000]: I0904 17:22:57.765961 3000 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:22:57.766346 kubelet[3000]: I0904 17:22:57.766323 3000 server.go:895] "Client rotation is on, will bootstrap in background"
Sep 4 17:22:57.843475 kubelet[3000]: I0904 17:22:57.838581 3000 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:22:57.843944 kubelet[3000]: E0904 17:22:57.843914 3000 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.203:6443: connect: connection refused
Sep 4 17:22:57.874458 kubelet[3000]: I0904 17:22:57.874416 3000 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:22:57.876735 kubelet[3000]: I0904 17:22:57.876602 3000 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:22:57.877319 kubelet[3000]: I0904 17:22:57.877290 3000 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:22:57.878015 kubelet[3000]: I0904 17:22:57.877989 3000 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:22:57.878098 kubelet[3000]: I0904 17:22:57.878019 3000 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:22:57.880301 kubelet[3000]: I0904 17:22:57.880270 3000 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:22:57.882104 kubelet[3000]: I0904 17:22:57.882079 3000 kubelet.go:393] "Attempting to sync node with API server"
Sep 4 17:22:57.882206 kubelet[3000]: I0904 17:22:57.882111 3000 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:22:57.882206 kubelet[3000]: I0904 17:22:57.882148 3000 kubelet.go:309] "Adding apiserver pod source"
Sep 4 17:22:57.882206 kubelet[3000]: I0904 17:22:57.882168 3000 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:22:57.886805 kubelet[3000]: I0904 17:22:57.885050 3000 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep 4 17:22:57.889915 kubelet[3000]: W0904 17:22:57.889856 3000 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.27.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-203&limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused
Sep 4 17:22:57.890103 kubelet[3000]: E0904 17:22:57.890091 3000 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-203&limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused
Sep 4 17:22:57.890273 kubelet[3000]: W0904 17:22:57.890243 3000 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.27.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused
Sep 4 17:22:57.890356 kubelet[3000]: E0904 17:22:57.890348 3000 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused
Sep 4 17:22:57.890690 kubelet[3000]: W0904 17:22:57.890677 3000 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:22:57.891465 kubelet[3000]: I0904 17:22:57.891446 3000 server.go:1232] "Started kubelet"
Sep 4 17:22:57.891742 kubelet[3000]: I0904 17:22:57.891717 3000 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:22:57.892493 kubelet[3000]: I0904 17:22:57.892464 3000 server.go:462] "Adding debug handlers to kubelet server"
Sep 4 17:22:57.897711 kubelet[3000]: I0904 17:22:57.895921 3000 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 4 17:22:57.897711 kubelet[3000]: I0904 17:22:57.896209 3000 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:22:57.897711 kubelet[3000]: I0904 17:22:57.896889 3000 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:22:57.898268 kubelet[3000]: E0904 17:22:57.898156 3000 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-27-203.17f21a59419b0fbf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-27-203", UID:"ip-172-31-27-203", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-27-203"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 22, 57, 891413951, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 22, 57, 891413951, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-27-203"}': 'Post "https://172.31.27.203:6443/api/v1/namespaces/default/events": dial tcp 172.31.27.203:6443: connect: connection refused'(may retry after sleeping)
Sep 4 17:22:57.898852 kubelet[3000]: E0904 17:22:57.898832 3000 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep 4 17:22:57.900482 kubelet[3000]: E0904 17:22:57.899819 3000 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:22:57.903161 kubelet[3000]: E0904 17:22:57.902829 3000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-27-203\" not found"
Sep 4 17:22:57.903161 kubelet[3000]: I0904 17:22:57.902866 3000 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:22:57.903161 kubelet[3000]: I0904 17:22:57.903024 3000 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:22:57.903161 kubelet[3000]: I0904 17:22:57.903146 3000 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:22:57.903863 kubelet[3000]: W0904 17:22:57.903820 3000 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.27.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused
Sep 4 17:22:57.903945 kubelet[3000]: E0904 17:22:57.903877 3000 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused
Sep 4 17:22:57.905018 kubelet[3000]: E0904 17:22:57.904733 3000 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-203?timeout=10s\": dial tcp 172.31.27.203:6443: connect: connection refused" interval="200ms"
Sep 4 17:22:57.939953 kubelet[3000]: I0904 17:22:57.939925 3000 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:22:57.943745 kubelet[3000]: I0904 17:22:57.943343 3000 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:22:57.943745 kubelet[3000]: I0904 17:22:57.943373 3000 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:22:57.943745 kubelet[3000]: I0904 17:22:57.943397 3000 kubelet.go:2303] "Starting kubelet main sync loop"
Sep 4 17:22:57.943745 kubelet[3000]: E0904 17:22:57.943452 3000 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:22:57.959126 kubelet[3000]: W0904 17:22:57.959079 3000 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.27.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused
Sep 4 17:22:57.962377 kubelet[3000]: E0904 17:22:57.961230 3000 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused
Sep 4 17:22:57.978925 kubelet[3000]: I0904 17:22:57.978893 3000 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:22:57.979089 kubelet[3000]: I0904 17:22:57.979079 3000 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:22:57.979158 kubelet[3000]: I0904 17:22:57.979151 3000 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:22:57.981365 kubelet[3000]: I0904 17:22:57.981345 3000 policy_none.go:49] "None policy: Start"
Sep 4 17:22:57.982280 kubelet[3000]: I0904 17:22:57.982257 3000 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 4 17:22:57.982391 kubelet[3000]: I0904 17:22:57.982299 3000 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:22:57.989416 kubelet[3000]: I0904 17:22:57.989387 3000 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:22:57.990166 kubelet[3000]: I0904 17:22:57.990147 3000 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:22:57.992430 kubelet[3000]: E0904 17:22:57.992395 3000 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-203\" not found"
Sep 4 17:22:58.005098 kubelet[3000]: I0904 17:22:58.005044 3000 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-27-203"
Sep 4 17:22:58.005428 kubelet[3000]: E0904 17:22:58.005407 3000 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.27.203:6443/api/v1/nodes\": dial tcp 172.31.27.203:6443: connect: connection refused" node="ip-172-31-27-203"
Sep 4 17:22:58.044098 kubelet[3000]: I0904 17:22:58.043888 3000 topology_manager.go:215] "Topology Admit Handler" podUID="14b77d526d7de00d975320faeeb15c3e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-203"
Sep 4 17:22:58.045677 kubelet[3000]: I0904 17:22:58.045651 3000 topology_manager.go:215] "Topology Admit Handler" podUID="efdb39a9051e3bc464a24a1515d8ee81" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-203"
Sep 4 17:22:58.053375 kubelet[3000]: I0904 17:22:58.052458 3000 topology_manager.go:215] "Topology Admit Handler" podUID="f130210c95a76821e95582f614a5c728" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-203"
Sep 4 17:22:58.103805 kubelet[3000]: I0904 17:22:58.103750 3000 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14b77d526d7de00d975320faeeb15c3e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-203\" (UID: \"14b77d526d7de00d975320faeeb15c3e\") " pod="kube-system/kube-apiserver-ip-172-31-27-203"
Sep 4 17:22:58.104790 kubelet[3000]: I0904 17:22:58.103843 3000 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203"
Sep 4 17:22:58.104790 kubelet[3000]: I0904 17:22:58.103875 3000 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203"
Sep 4 17:22:58.104790 kubelet[3000]: I0904 17:22:58.103906 3000 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203"
Sep 4 17:22:58.104790 kubelet[3000]: I0904 17:22:58.103935 3000 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f130210c95a76821e95582f614a5c728-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-203\" (UID: \"f130210c95a76821e95582f614a5c728\") " pod="kube-system/kube-scheduler-ip-172-31-27-203"
Sep 4 17:22:58.104790 kubelet[3000]: I0904 17:22:58.104285 3000 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14b77d526d7de00d975320faeeb15c3e-ca-certs\") pod \"kube-apiserver-ip-172-31-27-203\" (UID: \"14b77d526d7de00d975320faeeb15c3e\") " pod="kube-system/kube-apiserver-ip-172-31-27-203"
Sep 4 17:22:58.105214 kubelet[3000]: I0904 17:22:58.104360 3000 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14b77d526d7de00d975320faeeb15c3e-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-203\" (UID: \"14b77d526d7de00d975320faeeb15c3e\") " pod="kube-system/kube-apiserver-ip-172-31-27-203"
Sep 4 17:22:58.105214 kubelet[3000]: I0904 17:22:58.104395 3000 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203"
Sep 4 17:22:58.105214 kubelet[3000]: I0904 17:22:58.104443 3000 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203"
Sep 4 17:22:58.105631 kubelet[3000]: E0904 17:22:58.105607 3000 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-203?timeout=10s\": dial tcp 172.31.27.203:6443: connect: connection refused" interval="400ms"
Sep 4 17:22:58.208024 kubelet[3000]: I0904 17:22:58.207989 3000 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-27-203"
Sep 4 17:22:58.208367 kubelet[3000]: E0904 17:22:58.208341 3000 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.27.203:6443/api/v1/nodes\": dial tcp 172.31.27.203:6443: connect: connection refused" node="ip-172-31-27-203"
Sep 4 17:22:58.369694 containerd[2104]: time="2024-09-04T17:22:58.369454128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-203,Uid:efdb39a9051e3bc464a24a1515d8ee81,Namespace:kube-system,Attempt:0,}"
Sep 4 17:22:58.376211 containerd[2104]: time="2024-09-04T17:22:58.376171784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-203,Uid:f130210c95a76821e95582f614a5c728,Namespace:kube-system,Attempt:0,}"
Sep 4 17:22:58.378013 containerd[2104]: time="2024-09-04T17:22:58.376464416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-203,Uid:14b77d526d7de00d975320faeeb15c3e,Namespace:kube-system,Attempt:0,}"
Sep 4 17:22:58.506937 kubelet[3000]: E0904 17:22:58.506805 3000 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-203?timeout=10s\": dial tcp 172.31.27.203:6443: connect: connection refused" interval="800ms"
Sep 4 17:22:58.611413 kubelet[3000]: I0904 17:22:58.611384 3000 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-27-203"
Sep 4 17:22:58.611898 kubelet[3000]: E0904 17:22:58.611871 3000 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.27.203:6443/api/v1/nodes\": dial tcp 172.31.27.203:6443: connect: connection refused" node="ip-172-31-27-203"
Sep 4 17:22:58.693778 kubelet[3000]: E0904 17:22:58.693508 3000 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-27-203.17f21a59419b0fbf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-27-203", UID:"ip-172-31-27-203", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-27-203"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 22, 57, 891413951, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 22, 57, 891413951, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-27-203"}': 'Post "https://172.31.27.203:6443/api/v1/namespaces/default/events": dial tcp 172.31.27.203:6443: connect: connection refused'(may retry after sleeping)
Sep 4 17:22:58.867412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027775635.mount: Deactivated successfully.
Sep 4 17:22:58.877628 containerd[2104]: time="2024-09-04T17:22:58.877568561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:58.879035 containerd[2104]: time="2024-09-04T17:22:58.878963033Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:58.880243 containerd[2104]: time="2024-09-04T17:22:58.880198924Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:22:58.880631 containerd[2104]: time="2024-09-04T17:22:58.880594349Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:22:58.901649 containerd[2104]: time="2024-09-04T17:22:58.901585658Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:58.909999 containerd[2104]: time="2024-09-04T17:22:58.909941691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:22:58.928484 containerd[2104]: time="2024-09-04T17:22:58.928429604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.253238ms" Sep 4 17:22:58.935536 containerd[2104]: time="2024-09-04T17:22:58.935372713Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:58.940022 containerd[2104]: time="2024-09-04T17:22:58.939967885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:22:58.940570 containerd[2104]: time="2024-09-04T17:22:58.940479956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.389865ms" Sep 4 17:22:58.947923 containerd[2104]: time="2024-09-04T17:22:58.947811067Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.604628ms" Sep 4 17:22:59.121665 kubelet[3000]: W0904 17:22:59.121592 3000 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.27.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused Sep 4 17:22:59.121665 kubelet[3000]: E0904 17:22:59.121663 3000 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused Sep 4 17:22:59.150775 kubelet[3000]: W0904 17:22:59.150698 3000 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://172.31.27.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-203&limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused Sep 4 17:22:59.150923 kubelet[3000]: E0904 17:22:59.150793 3000 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-203&limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused Sep 4 17:22:59.199456 containerd[2104]: time="2024-09-04T17:22:59.197807945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:22:59.199456 containerd[2104]: time="2024-09-04T17:22:59.197867478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:59.199456 containerd[2104]: time="2024-09-04T17:22:59.197899552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:22:59.199456 containerd[2104]: time="2024-09-04T17:22:59.197922622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:59.200091 containerd[2104]: time="2024-09-04T17:22:59.195571901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:22:59.200560 containerd[2104]: time="2024-09-04T17:22:59.200132389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:59.200560 containerd[2104]: time="2024-09-04T17:22:59.200181164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:22:59.200560 containerd[2104]: time="2024-09-04T17:22:59.200196472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:59.206727 containerd[2104]: time="2024-09-04T17:22:59.206171146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:22:59.206727 containerd[2104]: time="2024-09-04T17:22:59.206247400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:59.206727 containerd[2104]: time="2024-09-04T17:22:59.206283098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:22:59.206727 containerd[2104]: time="2024-09-04T17:22:59.206305282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:22:59.287523 kubelet[3000]: W0904 17:22:59.287436 3000 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.27.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused Sep 4 17:22:59.287523 kubelet[3000]: E0904 17:22:59.287540 3000 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused Sep 4 17:22:59.308326 kubelet[3000]: E0904 17:22:59.308024 3000 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-203?timeout=10s\": dial tcp 172.31.27.203:6443: connect: connection refused" interval="1.6s" Sep 4 17:22:59.379298 containerd[2104]: time="2024-09-04T17:22:59.378161202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-203,Uid:efdb39a9051e3bc464a24a1515d8ee81,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d4a725ab9e55b2e06865b95f03647145d0fd826a028b09356f44c76cd1703ce\"" Sep 4 17:22:59.388047 containerd[2104]: time="2024-09-04T17:22:59.387799617Z" level=info msg="CreateContainer within sandbox \"6d4a725ab9e55b2e06865b95f03647145d0fd826a028b09356f44c76cd1703ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:22:59.395748 containerd[2104]: time="2024-09-04T17:22:59.395684815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-203,Uid:f130210c95a76821e95582f614a5c728,Namespace:kube-system,Attempt:0,} returns sandbox id \"60c7838edc571cc7968a3ca74e78498176a798226a111fc910328f5b93193c19\"" Sep 4 
17:22:59.399959 containerd[2104]: time="2024-09-04T17:22:59.399481005Z" level=info msg="CreateContainer within sandbox \"60c7838edc571cc7968a3ca74e78498176a798226a111fc910328f5b93193c19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:22:59.406565 containerd[2104]: time="2024-09-04T17:22:59.406528563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-203,Uid:14b77d526d7de00d975320faeeb15c3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f295af1affcd564739a13c1035a615bd65d8b5c35fa9c23c66b0b5087bfd4e50\"" Sep 4 17:22:59.413413 containerd[2104]: time="2024-09-04T17:22:59.413275610Z" level=info msg="CreateContainer within sandbox \"f295af1affcd564739a13c1035a615bd65d8b5c35fa9c23c66b0b5087bfd4e50\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:22:59.415916 kubelet[3000]: I0904 17:22:59.415468 3000 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-27-203" Sep 4 17:22:59.415916 kubelet[3000]: E0904 17:22:59.415896 3000 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.27.203:6443/api/v1/nodes\": dial tcp 172.31.27.203:6443: connect: connection refused" node="ip-172-31-27-203" Sep 4 17:22:59.424723 containerd[2104]: time="2024-09-04T17:22:59.424679069Z" level=info msg="CreateContainer within sandbox \"60c7838edc571cc7968a3ca74e78498176a798226a111fc910328f5b93193c19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ac7e6c7a3c50ec34cfb9c3a6c27e58ac830b351b23a649601c6700b8f9a0d7c\"" Sep 4 17:22:59.426819 containerd[2104]: time="2024-09-04T17:22:59.425683519Z" level=info msg="StartContainer for \"2ac7e6c7a3c50ec34cfb9c3a6c27e58ac830b351b23a649601c6700b8f9a0d7c\"" Sep 4 17:22:59.432662 containerd[2104]: time="2024-09-04T17:22:59.432565728Z" level=info msg="CreateContainer within sandbox \"6d4a725ab9e55b2e06865b95f03647145d0fd826a028b09356f44c76cd1703ce\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"52983a7d0ed6e92230e4c834308aa807eb0c32a621394e17cf7ce798ad4a3b04\"" Sep 4 17:22:59.433826 containerd[2104]: time="2024-09-04T17:22:59.433602311Z" level=info msg="StartContainer for \"52983a7d0ed6e92230e4c834308aa807eb0c32a621394e17cf7ce798ad4a3b04\"" Sep 4 17:22:59.435334 containerd[2104]: time="2024-09-04T17:22:59.435298322Z" level=info msg="CreateContainer within sandbox \"f295af1affcd564739a13c1035a615bd65d8b5c35fa9c23c66b0b5087bfd4e50\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6b295eac5eec59d2a135c8524525d8be6e0c2a139c2a655940b191372f933e10\"" Sep 4 17:22:59.436528 containerd[2104]: time="2024-09-04T17:22:59.436496322Z" level=info msg="StartContainer for \"6b295eac5eec59d2a135c8524525d8be6e0c2a139c2a655940b191372f933e10\"" Sep 4 17:22:59.521163 kubelet[3000]: W0904 17:22:59.520594 3000 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.27.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused Sep 4 17:22:59.523395 kubelet[3000]: E0904 17:22:59.523166 3000 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.203:6443: connect: connection refused Sep 4 17:22:59.613542 containerd[2104]: time="2024-09-04T17:22:59.613490254Z" level=info msg="StartContainer for \"52983a7d0ed6e92230e4c834308aa807eb0c32a621394e17cf7ce798ad4a3b04\" returns successfully" Sep 4 17:22:59.667961 containerd[2104]: time="2024-09-04T17:22:59.667850147Z" level=info msg="StartContainer for \"6b295eac5eec59d2a135c8524525d8be6e0c2a139c2a655940b191372f933e10\" returns successfully" Sep 4 17:22:59.690883 containerd[2104]: 
time="2024-09-04T17:22:59.690735388Z" level=info msg="StartContainer for \"2ac7e6c7a3c50ec34cfb9c3a6c27e58ac830b351b23a649601c6700b8f9a0d7c\" returns successfully" Sep 4 17:22:59.848578 kubelet[3000]: E0904 17:22:59.848448 3000 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.203:6443: connect: connection refused Sep 4 17:23:01.019849 kubelet[3000]: I0904 17:23:01.018298 3000 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-27-203" Sep 4 17:23:03.637627 kubelet[3000]: E0904 17:23:03.637571 3000 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-203\" not found" node="ip-172-31-27-203" Sep 4 17:23:03.760262 kubelet[3000]: I0904 17:23:03.759827 3000 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-27-203" Sep 4 17:23:03.894846 kubelet[3000]: I0904 17:23:03.893907 3000 apiserver.go:52] "Watching apiserver" Sep 4 17:23:03.909688 kubelet[3000]: I0904 17:23:03.909630 3000 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:23:05.077954 update_engine[2076]: I0904 17:23:05.077868 2076 update_attempter.cc:509] Updating boot flags... Sep 4 17:23:05.167805 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3290) Sep 4 17:23:05.426947 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3289) Sep 4 17:23:06.838420 systemd[1]: Reloading requested from client PID 3459 ('systemctl') (unit session-7.scope)... Sep 4 17:23:06.838443 systemd[1]: Reloading... Sep 4 17:23:07.078664 zram_generator::config[3497]: No configuration found. 
Sep 4 17:23:07.325053 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:23:07.533816 systemd[1]: Reloading finished in 694 ms. Sep 4 17:23:07.582065 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:23:07.584041 kubelet[3000]: I0904 17:23:07.582774 3000 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:23:07.592604 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:23:07.593494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:23:07.599457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:23:07.914195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:23:07.927159 (kubelet)[3564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:23:08.070517 kubelet[3564]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:23:08.070517 kubelet[3564]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:23:08.070517 kubelet[3564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:23:08.070517 kubelet[3564]: I0904 17:23:08.070079 3564 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:23:08.083166 kubelet[3564]: I0904 17:23:08.082901 3564 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:23:08.083166 kubelet[3564]: I0904 17:23:08.082940 3564 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:23:08.084471 kubelet[3564]: I0904 17:23:08.083974 3564 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:23:08.086768 kubelet[3564]: I0904 17:23:08.086722 3564 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:23:08.096392 kubelet[3564]: I0904 17:23:08.096355 3564 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:23:08.114771 kubelet[3564]: I0904 17:23:08.114565 3564 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:23:08.115529 kubelet[3564]: I0904 17:23:08.115454 3564 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:23:08.116110 kubelet[3564]: I0904 17:23:08.116065 3564 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:23:08.117058 kubelet[3564]: I0904 17:23:08.116334 3564 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:23:08.117058 kubelet[3564]: I0904 17:23:08.116357 3564 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:23:08.117058 kubelet[3564]: I0904 
17:23:08.116471 3564 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:23:08.117058 kubelet[3564]: I0904 17:23:08.116633 3564 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:23:08.117058 kubelet[3564]: I0904 17:23:08.116652 3564 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:23:08.117058 kubelet[3564]: I0904 17:23:08.116697 3564 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:23:08.117058 kubelet[3564]: I0904 17:23:08.116721 3564 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:23:08.120574 kubelet[3564]: I0904 17:23:08.120556 3564 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:23:08.126642 kubelet[3564]: I0904 17:23:08.126615 3564 server.go:1232] "Started kubelet" Sep 4 17:23:08.133107 kubelet[3564]: I0904 17:23:08.132941 3564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:23:08.147864 kubelet[3564]: I0904 17:23:08.147252 3564 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:23:08.148690 kubelet[3564]: I0904 17:23:08.148672 3564 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:23:08.151596 kubelet[3564]: I0904 17:23:08.151566 3564 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:23:08.153105 kubelet[3564]: I0904 17:23:08.153085 3564 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:23:08.155823 kubelet[3564]: E0904 17:23:08.155800 3564 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:23:08.155975 kubelet[3564]: E0904 17:23:08.155965 3564 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:23:08.158661 kubelet[3564]: I0904 17:23:08.158634 3564 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:23:08.172986 kubelet[3564]: I0904 17:23:08.171012 3564 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:23:08.172986 kubelet[3564]: I0904 17:23:08.171324 3564 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:23:08.180809 kubelet[3564]: I0904 17:23:08.180035 3564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:23:08.184494 kubelet[3564]: I0904 17:23:08.184355 3564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:23:08.184494 kubelet[3564]: I0904 17:23:08.184407 3564 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:23:08.184494 kubelet[3564]: I0904 17:23:08.184431 3564 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:23:08.184721 kubelet[3564]: E0904 17:23:08.184592 3564 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:23:08.277724 kubelet[3564]: I0904 17:23:08.277400 3564 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-27-203" Sep 4 17:23:08.287688 kubelet[3564]: E0904 17:23:08.286995 3564 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:23:08.292708 kubelet[3564]: I0904 17:23:08.292682 3564 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-27-203" Sep 4 17:23:08.295388 kubelet[3564]: I0904 17:23:08.294048 3564 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-27-203" Sep 4 17:23:08.391667 kubelet[3564]: I0904 17:23:08.391405 3564 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:23:08.391667 
kubelet[3564]: I0904 17:23:08.391434 3564 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:23:08.391667 kubelet[3564]: I0904 17:23:08.391451 3564 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:23:08.391667 kubelet[3564]: I0904 17:23:08.391590 3564 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:23:08.391667 kubelet[3564]: I0904 17:23:08.391608 3564 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:23:08.391667 kubelet[3564]: I0904 17:23:08.391615 3564 policy_none.go:49] "None policy: Start" Sep 4 17:23:08.393520 kubelet[3564]: I0904 17:23:08.393214 3564 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:23:08.393520 kubelet[3564]: I0904 17:23:08.393238 3564 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:23:08.393520 kubelet[3564]: I0904 17:23:08.393440 3564 state_mem.go:75] "Updated machine memory state" Sep 4 17:23:08.396453 kubelet[3564]: I0904 17:23:08.395504 3564 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:23:08.397525 kubelet[3564]: I0904 17:23:08.397508 3564 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:23:08.487891 kubelet[3564]: I0904 17:23:08.487779 3564 topology_manager.go:215] "Topology Admit Handler" podUID="f130210c95a76821e95582f614a5c728" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-203" Sep 4 17:23:08.488293 kubelet[3564]: I0904 17:23:08.487910 3564 topology_manager.go:215] "Topology Admit Handler" podUID="14b77d526d7de00d975320faeeb15c3e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-203" Sep 4 17:23:08.488293 kubelet[3564]: I0904 17:23:08.487962 3564 topology_manager.go:215] "Topology Admit Handler" podUID="efdb39a9051e3bc464a24a1515d8ee81" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-203" Sep 4 17:23:08.498423 kubelet[3564]: E0904 17:23:08.497277 3564 kubelet.go:1890] 
"Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-27-203\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-27-203" Sep 4 17:23:08.499009 kubelet[3564]: E0904 17:23:08.498980 3564 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-203\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-203" Sep 4 17:23:08.500167 kubelet[3564]: E0904 17:23:08.500139 3564 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-27-203\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-203" Sep 4 17:23:08.582990 kubelet[3564]: I0904 17:23:08.582435 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203" Sep 4 17:23:08.582990 kubelet[3564]: I0904 17:23:08.582482 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203" Sep 4 17:23:08.582990 kubelet[3564]: I0904 17:23:08.582580 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f130210c95a76821e95582f614a5c728-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-203\" (UID: \"f130210c95a76821e95582f614a5c728\") " pod="kube-system/kube-scheduler-ip-172-31-27-203" Sep 4 17:23:08.582990 kubelet[3564]: I0904 17:23:08.582603 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14b77d526d7de00d975320faeeb15c3e-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-203\" (UID: \"14b77d526d7de00d975320faeeb15c3e\") " pod="kube-system/kube-apiserver-ip-172-31-27-203" Sep 4 17:23:08.582990 kubelet[3564]: I0904 17:23:08.582627 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14b77d526d7de00d975320faeeb15c3e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-203\" (UID: \"14b77d526d7de00d975320faeeb15c3e\") " pod="kube-system/kube-apiserver-ip-172-31-27-203" Sep 4 17:23:08.583336 kubelet[3564]: I0904 17:23:08.582649 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203" Sep 4 17:23:08.583336 kubelet[3564]: I0904 17:23:08.582669 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14b77d526d7de00d975320faeeb15c3e-ca-certs\") pod \"kube-apiserver-ip-172-31-27-203\" (UID: \"14b77d526d7de00d975320faeeb15c3e\") " pod="kube-system/kube-apiserver-ip-172-31-27-203" Sep 4 17:23:08.583336 kubelet[3564]: I0904 17:23:08.582695 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203" Sep 4 17:23:08.583336 kubelet[3564]: I0904 17:23:08.582801 3564 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efdb39a9051e3bc464a24a1515d8ee81-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-203\" (UID: \"efdb39a9051e3bc464a24a1515d8ee81\") " pod="kube-system/kube-controller-manager-ip-172-31-27-203" Sep 4 17:23:09.118631 kubelet[3564]: I0904 17:23:09.118549 3564 apiserver.go:52] "Watching apiserver" Sep 4 17:23:09.172790 kubelet[3564]: I0904 17:23:09.172065 3564 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:23:09.420673 kubelet[3564]: I0904 17:23:09.418332 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-203" podStartSLOduration=3.415822062 podCreationTimestamp="2024-09-04 17:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:23:09.343473829 +0000 UTC m=+1.401898688" watchObservedRunningTime="2024-09-04 17:23:09.415822062 +0000 UTC m=+1.474246904" Sep 4 17:23:09.445781 kubelet[3564]: I0904 17:23:09.445099 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-203" podStartSLOduration=5.445050557 podCreationTimestamp="2024-09-04 17:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:23:09.418546324 +0000 UTC m=+1.476971167" watchObservedRunningTime="2024-09-04 17:23:09.445050557 +0000 UTC m=+1.503475403" Sep 4 17:23:09.461642 kubelet[3564]: I0904 17:23:09.460061 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-203" podStartSLOduration=5.460013815 podCreationTimestamp="2024-09-04 17:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:23:09.446729732 +0000 UTC m=+1.505154571" watchObservedRunningTime="2024-09-04 17:23:09.460013815 +0000 UTC m=+1.518438657" Sep 4 17:23:13.922279 sudo[2454]: pam_unix(sudo:session): session closed for user root Sep 4 17:23:13.958851 sshd[2450]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:13.980850 systemd[1]: sshd@6-172.31.27.203:22-139.178.68.195:53312.service: Deactivated successfully. Sep 4 17:23:13.988136 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:23:13.990451 systemd-logind[2069]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:23:13.992582 systemd-logind[2069]: Removed session 7. Sep 4 17:23:19.682874 kubelet[3564]: I0904 17:23:19.682689 3564 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:23:19.684329 containerd[2104]: time="2024-09-04T17:23:19.683388089Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 17:23:19.685943 kubelet[3564]: I0904 17:23:19.685079 3564 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:23:20.390745 kubelet[3564]: I0904 17:23:20.390676 3564 topology_manager.go:215] "Topology Admit Handler" podUID="c9f7ceca-aa86-4284-ba67-bdef931e2f28" podNamespace="kube-system" podName="kube-proxy-84tvs" Sep 4 17:23:20.514539 kubelet[3564]: I0904 17:23:20.514010 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czmlk\" (UniqueName: \"kubernetes.io/projected/c9f7ceca-aa86-4284-ba67-bdef931e2f28-kube-api-access-czmlk\") pod \"kube-proxy-84tvs\" (UID: \"c9f7ceca-aa86-4284-ba67-bdef931e2f28\") " pod="kube-system/kube-proxy-84tvs" Sep 4 17:23:20.514539 kubelet[3564]: I0904 17:23:20.514071 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9f7ceca-aa86-4284-ba67-bdef931e2f28-xtables-lock\") pod \"kube-proxy-84tvs\" (UID: \"c9f7ceca-aa86-4284-ba67-bdef931e2f28\") " pod="kube-system/kube-proxy-84tvs" Sep 4 17:23:20.514539 kubelet[3564]: I0904 17:23:20.514109 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9f7ceca-aa86-4284-ba67-bdef931e2f28-lib-modules\") pod \"kube-proxy-84tvs\" (UID: \"c9f7ceca-aa86-4284-ba67-bdef931e2f28\") " pod="kube-system/kube-proxy-84tvs" Sep 4 17:23:20.514539 kubelet[3564]: I0904 17:23:20.514141 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9f7ceca-aa86-4284-ba67-bdef931e2f28-kube-proxy\") pod \"kube-proxy-84tvs\" (UID: \"c9f7ceca-aa86-4284-ba67-bdef931e2f28\") " pod="kube-system/kube-proxy-84tvs" Sep 4 17:23:20.553546 kubelet[3564]: I0904 17:23:20.550546 3564 topology_manager.go:215] "Topology Admit 
Handler" podUID="32345abc-6484-4cec-bdcb-da09368d5cab" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-5k5jj" Sep 4 17:23:20.614597 kubelet[3564]: I0904 17:23:20.614562 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32345abc-6484-4cec-bdcb-da09368d5cab-var-lib-calico\") pod \"tigera-operator-5d56685c77-5k5jj\" (UID: \"32345abc-6484-4cec-bdcb-da09368d5cab\") " pod="tigera-operator/tigera-operator-5d56685c77-5k5jj" Sep 4 17:23:20.614884 kubelet[3564]: I0904 17:23:20.614867 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48qn8\" (UniqueName: \"kubernetes.io/projected/32345abc-6484-4cec-bdcb-da09368d5cab-kube-api-access-48qn8\") pod \"tigera-operator-5d56685c77-5k5jj\" (UID: \"32345abc-6484-4cec-bdcb-da09368d5cab\") " pod="tigera-operator/tigera-operator-5d56685c77-5k5jj" Sep 4 17:23:20.708720 containerd[2104]: time="2024-09-04T17:23:20.708074577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-84tvs,Uid:c9f7ceca-aa86-4284-ba67-bdef931e2f28,Namespace:kube-system,Attempt:0,}" Sep 4 17:23:20.765285 containerd[2104]: time="2024-09-04T17:23:20.765145143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:20.765532 containerd[2104]: time="2024-09-04T17:23:20.765330623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:20.765698 containerd[2104]: time="2024-09-04T17:23:20.765406345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:20.765698 containerd[2104]: time="2024-09-04T17:23:20.765430532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:20.872000 containerd[2104]: time="2024-09-04T17:23:20.871950077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-5k5jj,Uid:32345abc-6484-4cec-bdcb-da09368d5cab,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:23:20.885314 containerd[2104]: time="2024-09-04T17:23:20.885268952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-84tvs,Uid:c9f7ceca-aa86-4284-ba67-bdef931e2f28,Namespace:kube-system,Attempt:0,} returns sandbox id \"bddb6f9a0fa3733b049edc398ea8b9c0a8bbff1a23910c4d5990d1ab3abdcccf\"" Sep 4 17:23:20.891543 containerd[2104]: time="2024-09-04T17:23:20.891494040Z" level=info msg="CreateContainer within sandbox \"bddb6f9a0fa3733b049edc398ea8b9c0a8bbff1a23910c4d5990d1ab3abdcccf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:23:20.924899 containerd[2104]: time="2024-09-04T17:23:20.924705631Z" level=info msg="CreateContainer within sandbox \"bddb6f9a0fa3733b049edc398ea8b9c0a8bbff1a23910c4d5990d1ab3abdcccf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d1c690f21ea7d1cc1d61a6c1b1c340ce4cbf95f8f54a201804ffe4e696d65e0\"" Sep 4 17:23:20.938791 containerd[2104]: time="2024-09-04T17:23:20.930701696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:20.938791 containerd[2104]: time="2024-09-04T17:23:20.930815035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:20.938791 containerd[2104]: time="2024-09-04T17:23:20.930880783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:20.938791 containerd[2104]: time="2024-09-04T17:23:20.930905617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:20.947841 containerd[2104]: time="2024-09-04T17:23:20.947715816Z" level=info msg="StartContainer for \"9d1c690f21ea7d1cc1d61a6c1b1c340ce4cbf95f8f54a201804ffe4e696d65e0\"" Sep 4 17:23:21.109194 containerd[2104]: time="2024-09-04T17:23:21.109144422Z" level=info msg="StartContainer for \"9d1c690f21ea7d1cc1d61a6c1b1c340ce4cbf95f8f54a201804ffe4e696d65e0\" returns successfully" Sep 4 17:23:21.112985 containerd[2104]: time="2024-09-04T17:23:21.111665686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-5k5jj,Uid:32345abc-6484-4cec-bdcb-da09368d5cab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"66f20e738be9989ea115fde2fff2769b26bc29ae3099923fdc4751fb9349ebf3\"" Sep 4 17:23:21.114024 containerd[2104]: time="2024-09-04T17:23:21.113992085Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:23:22.526851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256354288.mount: Deactivated successfully. 
Sep 4 17:23:23.460898 containerd[2104]: time="2024-09-04T17:23:23.460847767Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:23.462102 containerd[2104]: time="2024-09-04T17:23:23.461952048Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136541" Sep 4 17:23:23.463173 containerd[2104]: time="2024-09-04T17:23:23.462972899Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:23.467225 containerd[2104]: time="2024-09-04T17:23:23.467181821Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:23.468024 containerd[2104]: time="2024-09-04T17:23:23.467984760Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.353927367s" Sep 4 17:23:23.468924 containerd[2104]: time="2024-09-04T17:23:23.468029210Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:23:23.471502 containerd[2104]: time="2024-09-04T17:23:23.470288559Z" level=info msg="CreateContainer within sandbox \"66f20e738be9989ea115fde2fff2769b26bc29ae3099923fdc4751fb9349ebf3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:23:23.486412 containerd[2104]: time="2024-09-04T17:23:23.486367211Z" level=info msg="CreateContainer within sandbox 
\"66f20e738be9989ea115fde2fff2769b26bc29ae3099923fdc4751fb9349ebf3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab614db05721fcda4688cafc636f3f85b21e3ff6f3f1228d7289f55200970f40\"" Sep 4 17:23:23.491201 containerd[2104]: time="2024-09-04T17:23:23.491119193Z" level=info msg="StartContainer for \"ab614db05721fcda4688cafc636f3f85b21e3ff6f3f1228d7289f55200970f40\"" Sep 4 17:23:23.570402 containerd[2104]: time="2024-09-04T17:23:23.569426680Z" level=info msg="StartContainer for \"ab614db05721fcda4688cafc636f3f85b21e3ff6f3f1228d7289f55200970f40\" returns successfully" Sep 4 17:23:24.354859 kubelet[3564]: I0904 17:23:24.354684 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-84tvs" podStartSLOduration=4.354569929 podCreationTimestamp="2024-09-04 17:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:23:21.30868913 +0000 UTC m=+13.367113976" watchObservedRunningTime="2024-09-04 17:23:24.354569929 +0000 UTC m=+16.412994773" Sep 4 17:23:24.354859 kubelet[3564]: I0904 17:23:24.354829 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-5k5jj" podStartSLOduration=1.999877713 podCreationTimestamp="2024-09-04 17:23:20 +0000 UTC" firstStartedPulling="2024-09-04 17:23:21.113469075 +0000 UTC m=+13.171893907" lastFinishedPulling="2024-09-04 17:23:23.468389444 +0000 UTC m=+15.526814282" observedRunningTime="2024-09-04 17:23:24.354412015 +0000 UTC m=+16.412836862" watchObservedRunningTime="2024-09-04 17:23:24.354798088 +0000 UTC m=+16.413222935" Sep 4 17:23:27.017783 kubelet[3564]: I0904 17:23:27.012938 3564 topology_manager.go:215] "Topology Admit Handler" podUID="261c2e6c-68f2-467e-b239-c173b9d87ec4" podNamespace="calico-system" podName="calico-typha-59699d6468-9kd5t" Sep 4 17:23:27.081795 kubelet[3564]: I0904 17:23:27.081738 
3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz4fh\" (UniqueName: \"kubernetes.io/projected/261c2e6c-68f2-467e-b239-c173b9d87ec4-kube-api-access-cz4fh\") pod \"calico-typha-59699d6468-9kd5t\" (UID: \"261c2e6c-68f2-467e-b239-c173b9d87ec4\") " pod="calico-system/calico-typha-59699d6468-9kd5t" Sep 4 17:23:27.084721 kubelet[3564]: I0904 17:23:27.084691 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/261c2e6c-68f2-467e-b239-c173b9d87ec4-typha-certs\") pod \"calico-typha-59699d6468-9kd5t\" (UID: \"261c2e6c-68f2-467e-b239-c173b9d87ec4\") " pod="calico-system/calico-typha-59699d6468-9kd5t" Sep 4 17:23:27.084970 kubelet[3564]: I0904 17:23:27.084954 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/261c2e6c-68f2-467e-b239-c173b9d87ec4-tigera-ca-bundle\") pod \"calico-typha-59699d6468-9kd5t\" (UID: \"261c2e6c-68f2-467e-b239-c173b9d87ec4\") " pod="calico-system/calico-typha-59699d6468-9kd5t" Sep 4 17:23:27.247874 kubelet[3564]: I0904 17:23:27.246326 3564 topology_manager.go:215] "Topology Admit Handler" podUID="5d441f44-873e-421c-8454-c1be9a99891f" podNamespace="calico-system" podName="calico-node-rgvh8" Sep 4 17:23:27.289038 kubelet[3564]: I0904 17:23:27.288929 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5d441f44-873e-421c-8454-c1be9a99891f-node-certs\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.289373 kubelet[3564]: I0904 17:23:27.289234 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/5d441f44-873e-421c-8454-c1be9a99891f-cni-log-dir\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.289559 kubelet[3564]: I0904 17:23:27.289546 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d441f44-873e-421c-8454-c1be9a99891f-lib-modules\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.289670 kubelet[3564]: I0904 17:23:27.289660 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d441f44-873e-421c-8454-c1be9a99891f-xtables-lock\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.289977 kubelet[3564]: I0904 17:23:27.289808 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kq94\" (UniqueName: \"kubernetes.io/projected/5d441f44-873e-421c-8454-c1be9a99891f-kube-api-access-4kq94\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.289977 kubelet[3564]: I0904 17:23:27.289849 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5d441f44-873e-421c-8454-c1be9a99891f-var-lib-calico\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.289977 kubelet[3564]: I0904 17:23:27.289881 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/5d441f44-873e-421c-8454-c1be9a99891f-cni-bin-dir\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.289977 kubelet[3564]: I0904 17:23:27.289906 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5d441f44-873e-421c-8454-c1be9a99891f-cni-net-dir\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.289977 kubelet[3564]: I0904 17:23:27.289930 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5d441f44-873e-421c-8454-c1be9a99891f-flexvol-driver-host\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.290196 kubelet[3564]: I0904 17:23:27.289957 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5d441f44-873e-421c-8454-c1be9a99891f-policysync\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.290196 kubelet[3564]: I0904 17:23:27.289996 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d441f44-873e-421c-8454-c1be9a99891f-tigera-ca-bundle\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.290196 kubelet[3564]: I0904 17:23:27.290041 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/5d441f44-873e-421c-8454-c1be9a99891f-var-run-calico\") pod \"calico-node-rgvh8\" (UID: \"5d441f44-873e-421c-8454-c1be9a99891f\") " pod="calico-system/calico-node-rgvh8" Sep 4 17:23:27.326211 containerd[2104]: time="2024-09-04T17:23:27.326159552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59699d6468-9kd5t,Uid:261c2e6c-68f2-467e-b239-c173b9d87ec4,Namespace:calico-system,Attempt:0,}" Sep 4 17:23:27.377509 kubelet[3564]: I0904 17:23:27.377209 3564 topology_manager.go:215] "Topology Admit Handler" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" podNamespace="calico-system" podName="csi-node-driver-plsms" Sep 4 17:23:27.382228 kubelet[3564]: E0904 17:23:27.381092 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plsms" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" Sep 4 17:23:27.452658 kubelet[3564]: E0904 17:23:27.448892 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.452658 kubelet[3564]: W0904 17:23:27.448920 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.452658 kubelet[3564]: E0904 17:23:27.448955 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:23:27.456648 kubelet[3564]: E0904 17:23:27.452996 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.456648 kubelet[3564]: W0904 17:23:27.453034 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.456648 kubelet[3564]: E0904 17:23:27.453065 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:23:27.456648 kubelet[3564]: E0904 17:23:27.453451 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.456648 kubelet[3564]: W0904 17:23:27.453465 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.456648 kubelet[3564]: E0904 17:23:27.453484 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:23:27.456648 kubelet[3564]: E0904 17:23:27.453793 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.456648 kubelet[3564]: W0904 17:23:27.453806 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.456648 kubelet[3564]: E0904 17:23:27.453823 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:23:27.468559 kubelet[3564]: E0904 17:23:27.463832 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.468559 kubelet[3564]: W0904 17:23:27.463861 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.468559 kubelet[3564]: E0904 17:23:27.463892 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:23:27.468559 kubelet[3564]: E0904 17:23:27.464221 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.468559 kubelet[3564]: W0904 17:23:27.464234 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.468559 kubelet[3564]: E0904 17:23:27.464252 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:23:27.468559 kubelet[3564]: E0904 17:23:27.464457 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.468559 kubelet[3564]: W0904 17:23:27.464467 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.468559 kubelet[3564]: E0904 17:23:27.464483 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:23:27.469335 kubelet[3564]: E0904 17:23:27.468663 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.469335 kubelet[3564]: W0904 17:23:27.468680 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.469335 kubelet[3564]: E0904 17:23:27.468707 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:23:27.474956 kubelet[3564]: E0904 17:23:27.474272 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.474956 kubelet[3564]: W0904 17:23:27.474298 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.474956 kubelet[3564]: E0904 17:23:27.474326 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:23:27.476937 kubelet[3564]: E0904 17:23:27.476877 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.476937 kubelet[3564]: W0904 17:23:27.476898 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.476937 kubelet[3564]: E0904 17:23:27.476926 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:23:27.479339 kubelet[3564]: E0904 17:23:27.478919 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.479339 kubelet[3564]: W0904 17:23:27.478941 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.479339 kubelet[3564]: E0904 17:23:27.478972 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:23:27.491745 kubelet[3564]: E0904 17:23:27.490117 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.491745 kubelet[3564]: W0904 17:23:27.490145 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.497368 kubelet[3564]: E0904 17:23:27.496817 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:23:27.499951 kubelet[3564]: E0904 17:23:27.499873 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.499951 kubelet[3564]: W0904 17:23:27.499904 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.499951 kubelet[3564]: E0904 17:23:27.499941 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 17:23:27.500947 kubelet[3564]: E0904 17:23:27.500827 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.506163 kubelet[3564]: W0904 17:23:27.500847 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.506163 kubelet[3564]: E0904 17:23:27.501179 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.519114 kubelet[3564]: E0904 17:23:27.517648 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.519114 kubelet[3564]: W0904 17:23:27.517684 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.519114 kubelet[3564]: E0904 17:23:27.517715 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.520079 kubelet[3564]: E0904 17:23:27.520049 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.520079 kubelet[3564]: W0904 17:23:27.520078 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.520276 kubelet[3564]: E0904 17:23:27.520105 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.528771 kubelet[3564]: E0904 17:23:27.523884 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.528771 kubelet[3564]: W0904 17:23:27.523911 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.528771 kubelet[3564]: E0904 17:23:27.523943 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.548792 kubelet[3564]: E0904 17:23:27.535845 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.548792 kubelet[3564]: W0904 17:23:27.535872 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.548792 kubelet[3564]: E0904 17:23:27.535914 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.548792 kubelet[3564]: E0904 17:23:27.536524 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.548792 kubelet[3564]: W0904 17:23:27.536540 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.548792 kubelet[3564]: E0904 17:23:27.536744 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.548792 kubelet[3564]: E0904 17:23:27.547516 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.548792 kubelet[3564]: W0904 17:23:27.547582 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.554725 kubelet[3564]: E0904 17:23:27.549019 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.566152 kubelet[3564]: E0904 17:23:27.564050 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.566152 kubelet[3564]: W0904 17:23:27.565607 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.573907 kubelet[3564]: E0904 17:23:27.570964 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.575575 kubelet[3564]: E0904 17:23:27.574403 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.575575 kubelet[3564]: W0904 17:23:27.574427 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.575575 kubelet[3564]: E0904 17:23:27.574458 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.591473 kubelet[3564]: E0904 17:23:27.584954 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.591473 kubelet[3564]: W0904 17:23:27.584983 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.591473 kubelet[3564]: E0904 17:23:27.585012 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.594062 kubelet[3564]: E0904 17:23:27.594029 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.594062 kubelet[3564]: W0904 17:23:27.594053 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.594269 kubelet[3564]: E0904 17:23:27.594079 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.598590 kubelet[3564]: E0904 17:23:27.598482 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.598590 kubelet[3564]: W0904 17:23:27.598511 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.598590 kubelet[3564]: E0904 17:23:27.598542 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.600002 kubelet[3564]: E0904 17:23:27.599113 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.600002 kubelet[3564]: W0904 17:23:27.599126 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.600002 kubelet[3564]: E0904 17:23:27.599146 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.602960 containerd[2104]: time="2024-09-04T17:23:27.601957591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:23:27.602960 containerd[2104]: time="2024-09-04T17:23:27.602050984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:23:27.602960 containerd[2104]: time="2024-09-04T17:23:27.602077266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:23:27.602960 containerd[2104]: time="2024-09-04T17:23:27.602100325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:23:27.607074 containerd[2104]: time="2024-09-04T17:23:27.607019159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rgvh8,Uid:5d441f44-873e-421c-8454-c1be9a99891f,Namespace:calico-system,Attempt:0,}"
Sep 4 17:23:27.610220 kubelet[3564]: E0904 17:23:27.610191 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.610220 kubelet[3564]: W0904 17:23:27.610220 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.610642 kubelet[3564]: E0904 17:23:27.610251 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.610642 kubelet[3564]: I0904 17:23:27.610302 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dbb2c308-b34e-470f-bf61-160922ef3eb4-socket-dir\") pod \"csi-node-driver-plsms\" (UID: \"dbb2c308-b34e-470f-bf61-160922ef3eb4\") " pod="calico-system/csi-node-driver-plsms"
Sep 4 17:23:27.618163 kubelet[3564]: E0904 17:23:27.617854 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.618163 kubelet[3564]: W0904 17:23:27.617882 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.618163 kubelet[3564]: E0904 17:23:27.617915 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.618163 kubelet[3564]: I0904 17:23:27.617959 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnql2\" (UniqueName: \"kubernetes.io/projected/dbb2c308-b34e-470f-bf61-160922ef3eb4-kube-api-access-mnql2\") pod \"csi-node-driver-plsms\" (UID: \"dbb2c308-b34e-470f-bf61-160922ef3eb4\") " pod="calico-system/csi-node-driver-plsms"
Sep 4 17:23:27.618652 kubelet[3564]: E0904 17:23:27.618332 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.618652 kubelet[3564]: W0904 17:23:27.618347 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.618652 kubelet[3564]: E0904 17:23:27.618381 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.618652 kubelet[3564]: I0904 17:23:27.618418 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbb2c308-b34e-470f-bf61-160922ef3eb4-kubelet-dir\") pod \"csi-node-driver-plsms\" (UID: \"dbb2c308-b34e-470f-bf61-160922ef3eb4\") " pod="calico-system/csi-node-driver-plsms"
Sep 4 17:23:27.620901 kubelet[3564]: E0904 17:23:27.618661 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.620901 kubelet[3564]: W0904 17:23:27.618673 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.620901 kubelet[3564]: E0904 17:23:27.618692 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.620901 kubelet[3564]: I0904 17:23:27.618731 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dbb2c308-b34e-470f-bf61-160922ef3eb4-varrun\") pod \"csi-node-driver-plsms\" (UID: \"dbb2c308-b34e-470f-bf61-160922ef3eb4\") " pod="calico-system/csi-node-driver-plsms"
Sep 4 17:23:27.620901 kubelet[3564]: E0904 17:23:27.619417 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.620901 kubelet[3564]: W0904 17:23:27.619439 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.620901 kubelet[3564]: E0904 17:23:27.619460 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.620901 kubelet[3564]: I0904 17:23:27.619489 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dbb2c308-b34e-470f-bf61-160922ef3eb4-registration-dir\") pod \"csi-node-driver-plsms\" (UID: \"dbb2c308-b34e-470f-bf61-160922ef3eb4\") " pod="calico-system/csi-node-driver-plsms"
Sep 4 17:23:27.624785 kubelet[3564]: E0904 17:23:27.624624 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.624785 kubelet[3564]: W0904 17:23:27.624653 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.624785 kubelet[3564]: E0904 17:23:27.624693 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.627324 kubelet[3564]: E0904 17:23:27.626964 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.627324 kubelet[3564]: W0904 17:23:27.626991 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.627324 kubelet[3564]: E0904 17:23:27.627036 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.630705 kubelet[3564]: E0904 17:23:27.630072 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.630705 kubelet[3564]: W0904 17:23:27.630105 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.630705 kubelet[3564]: E0904 17:23:27.630158 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.636008 kubelet[3564]: E0904 17:23:27.634026 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.636008 kubelet[3564]: W0904 17:23:27.634051 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.636008 kubelet[3564]: E0904 17:23:27.634098 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.639447 kubelet[3564]: E0904 17:23:27.639411 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.639784 kubelet[3564]: W0904 17:23:27.639691 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.641778 kubelet[3564]: E0904 17:23:27.640047 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.643572 kubelet[3564]: E0904 17:23:27.643523 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.644000 kubelet[3564]: W0904 17:23:27.643926 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.644804 kubelet[3564]: E0904 17:23:27.644538 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.655531 kubelet[3564]: E0904 17:23:27.652317 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.655531 kubelet[3564]: W0904 17:23:27.652346 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.663743 kubelet[3564]: E0904 17:23:27.663709 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.664889 kubelet[3564]: E0904 17:23:27.664863 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.665106 kubelet[3564]: W0904 17:23:27.665022 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.665407 kubelet[3564]: E0904 17:23:27.665395 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.667206 kubelet[3564]: E0904 17:23:27.667006 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.667206 kubelet[3564]: W0904 17:23:27.667025 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.667206 kubelet[3564]: E0904 17:23:27.667053 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.669062 kubelet[3564]: E0904 17:23:27.668809 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.669062 kubelet[3564]: W0904 17:23:27.668827 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.669062 kubelet[3564]: E0904 17:23:27.668852 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.725534 kubelet[3564]: E0904 17:23:27.725455 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.725534 kubelet[3564]: W0904 17:23:27.725481 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.725534 kubelet[3564]: E0904 17:23:27.725514 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.729073 kubelet[3564]: E0904 17:23:27.729034 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.729434 kubelet[3564]: W0904 17:23:27.729253 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.729434 kubelet[3564]: E0904 17:23:27.729300 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.731057 kubelet[3564]: E0904 17:23:27.730145 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.731057 kubelet[3564]: W0904 17:23:27.730163 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.731057 kubelet[3564]: E0904 17:23:27.730190 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.731057 kubelet[3564]: E0904 17:23:27.730981 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.731057 kubelet[3564]: W0904 17:23:27.730993 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.731669 kubelet[3564]: E0904 17:23:27.731486 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.732031 kubelet[3564]: E0904 17:23:27.731917 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.732031 kubelet[3564]: W0904 17:23:27.731931 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.733547 kubelet[3564]: E0904 17:23:27.732364 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.733856 kubelet[3564]: E0904 17:23:27.733834 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.734006 kubelet[3564]: W0904 17:23:27.733956 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.734221 kubelet[3564]: E0904 17:23:27.734196 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.735329 kubelet[3564]: E0904 17:23:27.735302 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.735584 kubelet[3564]: W0904 17:23:27.735502 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.736931 kubelet[3564]: E0904 17:23:27.735951 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.736931 kubelet[3564]: E0904 17:23:27.736389 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.736931 kubelet[3564]: W0904 17:23:27.736401 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.737472 kubelet[3564]: E0904 17:23:27.737456 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.737472 kubelet[3564]: W0904 17:23:27.737472 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.738261 kubelet[3564]: E0904 17:23:27.738239 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.738339 kubelet[3564]: E0904 17:23:27.738318 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.739224 kubelet[3564]: E0904 17:23:27.738861 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.739224 kubelet[3564]: W0904 17:23:27.738880 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.739224 kubelet[3564]: E0904 17:23:27.738904 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.739224 kubelet[3564]: E0904 17:23:27.739148 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.739224 kubelet[3564]: W0904 17:23:27.739158 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.739224 kubelet[3564]: E0904 17:23:27.739217 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.739786 kubelet[3564]: E0904 17:23:27.739750 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.739786 kubelet[3564]: W0904 17:23:27.739786 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.739895 kubelet[3564]: E0904 17:23:27.739808 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.740771 kubelet[3564]: E0904 17:23:27.740717 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.740771 kubelet[3564]: W0904 17:23:27.740736 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.741176 kubelet[3564]: E0904 17:23:27.741158 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.741527 kubelet[3564]: E0904 17:23:27.741511 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.741527 kubelet[3564]: W0904 17:23:27.741528 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.741679 kubelet[3564]: E0904 17:23:27.741552 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.742598 kubelet[3564]: E0904 17:23:27.742566 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.742684 kubelet[3564]: W0904 17:23:27.742601 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.742774 kubelet[3564]: E0904 17:23:27.742693 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.744949 kubelet[3564]: E0904 17:23:27.744232 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.744949 kubelet[3564]: W0904 17:23:27.744249 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.744949 kubelet[3564]: E0904 17:23:27.744351 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.744949 kubelet[3564]: E0904 17:23:27.744574 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.744949 kubelet[3564]: W0904 17:23:27.744584 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.746914 kubelet[3564]: E0904 17:23:27.745032 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.746914 kubelet[3564]: E0904 17:23:27.746323 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.746914 kubelet[3564]: W0904 17:23:27.746336 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.746914 kubelet[3564]: E0904 17:23:27.746468 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.746914 kubelet[3564]: E0904 17:23:27.746623 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.746914 kubelet[3564]: W0904 17:23:27.746632 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.746914 kubelet[3564]: E0904 17:23:27.746718 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.747325 kubelet[3564]: E0904 17:23:27.747099 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.747325 kubelet[3564]: W0904 17:23:27.747110 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.747325 kubelet[3564]: E0904 17:23:27.747295 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.749497 kubelet[3564]: E0904 17:23:27.747931 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.749497 kubelet[3564]: W0904 17:23:27.747945 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.749497 kubelet[3564]: E0904 17:23:27.747969 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.750865 kubelet[3564]: E0904 17:23:27.750841 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.750865 kubelet[3564]: W0904 17:23:27.750860 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.751014 kubelet[3564]: E0904 17:23:27.750965 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:23:27.751661 kubelet[3564]: E0904 17:23:27.751633 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:23:27.751661 kubelet[3564]: W0904 17:23:27.751649 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:23:27.752822 kubelet[3564]: E0904 17:23:27.752800 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:23:27.759796 kubelet[3564]: E0904 17:23:27.759587 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.759796 kubelet[3564]: W0904 17:23:27.759615 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.761112 kubelet[3564]: E0904 17:23:27.760969 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.761112 kubelet[3564]: W0904 17:23:27.760991 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.761112 kubelet[3564]: E0904 17:23:27.761021 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:23:27.761112 kubelet[3564]: E0904 17:23:27.761066 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:23:27.762471 kubelet[3564]: E0904 17:23:27.762204 3564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:23:27.762471 kubelet[3564]: W0904 17:23:27.762224 3564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:23:27.762471 kubelet[3564]: E0904 17:23:27.762249 3564 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:23:27.770599 containerd[2104]: time="2024-09-04T17:23:27.770299850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:27.771946 containerd[2104]: time="2024-09-04T17:23:27.771708319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:27.771946 containerd[2104]: time="2024-09-04T17:23:27.771834747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:27.772653 containerd[2104]: time="2024-09-04T17:23:27.771909182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:27.859817 containerd[2104]: time="2024-09-04T17:23:27.858138080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rgvh8,Uid:5d441f44-873e-421c-8454-c1be9a99891f,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1e91b7febdb0dc0cb5bd96a93ee40c4d341305d9e19e3f72106e9eb48ef4afd\"" Sep 4 17:23:27.865424 containerd[2104]: time="2024-09-04T17:23:27.865040963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:23:28.123952 containerd[2104]: time="2024-09-04T17:23:28.122620305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59699d6468-9kd5t,Uid:261c2e6c-68f2-467e-b239-c173b9d87ec4,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce1aabd755d49266a5a5af2850b4bddafd1fa5567ced92d428ea835c7f7f1cc4\"" Sep 4 17:23:29.185956 kubelet[3564]: E0904 17:23:29.185915 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plsms" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" Sep 4 17:23:29.704324 containerd[2104]: time="2024-09-04T17:23:29.704280984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:29.707898 containerd[2104]: time="2024-09-04T17:23:29.706309515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:23:29.711780 containerd[2104]: time="2024-09-04T17:23:29.710555896Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:29.714659 containerd[2104]: 
time="2024-09-04T17:23:29.714622716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:29.716651 containerd[2104]: time="2024-09-04T17:23:29.716600200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.851500436s" Sep 4 17:23:29.716840 containerd[2104]: time="2024-09-04T17:23:29.716818944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:23:29.717943 containerd[2104]: time="2024-09-04T17:23:29.717916777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:23:29.720141 containerd[2104]: time="2024-09-04T17:23:29.720110258Z" level=info msg="CreateContainer within sandbox \"e1e91b7febdb0dc0cb5bd96a93ee40c4d341305d9e19e3f72106e9eb48ef4afd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:23:29.750005 containerd[2104]: time="2024-09-04T17:23:29.749960596Z" level=info msg="CreateContainer within sandbox \"e1e91b7febdb0dc0cb5bd96a93ee40c4d341305d9e19e3f72106e9eb48ef4afd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"146c353663aec0b73d9221788aa587154fead6b94a9381f053aa0587b63ec92e\"" Sep 4 17:23:29.751370 containerd[2104]: time="2024-09-04T17:23:29.751082106Z" level=info msg="StartContainer for \"146c353663aec0b73d9221788aa587154fead6b94a9381f053aa0587b63ec92e\"" Sep 4 17:23:29.864288 containerd[2104]: 
time="2024-09-04T17:23:29.863288028Z" level=info msg="StartContainer for \"146c353663aec0b73d9221788aa587154fead6b94a9381f053aa0587b63ec92e\" returns successfully" Sep 4 17:23:29.952793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-146c353663aec0b73d9221788aa587154fead6b94a9381f053aa0587b63ec92e-rootfs.mount: Deactivated successfully. Sep 4 17:23:29.985765 containerd[2104]: time="2024-09-04T17:23:29.966630800Z" level=info msg="shim disconnected" id=146c353663aec0b73d9221788aa587154fead6b94a9381f053aa0587b63ec92e namespace=k8s.io Sep 4 17:23:29.987272 containerd[2104]: time="2024-09-04T17:23:29.986037072Z" level=warning msg="cleaning up after shim disconnected" id=146c353663aec0b73d9221788aa587154fead6b94a9381f053aa0587b63ec92e namespace=k8s.io Sep 4 17:23:29.987272 containerd[2104]: time="2024-09-04T17:23:29.986101005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:31.188849 kubelet[3564]: E0904 17:23:31.186035 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plsms" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" Sep 4 17:23:33.091672 containerd[2104]: time="2024-09-04T17:23:33.091618469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:33.093018 containerd[2104]: time="2024-09-04T17:23:33.092969386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:23:33.095392 containerd[2104]: time="2024-09-04T17:23:33.095260293Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:33.100037 containerd[2104]: 
time="2024-09-04T17:23:33.099044814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:33.100037 containerd[2104]: time="2024-09-04T17:23:33.099899047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.381795922s" Sep 4 17:23:33.100037 containerd[2104]: time="2024-09-04T17:23:33.099940009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:23:33.104266 containerd[2104]: time="2024-09-04T17:23:33.104196816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:23:33.128572 containerd[2104]: time="2024-09-04T17:23:33.128027499Z" level=info msg="CreateContainer within sandbox \"ce1aabd755d49266a5a5af2850b4bddafd1fa5567ced92d428ea835c7f7f1cc4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:23:33.145694 containerd[2104]: time="2024-09-04T17:23:33.145632124Z" level=info msg="CreateContainer within sandbox \"ce1aabd755d49266a5a5af2850b4bddafd1fa5567ced92d428ea835c7f7f1cc4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8839d59d0aaab46cb9e9612f87f97848b197c0c207645d2ff99d1ec21bdfcd2b\"" Sep 4 17:23:33.151025 containerd[2104]: time="2024-09-04T17:23:33.150983185Z" level=info msg="StartContainer for \"8839d59d0aaab46cb9e9612f87f97848b197c0c207645d2ff99d1ec21bdfcd2b\"" Sep 4 17:23:33.192178 kubelet[3564]: E0904 17:23:33.190384 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plsms" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" Sep 4 17:23:33.272705 containerd[2104]: time="2024-09-04T17:23:33.272650138Z" level=info msg="StartContainer for \"8839d59d0aaab46cb9e9612f87f97848b197c0c207645d2ff99d1ec21bdfcd2b\" returns successfully" Sep 4 17:23:33.462138 kubelet[3564]: I0904 17:23:33.461510 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-59699d6468-9kd5t" podStartSLOduration=2.486534435 podCreationTimestamp="2024-09-04 17:23:26 +0000 UTC" firstStartedPulling="2024-09-04 17:23:28.125685208 +0000 UTC m=+20.184110034" lastFinishedPulling="2024-09-04 17:23:33.100591589 +0000 UTC m=+25.159016435" observedRunningTime="2024-09-04 17:23:33.461008871 +0000 UTC m=+25.519433718" watchObservedRunningTime="2024-09-04 17:23:33.461440836 +0000 UTC m=+25.519865683" Sep 4 17:23:35.189098 kubelet[3564]: E0904 17:23:35.189044 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plsms" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" Sep 4 17:23:37.185409 kubelet[3564]: E0904 17:23:37.185367 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plsms" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" Sep 4 17:23:39.034195 containerd[2104]: time="2024-09-04T17:23:39.034087658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 4 17:23:39.036225 containerd[2104]: time="2024-09-04T17:23:39.036072429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:23:39.038833 containerd[2104]: time="2024-09-04T17:23:39.037374849Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:39.040365 containerd[2104]: time="2024-09-04T17:23:39.040313771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:39.042289 containerd[2104]: time="2024-09-04T17:23:39.042244552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.937964222s" Sep 4 17:23:39.042385 containerd[2104]: time="2024-09-04T17:23:39.042299502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:23:39.048390 containerd[2104]: time="2024-09-04T17:23:39.048337805Z" level=info msg="CreateContainer within sandbox \"e1e91b7febdb0dc0cb5bd96a93ee40c4d341305d9e19e3f72106e9eb48ef4afd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:23:39.071843 containerd[2104]: time="2024-09-04T17:23:39.071802095Z" level=info msg="CreateContainer within sandbox \"e1e91b7febdb0dc0cb5bd96a93ee40c4d341305d9e19e3f72106e9eb48ef4afd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"9b18a96c60c894cc39796d4d9acde66fa9e2810e936379eb26734fd882e05843\"" Sep 4 17:23:39.073530 containerd[2104]: time="2024-09-04T17:23:39.073494606Z" level=info msg="StartContainer for \"9b18a96c60c894cc39796d4d9acde66fa9e2810e936379eb26734fd882e05843\"" Sep 4 17:23:39.185417 kubelet[3564]: E0904 17:23:39.185298 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-plsms" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" Sep 4 17:23:39.217449 containerd[2104]: time="2024-09-04T17:23:39.217168085Z" level=info msg="StartContainer for \"9b18a96c60c894cc39796d4d9acde66fa9e2810e936379eb26734fd882e05843\" returns successfully" Sep 4 17:23:40.117153 kubelet[3564]: I0904 17:23:40.117066 3564 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:23:40.134383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b18a96c60c894cc39796d4d9acde66fa9e2810e936379eb26734fd882e05843-rootfs.mount: Deactivated successfully. 
Sep 4 17:23:40.205214 containerd[2104]: time="2024-09-04T17:23:40.199320406Z" level=info msg="shim disconnected" id=9b18a96c60c894cc39796d4d9acde66fa9e2810e936379eb26734fd882e05843 namespace=k8s.io Sep 4 17:23:40.205214 containerd[2104]: time="2024-09-04T17:23:40.200376548Z" level=warning msg="cleaning up after shim disconnected" id=9b18a96c60c894cc39796d4d9acde66fa9e2810e936379eb26734fd882e05843 namespace=k8s.io Sep 4 17:23:40.205214 containerd[2104]: time="2024-09-04T17:23:40.200394682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:23:40.251784 kubelet[3564]: I0904 17:23:40.234720 3564 topology_manager.go:215] "Topology Admit Handler" podUID="b219a213-82f1-4b33-8672-d779b1685a8a" podNamespace="kube-system" podName="coredns-5dd5756b68-zl2bl" Sep 4 17:23:40.272101 kubelet[3564]: I0904 17:23:40.272064 3564 topology_manager.go:215] "Topology Admit Handler" podUID="1378bcfe-e321-41f9-bb2f-e3d1489fb204" podNamespace="kube-system" podName="coredns-5dd5756b68-2t9pf" Sep 4 17:23:40.288212 kubelet[3564]: I0904 17:23:40.287098 3564 topology_manager.go:215] "Topology Admit Handler" podUID="c4d606cc-0f01-4476-aeea-7e3289eab77d" podNamespace="calico-system" podName="calico-kube-controllers-6b8878b445-h6vkf" Sep 4 17:23:40.331645 kubelet[3564]: I0904 17:23:40.329700 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmns4\" (UniqueName: \"kubernetes.io/projected/1378bcfe-e321-41f9-bb2f-e3d1489fb204-kube-api-access-cmns4\") pod \"coredns-5dd5756b68-2t9pf\" (UID: \"1378bcfe-e321-41f9-bb2f-e3d1489fb204\") " pod="kube-system/coredns-5dd5756b68-2t9pf" Sep 4 17:23:40.331645 kubelet[3564]: I0904 17:23:40.329789 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b219a213-82f1-4b33-8672-d779b1685a8a-config-volume\") pod \"coredns-5dd5756b68-zl2bl\" (UID: \"b219a213-82f1-4b33-8672-d779b1685a8a\") " 
pod="kube-system/coredns-5dd5756b68-zl2bl" Sep 4 17:23:40.331645 kubelet[3564]: I0904 17:23:40.329838 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcx8h\" (UniqueName: \"kubernetes.io/projected/b219a213-82f1-4b33-8672-d779b1685a8a-kube-api-access-mcx8h\") pod \"coredns-5dd5756b68-zl2bl\" (UID: \"b219a213-82f1-4b33-8672-d779b1685a8a\") " pod="kube-system/coredns-5dd5756b68-zl2bl" Sep 4 17:23:40.331645 kubelet[3564]: I0904 17:23:40.329882 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4d606cc-0f01-4476-aeea-7e3289eab77d-tigera-ca-bundle\") pod \"calico-kube-controllers-6b8878b445-h6vkf\" (UID: \"c4d606cc-0f01-4476-aeea-7e3289eab77d\") " pod="calico-system/calico-kube-controllers-6b8878b445-h6vkf" Sep 4 17:23:40.331645 kubelet[3564]: I0904 17:23:40.330021 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvwlk\" (UniqueName: \"kubernetes.io/projected/c4d606cc-0f01-4476-aeea-7e3289eab77d-kube-api-access-vvwlk\") pod \"calico-kube-controllers-6b8878b445-h6vkf\" (UID: \"c4d606cc-0f01-4476-aeea-7e3289eab77d\") " pod="calico-system/calico-kube-controllers-6b8878b445-h6vkf" Sep 4 17:23:40.332504 kubelet[3564]: I0904 17:23:40.330096 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1378bcfe-e321-41f9-bb2f-e3d1489fb204-config-volume\") pod \"coredns-5dd5756b68-2t9pf\" (UID: \"1378bcfe-e321-41f9-bb2f-e3d1489fb204\") " pod="kube-system/coredns-5dd5756b68-2t9pf" Sep 4 17:23:40.465855 containerd[2104]: time="2024-09-04T17:23:40.458021904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:23:40.620349 containerd[2104]: time="2024-09-04T17:23:40.619876579Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-zl2bl,Uid:b219a213-82f1-4b33-8672-d779b1685a8a,Namespace:kube-system,Attempt:0,}" Sep 4 17:23:40.635391 containerd[2104]: time="2024-09-04T17:23:40.634960353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8878b445-h6vkf,Uid:c4d606cc-0f01-4476-aeea-7e3289eab77d,Namespace:calico-system,Attempt:0,}" Sep 4 17:23:40.637431 containerd[2104]: time="2024-09-04T17:23:40.637174418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2t9pf,Uid:1378bcfe-e321-41f9-bb2f-e3d1489fb204,Namespace:kube-system,Attempt:0,}" Sep 4 17:23:41.136861 containerd[2104]: time="2024-09-04T17:23:41.136623339Z" level=error msg="Failed to destroy network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.150025 containerd[2104]: time="2024-09-04T17:23:41.147686023Z" level=error msg="encountered an error cleaning up failed sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.150043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120-shm.mount: Deactivated successfully. 
Sep 4 17:23:41.188778 containerd[2104]: time="2024-09-04T17:23:41.185822269Z" level=error msg="Failed to destroy network for sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.192430 containerd[2104]: time="2024-09-04T17:23:41.191231440Z" level=error msg="encountered an error cleaning up failed sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.192430 containerd[2104]: time="2024-09-04T17:23:41.191346400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2t9pf,Uid:1378bcfe-e321-41f9-bb2f-e3d1489fb204,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.192430 containerd[2104]: time="2024-09-04T17:23:41.191608184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zl2bl,Uid:b219a213-82f1-4b33-8672-d779b1685a8a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.192891 containerd[2104]: time="2024-09-04T17:23:41.192858984Z" 
level=error msg="Failed to destroy network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.193261 kubelet[3564]: E0904 17:23:41.193232 3564 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.193526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16-shm.mount: Deactivated successfully. Sep 4 17:23:41.195958 kubelet[3564]: E0904 17:23:41.193893 3564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-zl2bl" Sep 4 17:23:41.195958 kubelet[3564]: E0904 17:23:41.193951 3564 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-zl2bl" Sep 4 17:23:41.195958 kubelet[3564]: E0904 17:23:41.194086 3564 
pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-zl2bl_kube-system(b219a213-82f1-4b33-8672-d779b1685a8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-zl2bl_kube-system(b219a213-82f1-4b33-8672-d779b1685a8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-zl2bl" podUID="b219a213-82f1-4b33-8672-d779b1685a8a" Sep 4 17:23:41.196195 containerd[2104]: time="2024-09-04T17:23:41.195285613Z" level=error msg="encountered an error cleaning up failed sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.196195 containerd[2104]: time="2024-09-04T17:23:41.195437984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8878b445-h6vkf,Uid:c4d606cc-0f01-4476-aeea-7e3289eab77d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.197043 kubelet[3564]: E0904 17:23:41.196485 3564 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.197043 kubelet[3564]: E0904 17:23:41.196537 3564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-2t9pf" Sep 4 17:23:41.197043 kubelet[3564]: E0904 17:23:41.196562 3564 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-2t9pf" Sep 4 17:23:41.197301 kubelet[3564]: E0904 17:23:41.196632 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-2t9pf_kube-system(1378bcfe-e321-41f9-bb2f-e3d1489fb204)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-2t9pf_kube-system(1378bcfe-e321-41f9-bb2f-e3d1489fb204)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-2t9pf" 
podUID="1378bcfe-e321-41f9-bb2f-e3d1489fb204" Sep 4 17:23:41.197301 kubelet[3564]: E0904 17:23:41.196872 3564 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.197301 kubelet[3564]: E0904 17:23:41.196925 3564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b8878b445-h6vkf" Sep 4 17:23:41.198802 kubelet[3564]: E0904 17:23:41.196966 3564 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b8878b445-h6vkf" Sep 4 17:23:41.198802 kubelet[3564]: E0904 17:23:41.197016 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b8878b445-h6vkf_calico-system(c4d606cc-0f01-4476-aeea-7e3289eab77d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b8878b445-h6vkf_calico-system(c4d606cc-0f01-4476-aeea-7e3289eab77d)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b8878b445-h6vkf" podUID="c4d606cc-0f01-4476-aeea-7e3289eab77d" Sep 4 17:23:41.202300 containerd[2104]: time="2024-09-04T17:23:41.201014757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plsms,Uid:dbb2c308-b34e-470f-bf61-160922ef3eb4,Namespace:calico-system,Attempt:0,}" Sep 4 17:23:41.209265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0-shm.mount: Deactivated successfully. Sep 4 17:23:41.295018 containerd[2104]: time="2024-09-04T17:23:41.294960149Z" level=error msg="Failed to destroy network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.295861 containerd[2104]: time="2024-09-04T17:23:41.295571888Z" level=error msg="encountered an error cleaning up failed sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.295861 containerd[2104]: time="2024-09-04T17:23:41.295678782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plsms,Uid:dbb2c308-b34e-470f-bf61-160922ef3eb4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.296147 kubelet[3564]: E0904 17:23:41.296043 3564 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.296515 kubelet[3564]: E0904 17:23:41.296170 3564 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plsms" Sep 4 17:23:41.296515 kubelet[3564]: E0904 17:23:41.296202 3564 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-plsms" Sep 4 17:23:41.296729 kubelet[3564]: E0904 17:23:41.296644 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-plsms_calico-system(dbb2c308-b34e-470f-bf61-160922ef3eb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-plsms_calico-system(dbb2c308-b34e-470f-bf61-160922ef3eb4)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-plsms" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" Sep 4 17:23:41.460001 kubelet[3564]: I0904 17:23:41.458531 3564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:23:41.461466 kubelet[3564]: I0904 17:23:41.461433 3564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:23:41.468524 kubelet[3564]: I0904 17:23:41.468489 3564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Sep 4 17:23:41.472107 kubelet[3564]: I0904 17:23:41.472069 3564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Sep 4 17:23:41.523213 containerd[2104]: time="2024-09-04T17:23:41.521105849Z" level=info msg="StopPodSandbox for \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\"" Sep 4 17:23:41.523213 containerd[2104]: time="2024-09-04T17:23:41.521696261Z" level=info msg="Ensure that sandbox 15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb in task-service has been cleanup successfully" Sep 4 17:23:41.524382 containerd[2104]: time="2024-09-04T17:23:41.524340501Z" level=info msg="StopPodSandbox for \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\"" Sep 4 17:23:41.524711 containerd[2104]: time="2024-09-04T17:23:41.524542047Z" level=info msg="StopPodSandbox for 
\"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\"" Sep 4 17:23:41.527263 containerd[2104]: time="2024-09-04T17:23:41.527226306Z" level=info msg="Ensure that sandbox 0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16 in task-service has been cleanup successfully" Sep 4 17:23:41.527732 containerd[2104]: time="2024-09-04T17:23:41.527617126Z" level=info msg="Ensure that sandbox 3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0 in task-service has been cleanup successfully" Sep 4 17:23:41.531516 containerd[2104]: time="2024-09-04T17:23:41.524588093Z" level=info msg="StopPodSandbox for \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\"" Sep 4 17:23:41.531966 containerd[2104]: time="2024-09-04T17:23:41.531933855Z" level=info msg="Ensure that sandbox dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120 in task-service has been cleanup successfully" Sep 4 17:23:41.681170 containerd[2104]: time="2024-09-04T17:23:41.681111421Z" level=error msg="StopPodSandbox for \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\" failed" error="failed to destroy network for sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.681879 kubelet[3564]: E0904 17:23:41.681652 3564 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Sep 4 17:23:41.694184 kubelet[3564]: E0904 
17:23:41.693906 3564 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16"} Sep 4 17:23:41.694184 kubelet[3564]: E0904 17:23:41.694092 3564 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1378bcfe-e321-41f9-bb2f-e3d1489fb204\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:23:41.694184 kubelet[3564]: E0904 17:23:41.694147 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1378bcfe-e321-41f9-bb2f-e3d1489fb204\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-2t9pf" podUID="1378bcfe-e321-41f9-bb2f-e3d1489fb204" Sep 4 17:23:41.705705 containerd[2104]: time="2024-09-04T17:23:41.705235510Z" level=error msg="StopPodSandbox for \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\" failed" error="failed to destroy network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.706310 kubelet[3564]: E0904 17:23:41.706213 3564 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:23:41.706310 kubelet[3564]: E0904 17:23:41.706262 3564 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0"} Sep 4 17:23:41.706310 kubelet[3564]: E0904 17:23:41.706310 3564 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4d606cc-0f01-4476-aeea-7e3289eab77d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:23:41.706803 kubelet[3564]: E0904 17:23:41.706353 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4d606cc-0f01-4476-aeea-7e3289eab77d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b8878b445-h6vkf" podUID="c4d606cc-0f01-4476-aeea-7e3289eab77d" Sep 4 17:23:41.716545 containerd[2104]: time="2024-09-04T17:23:41.716393429Z" level=error msg="StopPodSandbox for 
\"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\" failed" error="failed to destroy network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.716892 kubelet[3564]: E0904 17:23:41.716700 3564 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:23:41.716892 kubelet[3564]: E0904 17:23:41.716750 3564 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb"} Sep 4 17:23:41.716892 kubelet[3564]: E0904 17:23:41.716820 3564 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbb2c308-b34e-470f-bf61-160922ef3eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:23:41.716892 kubelet[3564]: E0904 17:23:41.716862 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbb2c308-b34e-470f-bf61-160922ef3eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-plsms" podUID="dbb2c308-b34e-470f-bf61-160922ef3eb4" Sep 4 17:23:41.720646 containerd[2104]: time="2024-09-04T17:23:41.720591663Z" level=error msg="StopPodSandbox for \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\" failed" error="failed to destroy network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:23:41.721054 kubelet[3564]: E0904 17:23:41.721028 3564 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Sep 4 17:23:41.721189 kubelet[3564]: E0904 17:23:41.721081 3564 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120"} Sep 4 17:23:41.721189 kubelet[3564]: E0904 17:23:41.721128 3564 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b219a213-82f1-4b33-8672-d779b1685a8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:23:41.721189 kubelet[3564]: E0904 17:23:41.721168 3564 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b219a213-82f1-4b33-8672-d779b1685a8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-zl2bl" podUID="b219a213-82f1-4b33-8672-d779b1685a8a" Sep 4 17:23:41.854873 systemd-journald[1575]: Under memory pressure, flushing caches. Sep 4 17:23:41.853912 systemd-resolved[1990]: Under memory pressure, flushing caches. Sep 4 17:23:41.853990 systemd-resolved[1990]: Flushed all caches. Sep 4 17:23:42.121496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb-shm.mount: Deactivated successfully. Sep 4 17:23:47.806216 systemd-resolved[1990]: Under memory pressure, flushing caches. Sep 4 17:23:47.808434 systemd-journald[1575]: Under memory pressure, flushing caches. Sep 4 17:23:47.806249 systemd-resolved[1990]: Flushed all caches. Sep 4 17:23:48.913140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount163005211.mount: Deactivated successfully. 
Sep 4 17:23:49.037491 containerd[2104]: time="2024-09-04T17:23:49.018962536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:23:49.042833 containerd[2104]: time="2024-09-04T17:23:49.042704320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:49.093977 containerd[2104]: time="2024-09-04T17:23:49.093072431Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:49.095815 containerd[2104]: time="2024-09-04T17:23:49.094870909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:49.099015 containerd[2104]: time="2024-09-04T17:23:49.098963822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 8.63578262s" Sep 4 17:23:49.099133 containerd[2104]: time="2024-09-04T17:23:49.099021272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:23:49.173892 containerd[2104]: time="2024-09-04T17:23:49.173316207Z" level=info msg="CreateContainer within sandbox \"e1e91b7febdb0dc0cb5bd96a93ee40c4d341305d9e19e3f72106e9eb48ef4afd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:23:49.238372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662713684.mount: Deactivated 
successfully. Sep 4 17:23:49.277986 containerd[2104]: time="2024-09-04T17:23:49.277930996Z" level=info msg="CreateContainer within sandbox \"e1e91b7febdb0dc0cb5bd96a93ee40c4d341305d9e19e3f72106e9eb48ef4afd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3d3191ee7a607052c9592b5a094ef50dbeebd51c0756a9506e1d582629f34f0e\"" Sep 4 17:23:49.279219 containerd[2104]: time="2024-09-04T17:23:49.278991431Z" level=info msg="StartContainer for \"3d3191ee7a607052c9592b5a094ef50dbeebd51c0756a9506e1d582629f34f0e\"" Sep 4 17:23:49.488697 containerd[2104]: time="2024-09-04T17:23:49.488273185Z" level=info msg="StartContainer for \"3d3191ee7a607052c9592b5a094ef50dbeebd51c0756a9506e1d582629f34f0e\" returns successfully" Sep 4 17:23:49.611364 kubelet[3564]: I0904 17:23:49.611322 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-rgvh8" podStartSLOduration=1.329625947 podCreationTimestamp="2024-09-04 17:23:27 +0000 UTC" firstStartedPulling="2024-09-04 17:23:27.864510589 +0000 UTC m=+19.922935419" lastFinishedPulling="2024-09-04 17:23:49.09932077 +0000 UTC m=+41.157745600" observedRunningTime="2024-09-04 17:23:49.558375867 +0000 UTC m=+41.616800714" watchObservedRunningTime="2024-09-04 17:23:49.564436128 +0000 UTC m=+41.622860995" Sep 4 17:23:49.858978 systemd-journald[1575]: Under memory pressure, flushing caches. Sep 4 17:23:49.855360 systemd-resolved[1990]: Under memory pressure, flushing caches. Sep 4 17:23:49.855382 systemd-resolved[1990]: Flushed all caches. Sep 4 17:23:49.913592 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:23:49.917051 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 17:23:52.460784 kernel: bpftool[4755]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:23:52.803035 (udev-worker)[4559]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:23:52.810554 systemd-networkd[1663]: vxlan.calico: Link UP Sep 4 17:23:52.810565 systemd-networkd[1663]: vxlan.calico: Gained carrier Sep 4 17:23:52.853094 (udev-worker)[4565]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:23:53.189195 containerd[2104]: time="2024-09-04T17:23:53.189059014Z" level=info msg="StopPodSandbox for \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\"" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.362 [INFO][4819] k8s.go 608: Cleaning up netns ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.364 [INFO][4819] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" iface="eth0" netns="/var/run/netns/cni-41ce7547-5372-1858-0eb2-296eb1ec59df" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.364 [INFO][4819] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" iface="eth0" netns="/var/run/netns/cni-41ce7547-5372-1858-0eb2-296eb1ec59df" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.365 [INFO][4819] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" iface="eth0" netns="/var/run/netns/cni-41ce7547-5372-1858-0eb2-296eb1ec59df" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.365 [INFO][4819] k8s.go 615: Releasing IP address(es) ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.365 [INFO][4819] utils.go 188: Calico CNI releasing IP address ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.753 [INFO][4840] ipam_plugin.go 417: Releasing address using handleID ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" HandleID="k8s-pod-network.dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.756 [INFO][4840] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.758 [INFO][4840] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.777 [WARNING][4840] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" HandleID="k8s-pod-network.dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.777 [INFO][4840] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" HandleID="k8s-pod-network.dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.779 [INFO][4840] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:23:53.784735 containerd[2104]: 2024-09-04 17:23:53.781 [INFO][4819] k8s.go 621: Teardown processing complete. ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Sep 4 17:23:53.785512 containerd[2104]: time="2024-09-04T17:23:53.784991620Z" level=info msg="TearDown network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\" successfully" Sep 4 17:23:53.785512 containerd[2104]: time="2024-09-04T17:23:53.785066557Z" level=info msg="StopPodSandbox for \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\" returns successfully" Sep 4 17:23:53.787806 containerd[2104]: time="2024-09-04T17:23:53.786096256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zl2bl,Uid:b219a213-82f1-4b33-8672-d779b1685a8a,Namespace:kube-system,Attempt:1,}" Sep 4 17:23:53.796019 systemd[1]: run-netns-cni\x2d41ce7547\x2d5372\x2d1858\x2d0eb2\x2d296eb1ec59df.mount: Deactivated successfully. 
Sep 4 17:23:54.050118 systemd-networkd[1663]: cali048a52b4339: Link UP Sep 4 17:23:54.050447 systemd-networkd[1663]: cali048a52b4339: Gained carrier Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:53.920 [INFO][4848] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0 coredns-5dd5756b68- kube-system b219a213-82f1-4b33-8672-d779b1685a8a 679 0 2024-09-04 17:23:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-203 coredns-5dd5756b68-zl2bl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali048a52b4339 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Namespace="kube-system" Pod="coredns-5dd5756b68-zl2bl" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:53.922 [INFO][4848] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Namespace="kube-system" Pod="coredns-5dd5756b68-zl2bl" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:53.976 [INFO][4859] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" HandleID="k8s-pod-network.4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:53.994 [INFO][4859] ipam_plugin.go 270: Auto assigning IP ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" 
HandleID="k8s-pod-network.4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318770), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-203", "pod":"coredns-5dd5756b68-zl2bl", "timestamp":"2024-09-04 17:23:53.97622612 +0000 UTC"}, Hostname:"ip-172-31-27-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:53.994 [INFO][4859] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:53.994 [INFO][4859] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:53.995 [INFO][4859] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-203' Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:53.997 [INFO][4859] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" host="ip-172-31-27-203" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.014 [INFO][4859] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-203" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.021 [INFO][4859] ipam.go 489: Trying affinity for 192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.024 [INFO][4859] ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.026 [INFO][4859] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 
17:23:54.026 [INFO][4859] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" host="ip-172-31-27-203" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.028 [INFO][4859] ipam.go 1685: Creating new handle: k8s-pod-network.4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1 Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.034 [INFO][4859] ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" host="ip-172-31-27-203" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.039 [INFO][4859] ipam.go 1216: Successfully claimed IPs: [192.168.92.193/26] block=192.168.92.192/26 handle="k8s-pod-network.4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" host="ip-172-31-27-203" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.039 [INFO][4859] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.193/26] handle="k8s-pod-network.4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" host="ip-172-31-27-203" Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.039 [INFO][4859] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:23:54.076703 containerd[2104]: 2024-09-04 17:23:54.039 [INFO][4859] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.92.193/26] IPv6=[] ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" HandleID="k8s-pod-network.4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:54.078659 containerd[2104]: 2024-09-04 17:23:54.043 [INFO][4848] k8s.go 386: Populated endpoint ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Namespace="kube-system" Pod="coredns-5dd5756b68-zl2bl" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b219a213-82f1-4b33-8672-d779b1685a8a", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"", Pod:"coredns-5dd5756b68-zl2bl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali048a52b4339", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:23:54.078659 containerd[2104]: 2024-09-04 17:23:54.044 [INFO][4848] k8s.go 387: Calico CNI using IPs: [192.168.92.193/32] ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Namespace="kube-system" Pod="coredns-5dd5756b68-zl2bl" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:54.078659 containerd[2104]: 2024-09-04 17:23:54.044 [INFO][4848] dataplane_linux.go 68: Setting the host side veth name to cali048a52b4339 ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Namespace="kube-system" Pod="coredns-5dd5756b68-zl2bl" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:54.078659 containerd[2104]: 2024-09-04 17:23:54.047 [INFO][4848] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Namespace="kube-system" Pod="coredns-5dd5756b68-zl2bl" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:54.078659 containerd[2104]: 2024-09-04 17:23:54.047 [INFO][4848] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Namespace="kube-system" Pod="coredns-5dd5756b68-zl2bl" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b219a213-82f1-4b33-8672-d779b1685a8a", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1", Pod:"coredns-5dd5756b68-zl2bl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali048a52b4339", MAC:"e2:8b:d9:14:69:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:23:54.078659 containerd[2104]: 2024-09-04 17:23:54.070 [INFO][4848] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1" Namespace="kube-system" 
Pod="coredns-5dd5756b68-zl2bl" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0" Sep 4 17:23:54.152587 containerd[2104]: time="2024-09-04T17:23:54.152268827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:54.152587 containerd[2104]: time="2024-09-04T17:23:54.152334267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:54.152587 containerd[2104]: time="2024-09-04T17:23:54.152357334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:54.152587 containerd[2104]: time="2024-09-04T17:23:54.152373907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:54.206745 containerd[2104]: time="2024-09-04T17:23:54.199816230Z" level=info msg="StopPodSandbox for \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\"" Sep 4 17:23:54.297155 containerd[2104]: time="2024-09-04T17:23:54.297095396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zl2bl,Uid:b219a213-82f1-4b33-8672-d779b1685a8a,Namespace:kube-system,Attempt:1,} returns sandbox id \"4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1\"" Sep 4 17:23:54.310823 containerd[2104]: time="2024-09-04T17:23:54.309179529Z" level=info msg="CreateContainer within sandbox \"4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.328 [INFO][4926] k8s.go 608: Cleaning up netns ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.329 [INFO][4926] dataplane_linux.go 530: Deleting 
workload's device in netns. ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" iface="eth0" netns="/var/run/netns/cni-f0689cb3-1d8b-f2f7-7062-1d0b659c9a22" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.330 [INFO][4926] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" iface="eth0" netns="/var/run/netns/cni-f0689cb3-1d8b-f2f7-7062-1d0b659c9a22" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.330 [INFO][4926] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" iface="eth0" netns="/var/run/netns/cni-f0689cb3-1d8b-f2f7-7062-1d0b659c9a22" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.330 [INFO][4926] k8s.go 615: Releasing IP address(es) ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.330 [INFO][4926] utils.go 188: Calico CNI releasing IP address ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.366 [INFO][4940] ipam_plugin.go 417: Releasing address using handleID ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" HandleID="k8s-pod-network.3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.367 [INFO][4940] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.367 [INFO][4940] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.375 [WARNING][4940] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" HandleID="k8s-pod-network.3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.375 [INFO][4940] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" HandleID="k8s-pod-network.3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.377 [INFO][4940] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:23:54.382558 containerd[2104]: 2024-09-04 17:23:54.379 [INFO][4926] k8s.go 621: Teardown processing complete. ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:23:54.384312 containerd[2104]: time="2024-09-04T17:23:54.383582626Z" level=info msg="TearDown network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\" successfully" Sep 4 17:23:54.384312 containerd[2104]: time="2024-09-04T17:23:54.383613793Z" level=info msg="StopPodSandbox for \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\" returns successfully" Sep 4 17:23:54.385515 containerd[2104]: time="2024-09-04T17:23:54.385084741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8878b445-h6vkf,Uid:c4d606cc-0f01-4476-aeea-7e3289eab77d,Namespace:calico-system,Attempt:1,}" Sep 4 17:23:54.402325 systemd-networkd[1663]: vxlan.calico: Gained IPv6LL Sep 4 17:23:54.409030 containerd[2104]: time="2024-09-04T17:23:54.408991012Z" level=info msg="CreateContainer within sandbox \"4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"2cfd8c736479583e21f65bc01bfa0b8295c791cf68d1f6692f582ecccbf69683\"" Sep 4 17:23:54.411046 containerd[2104]: time="2024-09-04T17:23:54.410040261Z" level=info msg="StartContainer for \"2cfd8c736479583e21f65bc01bfa0b8295c791cf68d1f6692f582ecccbf69683\"" Sep 4 17:23:54.545519 containerd[2104]: time="2024-09-04T17:23:54.545320329Z" level=info msg="StartContainer for \"2cfd8c736479583e21f65bc01bfa0b8295c791cf68d1f6692f582ecccbf69683\" returns successfully" Sep 4 17:23:54.820040 systemd[1]: run-netns-cni\x2df0689cb3\x2d1d8b\x2df2f7\x2d7062\x2d1d0b659c9a22.mount: Deactivated successfully. Sep 4 17:23:54.822180 systemd-networkd[1663]: cali81abff7150f: Link UP Sep 4 17:23:54.822628 systemd-networkd[1663]: cali81abff7150f: Gained carrier Sep 4 17:23:54.864742 kubelet[3564]: I0904 17:23:54.864699 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zl2bl" podStartSLOduration=34.864639027 podCreationTimestamp="2024-09-04 17:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:23:54.624995691 +0000 UTC m=+46.683420538" watchObservedRunningTime="2024-09-04 17:23:54.864639027 +0000 UTC m=+46.923063873" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.545 [INFO][4953] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0 calico-kube-controllers-6b8878b445- calico-system c4d606cc-0f01-4476-aeea-7e3289eab77d 687 0 2024-09-04 17:23:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b8878b445 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-27-203 calico-kube-controllers-6b8878b445-h6vkf eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] cali81abff7150f [] []}} ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Namespace="calico-system" Pod="calico-kube-controllers-6b8878b445-h6vkf" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.546 [INFO][4953] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Namespace="calico-system" Pod="calico-kube-controllers-6b8878b445-h6vkf" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.646 [INFO][4990] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" HandleID="k8s-pod-network.1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.683 [INFO][4990] ipam_plugin.go 270: Auto assigning IP ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" HandleID="k8s-pod-network.1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285e30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-203", "pod":"calico-kube-controllers-6b8878b445-h6vkf", "timestamp":"2024-09-04 17:23:54.646499168 +0000 UTC"}, Hostname:"ip-172-31-27-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:23:54.874784 
containerd[2104]: 2024-09-04 17:23:54.683 [INFO][4990] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.683 [INFO][4990] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.683 [INFO][4990] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-203' Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.699 [INFO][4990] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" host="ip-172-31-27-203" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.707 [INFO][4990] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-203" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.748 [INFO][4990] ipam.go 489: Trying affinity for 192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.752 [INFO][4990] ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.758 [INFO][4990] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.759 [INFO][4990] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" host="ip-172-31-27-203" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.763 [INFO][4990] ipam.go 1685: Creating new handle: k8s-pod-network.1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.785 [INFO][4990] ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" 
host="ip-172-31-27-203" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.798 [INFO][4990] ipam.go 1216: Successfully claimed IPs: [192.168.92.194/26] block=192.168.92.192/26 handle="k8s-pod-network.1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" host="ip-172-31-27-203" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.798 [INFO][4990] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.194/26] handle="k8s-pod-network.1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" host="ip-172-31-27-203" Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.798 [INFO][4990] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:23:54.874784 containerd[2104]: 2024-09-04 17:23:54.799 [INFO][4990] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.92.194/26] IPv6=[] ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" HandleID="k8s-pod-network.1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.882964 containerd[2104]: 2024-09-04 17:23:54.804 [INFO][4953] k8s.go 386: Populated endpoint ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Namespace="calico-system" Pod="calico-kube-controllers-6b8878b445-h6vkf" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0", GenerateName:"calico-kube-controllers-6b8878b445-", Namespace:"calico-system", SelfLink:"", UID:"c4d606cc-0f01-4476-aeea-7e3289eab77d", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8878b445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"", Pod:"calico-kube-controllers-6b8878b445-h6vkf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81abff7150f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:23:54.882964 containerd[2104]: 2024-09-04 17:23:54.805 [INFO][4953] k8s.go 387: Calico CNI using IPs: [192.168.92.194/32] ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Namespace="calico-system" Pod="calico-kube-controllers-6b8878b445-h6vkf" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.882964 containerd[2104]: 2024-09-04 17:23:54.806 [INFO][4953] dataplane_linux.go 68: Setting the host side veth name to cali81abff7150f ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Namespace="calico-system" Pod="calico-kube-controllers-6b8878b445-h6vkf" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.882964 containerd[2104]: 2024-09-04 17:23:54.815 [INFO][4953] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Namespace="calico-system" 
Pod="calico-kube-controllers-6b8878b445-h6vkf" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.882964 containerd[2104]: 2024-09-04 17:23:54.819 [INFO][4953] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Namespace="calico-system" Pod="calico-kube-controllers-6b8878b445-h6vkf" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0", GenerateName:"calico-kube-controllers-6b8878b445-", Namespace:"calico-system", SelfLink:"", UID:"c4d606cc-0f01-4476-aeea-7e3289eab77d", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8878b445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad", Pod:"calico-kube-controllers-6b8878b445-h6vkf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali81abff7150f", MAC:"fa:de:ef:4e:c7:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:23:54.882964 containerd[2104]: 2024-09-04 17:23:54.868 [INFO][4953] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad" Namespace="calico-system" Pod="calico-kube-controllers-6b8878b445-h6vkf" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:23:54.970710 containerd[2104]: time="2024-09-04T17:23:54.969141801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:54.970710 containerd[2104]: time="2024-09-04T17:23:54.969219180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:54.970710 containerd[2104]: time="2024-09-04T17:23:54.969254914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:54.970710 containerd[2104]: time="2024-09-04T17:23:54.969277708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:55.099323 systemd[1]: run-containerd-runc-k8s.io-1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad-runc.SWeXIY.mount: Deactivated successfully. 
Sep 4 17:23:55.160011 containerd[2104]: time="2024-09-04T17:23:55.158643755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b8878b445-h6vkf,Uid:c4d606cc-0f01-4476-aeea-7e3289eab77d,Namespace:calico-system,Attempt:1,} returns sandbox id \"1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad\"" Sep 4 17:23:55.162470 containerd[2104]: time="2024-09-04T17:23:55.162202260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:23:55.871177 systemd-journald[1575]: Under memory pressure, flushing caches. Sep 4 17:23:55.870937 systemd-resolved[1990]: Under memory pressure, flushing caches. Sep 4 17:23:55.870962 systemd-resolved[1990]: Flushed all caches. Sep 4 17:23:55.871707 systemd-networkd[1663]: cali048a52b4339: Gained IPv6LL Sep 4 17:23:56.190379 containerd[2104]: time="2024-09-04T17:23:56.190106290Z" level=info msg="StopPodSandbox for \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\"" Sep 4 17:23:56.205037 containerd[2104]: time="2024-09-04T17:23:56.195379959Z" level=info msg="StopPodSandbox for \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\"" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.370 [INFO][5094] k8s.go 608: Cleaning up netns ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.371 [INFO][5094] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" iface="eth0" netns="/var/run/netns/cni-2a125477-39c0-43a0-692e-3cee0528adac" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.371 [INFO][5094] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" iface="eth0" netns="/var/run/netns/cni-2a125477-39c0-43a0-692e-3cee0528adac" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.371 [INFO][5094] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" iface="eth0" netns="/var/run/netns/cni-2a125477-39c0-43a0-692e-3cee0528adac" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.371 [INFO][5094] k8s.go 615: Releasing IP address(es) ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.371 [INFO][5094] utils.go 188: Calico CNI releasing IP address ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.428 [INFO][5105] ipam_plugin.go 417: Releasing address using handleID ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" HandleID="k8s-pod-network.0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.428 [INFO][5105] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.428 [INFO][5105] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.441 [WARNING][5105] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" HandleID="k8s-pod-network.0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.442 [INFO][5105] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" HandleID="k8s-pod-network.0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.453 [INFO][5105] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:23:56.533831 containerd[2104]: 2024-09-04 17:23:56.508 [INFO][5094] k8s.go 621: Teardown processing complete. ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Sep 4 17:23:56.547056 containerd[2104]: time="2024-09-04T17:23:56.546891963Z" level=info msg="TearDown network for sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\" successfully" Sep 4 17:23:56.547056 containerd[2104]: time="2024-09-04T17:23:56.546937040Z" level=info msg="StopPodSandbox for \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\" returns successfully" Sep 4 17:23:56.553560 containerd[2104]: time="2024-09-04T17:23:56.553116614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2t9pf,Uid:1378bcfe-e321-41f9-bb2f-e3d1489fb204,Namespace:kube-system,Attempt:1,}" Sep 4 17:23:56.557000 systemd[1]: run-netns-cni\x2d2a125477\x2d39c0\x2d43a0\x2d692e\x2d3cee0528adac.mount: Deactivated successfully. Sep 4 17:23:56.707321 systemd[1]: Started sshd@7-172.31.27.203:22-139.178.68.195:50496.service - OpenSSH per-connection server daemon (139.178.68.195:50496). 
Sep 4 17:23:56.787436 systemd-networkd[1663]: cali81abff7150f: Gained IPv6LL Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.398 [INFO][5093] k8s.go 608: Cleaning up netns ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.398 [INFO][5093] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" iface="eth0" netns="/var/run/netns/cni-2be9fb3f-3a50-a208-e33d-fc9bea61d06e" Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.399 [INFO][5093] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" iface="eth0" netns="/var/run/netns/cni-2be9fb3f-3a50-a208-e33d-fc9bea61d06e" Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.399 [INFO][5093] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" iface="eth0" netns="/var/run/netns/cni-2be9fb3f-3a50-a208-e33d-fc9bea61d06e" Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.400 [INFO][5093] k8s.go 615: Releasing IP address(es) ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.400 [INFO][5093] utils.go 188: Calico CNI releasing IP address ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.592 [INFO][5110] ipam_plugin.go 417: Releasing address using handleID ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" HandleID="k8s-pod-network.15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.596 [INFO][5110] 
ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.598 [INFO][5110] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.746 [WARNING][5110] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" HandleID="k8s-pod-network.15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.747 [INFO][5110] ipam_plugin.go 445: Releasing address using workloadID ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" HandleID="k8s-pod-network.15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.755 [INFO][5110] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:23:56.811310 containerd[2104]: 2024-09-04 17:23:56.787 [INFO][5093] k8s.go 621: Teardown processing complete. ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:23:56.821150 containerd[2104]: time="2024-09-04T17:23:56.820974169Z" level=info msg="TearDown network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\" successfully" Sep 4 17:23:56.821150 containerd[2104]: time="2024-09-04T17:23:56.821031395Z" level=info msg="StopPodSandbox for \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\" returns successfully" Sep 4 17:23:56.823291 systemd[1]: run-netns-cni\x2d2be9fb3f\x2d3a50\x2da208\x2de33d\x2dfc9bea61d06e.mount: Deactivated successfully. 
Sep 4 17:23:56.827903 containerd[2104]: time="2024-09-04T17:23:56.823707328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plsms,Uid:dbb2c308-b34e-470f-bf61-160922ef3eb4,Namespace:calico-system,Attempt:1,}" Sep 4 17:23:57.006778 sshd[5117]: Accepted publickey for core from 139.178.68.195 port 50496 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:23:57.012291 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:23:57.049831 systemd-logind[2069]: New session 8 of user core. Sep 4 17:23:57.059799 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:23:57.370319 systemd-networkd[1663]: cali50529255abf: Link UP Sep 4 17:23:57.372618 systemd-networkd[1663]: cali50529255abf: Gained carrier Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.068 [INFO][5123] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0 coredns-5dd5756b68- kube-system 1378bcfe-e321-41f9-bb2f-e3d1489fb204 713 0 2024-09-04 17:23:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-203 coredns-5dd5756b68-2t9pf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali50529255abf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Namespace="kube-system" Pod="coredns-5dd5756b68-2t9pf" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.069 [INFO][5123] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Namespace="kube-system" Pod="coredns-5dd5756b68-2t9pf" 
WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.241 [INFO][5150] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" HandleID="k8s-pod-network.14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.265 [INFO][5150] ipam_plugin.go 270: Auto assigning IP ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" HandleID="k8s-pod-network.14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000346cc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-203", "pod":"coredns-5dd5756b68-2t9pf", "timestamp":"2024-09-04 17:23:57.241174048 +0000 UTC"}, Hostname:"ip-172-31-27-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.265 [INFO][5150] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.267 [INFO][5150] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.267 [INFO][5150] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-203' Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.271 [INFO][5150] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" host="ip-172-31-27-203" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.286 [INFO][5150] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-203" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.308 [INFO][5150] ipam.go 489: Trying affinity for 192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.314 [INFO][5150] ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.321 [INFO][5150] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.321 [INFO][5150] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" host="ip-172-31-27-203" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.325 [INFO][5150] ipam.go 1685: Creating new handle: k8s-pod-network.14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806 Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.330 [INFO][5150] ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" host="ip-172-31-27-203" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.352 [INFO][5150] ipam.go 1216: Successfully claimed IPs: [192.168.92.195/26] block=192.168.92.192/26 
handle="k8s-pod-network.14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" host="ip-172-31-27-203" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.352 [INFO][5150] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.195/26] handle="k8s-pod-network.14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" host="ip-172-31-27-203" Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.352 [INFO][5150] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:23:57.423907 containerd[2104]: 2024-09-04 17:23:57.352 [INFO][5150] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.92.195/26] IPv6=[] ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" HandleID="k8s-pod-network.14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:57.429441 containerd[2104]: 2024-09-04 17:23:57.362 [INFO][5123] k8s.go 386: Populated endpoint ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Namespace="kube-system" Pod="coredns-5dd5756b68-2t9pf" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1378bcfe-e321-41f9-bb2f-e3d1489fb204", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"", Pod:"coredns-5dd5756b68-2t9pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50529255abf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:23:57.429441 containerd[2104]: 2024-09-04 17:23:57.362 [INFO][5123] k8s.go 387: Calico CNI using IPs: [192.168.92.195/32] ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Namespace="kube-system" Pod="coredns-5dd5756b68-2t9pf" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:57.429441 containerd[2104]: 2024-09-04 17:23:57.362 [INFO][5123] dataplane_linux.go 68: Setting the host side veth name to cali50529255abf ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Namespace="kube-system" Pod="coredns-5dd5756b68-2t9pf" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:57.429441 containerd[2104]: 2024-09-04 17:23:57.367 [INFO][5123] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Namespace="kube-system" Pod="coredns-5dd5756b68-2t9pf" 
WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:57.429441 containerd[2104]: 2024-09-04 17:23:57.368 [INFO][5123] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Namespace="kube-system" Pod="coredns-5dd5756b68-2t9pf" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1378bcfe-e321-41f9-bb2f-e3d1489fb204", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806", Pod:"coredns-5dd5756b68-2t9pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50529255abf", MAC:"92:6b:54:59:06:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:23:57.429441 containerd[2104]: 2024-09-04 17:23:57.402 [INFO][5123] k8s.go 500: Wrote updated endpoint to datastore ContainerID="14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806" Namespace="kube-system" Pod="coredns-5dd5756b68-2t9pf" WorkloadEndpoint="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0" Sep 4 17:23:57.513885 systemd-networkd[1663]: cali96318f71c45: Link UP Sep 4 17:23:57.517211 systemd-networkd[1663]: cali96318f71c45: Gained carrier Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.068 [INFO][5138] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0 csi-node-driver- calico-system dbb2c308-b34e-470f-bf61-160922ef3eb4 714 0 2024-09-04 17:23:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-27-203 csi-node-driver-plsms eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali96318f71c45 [] []}} ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Namespace="calico-system" Pod="csi-node-driver-plsms" WorkloadEndpoint="ip--172--31--27--203-k8s-csi--node--driver--plsms-" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.085 [INFO][5138] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Namespace="calico-system" Pod="csi-node-driver-plsms" 
WorkloadEndpoint="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.286 [INFO][5151] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" HandleID="k8s-pod-network.ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.310 [INFO][5151] ipam_plugin.go 270: Auto assigning IP ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" HandleID="k8s-pod-network.ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000352f50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-203", "pod":"csi-node-driver-plsms", "timestamp":"2024-09-04 17:23:57.286871723 +0000 UTC"}, Hostname:"ip-172-31-27-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.310 [INFO][5151] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.353 [INFO][5151] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.355 [INFO][5151] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-203' Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.360 [INFO][5151] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" host="ip-172-31-27-203" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.421 [INFO][5151] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-203" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.439 [INFO][5151] ipam.go 489: Trying affinity for 192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.447 [INFO][5151] ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.452 [INFO][5151] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ip-172-31-27-203" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.452 [INFO][5151] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" host="ip-172-31-27-203" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.456 [INFO][5151] ipam.go 1685: Creating new handle: k8s-pod-network.ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7 Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.472 [INFO][5151] ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" host="ip-172-31-27-203" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.486 [INFO][5151] ipam.go 1216: Successfully claimed IPs: [192.168.92.196/26] block=192.168.92.192/26 
handle="k8s-pod-network.ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" host="ip-172-31-27-203" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.489 [INFO][5151] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.196/26] handle="k8s-pod-network.ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" host="ip-172-31-27-203" Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.490 [INFO][5151] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:23:57.590006 containerd[2104]: 2024-09-04 17:23:57.490 [INFO][5151] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.92.196/26] IPv6=[] ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" HandleID="k8s-pod-network.ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:57.595205 containerd[2104]: 2024-09-04 17:23:57.505 [INFO][5138] k8s.go 386: Populated endpoint ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Namespace="calico-system" Pod="csi-node-driver-plsms" WorkloadEndpoint="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbb2c308-b34e-470f-bf61-160922ef3eb4", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"", Pod:"csi-node-driver-plsms", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali96318f71c45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:23:57.595205 containerd[2104]: 2024-09-04 17:23:57.506 [INFO][5138] k8s.go 387: Calico CNI using IPs: [192.168.92.196/32] ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Namespace="calico-system" Pod="csi-node-driver-plsms" WorkloadEndpoint="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:57.595205 containerd[2104]: 2024-09-04 17:23:57.506 [INFO][5138] dataplane_linux.go 68: Setting the host side veth name to cali96318f71c45 ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Namespace="calico-system" Pod="csi-node-driver-plsms" WorkloadEndpoint="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:57.595205 containerd[2104]: 2024-09-04 17:23:57.516 [INFO][5138] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Namespace="calico-system" Pod="csi-node-driver-plsms" WorkloadEndpoint="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:57.595205 containerd[2104]: 2024-09-04 17:23:57.539 [INFO][5138] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Namespace="calico-system" Pod="csi-node-driver-plsms" 
WorkloadEndpoint="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbb2c308-b34e-470f-bf61-160922ef3eb4", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7", Pod:"csi-node-driver-plsms", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali96318f71c45", MAC:"de:a4:6c:e1:53:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:23:57.595205 containerd[2104]: 2024-09-04 17:23:57.570 [INFO][5138] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7" Namespace="calico-system" Pod="csi-node-driver-plsms" WorkloadEndpoint="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:23:57.710717 containerd[2104]: 
time="2024-09-04T17:23:57.700311683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:57.710717 containerd[2104]: time="2024-09-04T17:23:57.700381515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:57.710717 containerd[2104]: time="2024-09-04T17:23:57.700410313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:57.710717 containerd[2104]: time="2024-09-04T17:23:57.700432700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:57.853996 systemd[1]: run-containerd-runc-k8s.io-14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806-runc.fVHnNN.mount: Deactivated successfully. Sep 4 17:23:57.892821 containerd[2104]: time="2024-09-04T17:23:57.892248442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:23:57.892821 containerd[2104]: time="2024-09-04T17:23:57.892322542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:57.892821 containerd[2104]: time="2024-09-04T17:23:57.892359076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:23:57.892821 containerd[2104]: time="2024-09-04T17:23:57.892498971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:23:57.919813 systemd-resolved[1990]: Under memory pressure, flushing caches. Sep 4 17:23:57.924420 systemd-journald[1575]: Under memory pressure, flushing caches. 
Sep 4 17:23:57.919847 systemd-resolved[1990]: Flushed all caches. Sep 4 17:23:58.101470 containerd[2104]: time="2024-09-04T17:23:58.101422981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2t9pf,Uid:1378bcfe-e321-41f9-bb2f-e3d1489fb204,Namespace:kube-system,Attempt:1,} returns sandbox id \"14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806\"" Sep 4 17:23:58.108990 containerd[2104]: time="2024-09-04T17:23:58.108806433Z" level=info msg="CreateContainer within sandbox \"14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:23:58.131143 containerd[2104]: time="2024-09-04T17:23:58.130326708Z" level=info msg="CreateContainer within sandbox \"14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2aff4299c8a12a2ad8a2a4652047e056a3137b96924bddbbcaff462f62f6c9b\"" Sep 4 17:23:58.136480 containerd[2104]: time="2024-09-04T17:23:58.133169384Z" level=info msg="StartContainer for \"b2aff4299c8a12a2ad8a2a4652047e056a3137b96924bddbbcaff462f62f6c9b\"" Sep 4 17:23:58.188524 sshd[5117]: pam_unix(sshd:session): session closed for user core Sep 4 17:23:58.199329 systemd[1]: sshd@7-172.31.27.203:22-139.178.68.195:50496.service: Deactivated successfully. Sep 4 17:23:58.213539 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:23:58.223118 systemd-logind[2069]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:23:58.230955 systemd-logind[2069]: Removed session 8. 
Sep 4 17:23:58.320701 containerd[2104]: time="2024-09-04T17:23:58.320653443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-plsms,Uid:dbb2c308-b34e-470f-bf61-160922ef3eb4,Namespace:calico-system,Attempt:1,} returns sandbox id \"ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7\"" Sep 4 17:23:58.415870 containerd[2104]: time="2024-09-04T17:23:58.414724574Z" level=info msg="StartContainer for \"b2aff4299c8a12a2ad8a2a4652047e056a3137b96924bddbbcaff462f62f6c9b\" returns successfully" Sep 4 17:23:58.724893 systemd[1]: run-containerd-runc-k8s.io-ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7-runc.2XkuKs.mount: Deactivated successfully. Sep 4 17:23:58.878916 systemd-networkd[1663]: cali96318f71c45: Gained IPv6LL Sep 4 17:23:59.262598 systemd-networkd[1663]: cali50529255abf: Gained IPv6LL Sep 4 17:23:59.650639 kubelet[3564]: I0904 17:23:59.648315 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2t9pf" podStartSLOduration=39.648267408 podCreationTimestamp="2024-09-04 17:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:23:58.653518969 +0000 UTC m=+50.711943816" watchObservedRunningTime="2024-09-04 17:23:59.648267408 +0000 UTC m=+51.706692254" Sep 4 17:23:59.828801 containerd[2104]: time="2024-09-04T17:23:59.828186685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:59.832100 containerd[2104]: time="2024-09-04T17:23:59.832021053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:23:59.835857 containerd[2104]: time="2024-09-04T17:23:59.835785177Z" level=info msg="ImageCreate event 
name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:59.847750 containerd[2104]: time="2024-09-04T17:23:59.847068931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:23:59.848372 containerd[2104]: time="2024-09-04T17:23:59.848330254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 4.686086737s" Sep 4 17:23:59.848495 containerd[2104]: time="2024-09-04T17:23:59.848373665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:23:59.852527 containerd[2104]: time="2024-09-04T17:23:59.852485898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:23:59.884973 containerd[2104]: time="2024-09-04T17:23:59.877051385Z" level=info msg="CreateContainer within sandbox \"1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:23:59.913424 containerd[2104]: time="2024-09-04T17:23:59.913319216Z" level=info msg="CreateContainer within sandbox \"1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"68a29ac9daee9320e29c0b6167b45149392ea8582e3440db7f12b29a8629f23c\"" Sep 4 17:23:59.916774 containerd[2104]: 
time="2024-09-04T17:23:59.916720844Z" level=info msg="StartContainer for \"68a29ac9daee9320e29c0b6167b45149392ea8582e3440db7f12b29a8629f23c\"" Sep 4 17:24:00.309729 containerd[2104]: time="2024-09-04T17:24:00.309684391Z" level=info msg="StartContainer for \"68a29ac9daee9320e29c0b6167b45149392ea8582e3440db7f12b29a8629f23c\" returns successfully" Sep 4 17:24:00.663644 kubelet[3564]: I0904 17:24:00.663263 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b8878b445-h6vkf" podStartSLOduration=28.974452297 podCreationTimestamp="2024-09-04 17:23:27 +0000 UTC" firstStartedPulling="2024-09-04 17:23:55.161719098 +0000 UTC m=+47.220143938" lastFinishedPulling="2024-09-04 17:23:59.848737059 +0000 UTC m=+51.907161894" observedRunningTime="2024-09-04 17:24:00.656047569 +0000 UTC m=+52.714472426" watchObservedRunningTime="2024-09-04 17:24:00.661470253 +0000 UTC m=+52.719895100" Sep 4 17:24:01.749457 ntpd[2058]: Listen normally on 6 vxlan.calico 192.168.92.192:123 Sep 4 17:24:01.750947 ntpd[2058]: Listen normally on 7 vxlan.calico [fe80::6437:15ff:fed6:5d7e%4]:123 Sep 4 17:24:01.752949 ntpd[2058]: Listen normally on 8 cali048a52b4339 
[fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:24:01.753145 ntpd[2058]: Listen normally on 9 cali81abff7150f [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:24:01.753496 ntpd[2058]: Listen normally on 10 cali50529255abf [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:24:01.753570 ntpd[2058]: Listen normally on 11 cali96318f71c45 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:24:02.654528 containerd[2104]: time="2024-09-04T17:24:02.654478331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:24:02.666198 containerd[2104]: time="2024-09-04T17:24:02.666125616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:24:02.720260 containerd[2104]: time="2024-09-04T17:24:02.720211193Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:24:02.739060 containerd[2104]: time="2024-09-04T17:24:02.738914756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:24:02.744317 containerd[2104]: time="2024-09-04T17:24:02.742153429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.889608462s" Sep 4 17:24:02.744317 containerd[2104]: time="2024-09-04T17:24:02.742206040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:24:02.781378 
containerd[2104]: time="2024-09-04T17:24:02.781338560Z" level=info msg="CreateContainer within sandbox \"ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:24:03.003994 containerd[2104]: time="2024-09-04T17:24:03.003836567Z" level=info msg="CreateContainer within sandbox \"ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"514ff692a3306555ef63805d5e2935495f66ee70a196d3938787c7fe0ff398e7\"" Sep 4 17:24:03.017092 containerd[2104]: time="2024-09-04T17:24:03.016890312Z" level=info msg="StartContainer for \"514ff692a3306555ef63805d5e2935495f66ee70a196d3938787c7fe0ff398e7\"" Sep 4 17:24:03.236835 systemd[1]: Started sshd@8-172.31.27.203:22-139.178.68.195:50500.service - OpenSSH per-connection server daemon (139.178.68.195:50500). Sep 4 17:24:03.390585 systemd[1]: run-containerd-runc-k8s.io-514ff692a3306555ef63805d5e2935495f66ee70a196d3938787c7fe0ff398e7-runc.B1tWoH.mount: Deactivated successfully. Sep 4 17:24:03.541994 containerd[2104]: time="2024-09-04T17:24:03.541932832Z" level=info msg="StartContainer for \"514ff692a3306555ef63805d5e2935495f66ee70a196d3938787c7fe0ff398e7\" returns successfully" Sep 4 17:24:03.546378 containerd[2104]: time="2024-09-04T17:24:03.545721362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:24:03.624038 sshd[5426]: Accepted publickey for core from 139.178.68.195 port 50500 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:24:03.634569 sshd[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:24:03.675347 systemd-logind[2069]: New session 9 of user core. Sep 4 17:24:03.680830 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:24:03.870967 systemd-resolved[1990]: Under memory pressure, flushing caches. 
Sep 4 17:24:03.873934 systemd-journald[1575]: Under memory pressure, flushing caches. Sep 4 17:24:03.871025 systemd-resolved[1990]: Flushed all caches. Sep 4 17:24:04.333949 sshd[5426]: pam_unix(sshd:session): session closed for user core Sep 4 17:24:04.351012 systemd-logind[2069]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:24:04.358321 systemd[1]: sshd@8-172.31.27.203:22-139.178.68.195:50500.service: Deactivated successfully. Sep 4 17:24:04.371470 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:24:04.375183 systemd-logind[2069]: Removed session 9. Sep 4 17:24:05.554393 containerd[2104]: time="2024-09-04T17:24:05.552708384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:24:05.555390 containerd[2104]: time="2024-09-04T17:24:05.554798305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:24:05.557878 containerd[2104]: time="2024-09-04T17:24:05.557334989Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:24:05.564413 containerd[2104]: time="2024-09-04T17:24:05.563139574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:24:05.565315 containerd[2104]: time="2024-09-04T17:24:05.565036576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.019216973s" Sep 4 17:24:05.565315 containerd[2104]: time="2024-09-04T17:24:05.565087205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:24:05.572509 containerd[2104]: time="2024-09-04T17:24:05.572387474Z" level=info msg="CreateContainer within sandbox \"ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:24:05.602380 containerd[2104]: time="2024-09-04T17:24:05.602332980Z" level=info msg="CreateContainer within sandbox \"ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e3bec9ce8cca8775d3296153c4c60331aec0f59daf054462ffe1a9f34bbd7bd3\"" Sep 4 17:24:05.604177 containerd[2104]: time="2024-09-04T17:24:05.604135248Z" level=info msg="StartContainer for \"e3bec9ce8cca8775d3296153c4c60331aec0f59daf054462ffe1a9f34bbd7bd3\"" Sep 4 17:24:05.724358 systemd[1]: run-containerd-runc-k8s.io-e3bec9ce8cca8775d3296153c4c60331aec0f59daf054462ffe1a9f34bbd7bd3-runc.A4X4q3.mount: Deactivated successfully. 
Sep 4 17:24:05.881676 containerd[2104]: time="2024-09-04T17:24:05.879689627Z" level=info msg="StartContainer for \"e3bec9ce8cca8775d3296153c4c60331aec0f59daf054462ffe1a9f34bbd7bd3\" returns successfully" Sep 4 17:24:06.841556 kubelet[3564]: I0904 17:24:06.841521 3564 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:24:06.842724 kubelet[3564]: I0904 17:24:06.841576 3564 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:24:08.269690 containerd[2104]: time="2024-09-04T17:24:08.269080010Z" level=info msg="StopPodSandbox for \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\"" Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.524 [WARNING][5534] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0", GenerateName:"calico-kube-controllers-6b8878b445-", Namespace:"calico-system", SelfLink:"", UID:"c4d606cc-0f01-4476-aeea-7e3289eab77d", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8878b445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad", Pod:"calico-kube-controllers-6b8878b445-h6vkf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81abff7150f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.528 [INFO][5534] k8s.go 608: Cleaning up netns ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.528 [INFO][5534] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" iface="eth0" netns="" Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.528 [INFO][5534] k8s.go 615: Releasing IP address(es) ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.528 [INFO][5534] utils.go 188: Calico CNI releasing IP address ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.565 [INFO][5540] ipam_plugin.go 417: Releasing address using handleID ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" HandleID="k8s-pod-network.3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.565 [INFO][5540] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.566 [INFO][5540] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.575 [WARNING][5540] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" HandleID="k8s-pod-network.3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.575 [INFO][5540] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" HandleID="k8s-pod-network.3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.577 [INFO][5540] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:24:08.581185 containerd[2104]: 2024-09-04 17:24:08.579 [INFO][5534] k8s.go 621: Teardown processing complete. ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:24:08.581185 containerd[2104]: time="2024-09-04T17:24:08.581000089Z" level=info msg="TearDown network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\" successfully" Sep 4 17:24:08.581185 containerd[2104]: time="2024-09-04T17:24:08.581023347Z" level=info msg="StopPodSandbox for \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\" returns successfully" Sep 4 17:24:08.583862 containerd[2104]: time="2024-09-04T17:24:08.583817117Z" level=info msg="RemovePodSandbox for \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\"" Sep 4 17:24:08.583862 containerd[2104]: time="2024-09-04T17:24:08.583855859Z" level=info msg="Forcibly stopping sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\"" Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.631 [WARNING][5558] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0", GenerateName:"calico-kube-controllers-6b8878b445-", Namespace:"calico-system", SelfLink:"", UID:"c4d606cc-0f01-4476-aeea-7e3289eab77d", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b8878b445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"1594dd491206af01c92ee1aca734aef86af6a6083002d4dcfb3fb402417319ad", Pod:"calico-kube-controllers-6b8878b445-h6vkf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81abff7150f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.631 [INFO][5558] k8s.go 608: Cleaning up netns ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.631 [INFO][5558] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" iface="eth0" netns="" Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.632 [INFO][5558] k8s.go 615: Releasing IP address(es) ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.632 [INFO][5558] utils.go 188: Calico CNI releasing IP address ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.705 [INFO][5564] ipam_plugin.go 417: Releasing address using handleID ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" HandleID="k8s-pod-network.3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.705 [INFO][5564] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.705 [INFO][5564] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.713 [WARNING][5564] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" HandleID="k8s-pod-network.3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.713 [INFO][5564] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" HandleID="k8s-pod-network.3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Workload="ip--172--31--27--203-k8s-calico--kube--controllers--6b8878b445--h6vkf-eth0" Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.715 [INFO][5564] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:24:08.718846 containerd[2104]: 2024-09-04 17:24:08.717 [INFO][5558] k8s.go 621: Teardown processing complete. ContainerID="3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0" Sep 4 17:24:08.719607 containerd[2104]: time="2024-09-04T17:24:08.718894493Z" level=info msg="TearDown network for sandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\" successfully" Sep 4 17:24:08.743132 containerd[2104]: time="2024-09-04T17:24:08.743071988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:24:08.743412 containerd[2104]: time="2024-09-04T17:24:08.743182416Z" level=info msg="RemovePodSandbox \"3b91a615bfddb040c259f5e409eac7d51c3687276947197fb8c04074e537fcb0\" returns successfully" Sep 4 17:24:08.743917 containerd[2104]: time="2024-09-04T17:24:08.743869158Z" level=info msg="StopPodSandbox for \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\"" Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.807 [WARNING][5582] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbb2c308-b34e-470f-bf61-160922ef3eb4", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7", Pod:"csi-node-driver-plsms", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali96318f71c45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.808 [INFO][5582] k8s.go 608: Cleaning up netns ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.808 [INFO][5582] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" iface="eth0" netns="" Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.808 [INFO][5582] k8s.go 615: Releasing IP address(es) ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.808 [INFO][5582] utils.go 188: Calico CNI releasing IP address ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.860 [INFO][5588] ipam_plugin.go 417: Releasing address using handleID ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" HandleID="k8s-pod-network.15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.861 [INFO][5588] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.861 [INFO][5588] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.868 [WARNING][5588] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" HandleID="k8s-pod-network.15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.868 [INFO][5588] ipam_plugin.go 445: Releasing address using workloadID ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" HandleID="k8s-pod-network.15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.870 [INFO][5588] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:24:08.873851 containerd[2104]: 2024-09-04 17:24:08.872 [INFO][5582] k8s.go 621: Teardown processing complete. ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:24:08.873851 containerd[2104]: time="2024-09-04T17:24:08.873830459Z" level=info msg="TearDown network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\" successfully" Sep 4 17:24:08.875313 containerd[2104]: time="2024-09-04T17:24:08.873862570Z" level=info msg="StopPodSandbox for \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\" returns successfully" Sep 4 17:24:08.875313 containerd[2104]: time="2024-09-04T17:24:08.874849862Z" level=info msg="RemovePodSandbox for \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\"" Sep 4 17:24:08.875313 containerd[2104]: time="2024-09-04T17:24:08.874910300Z" level=info msg="Forcibly stopping sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\"" Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:08.956 [WARNING][5607] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbb2c308-b34e-470f-bf61-160922ef3eb4", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"ff32b55d26968d8ebf7219a2a495a7ffc688dd22f6641e422693392efebbcda7", Pod:"csi-node-driver-plsms", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali96318f71c45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:08.957 [INFO][5607] k8s.go 608: Cleaning up netns ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:08.957 [INFO][5607] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" iface="eth0" netns="" Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:08.957 [INFO][5607] k8s.go 615: Releasing IP address(es) ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:08.957 [INFO][5607] utils.go 188: Calico CNI releasing IP address ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:08.992 [INFO][5613] ipam_plugin.go 417: Releasing address using handleID ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" HandleID="k8s-pod-network.15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:08.992 [INFO][5613] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:08.992 [INFO][5613] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:09.001 [WARNING][5613] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" HandleID="k8s-pod-network.15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:09.004 [INFO][5613] ipam_plugin.go 445: Releasing address using workloadID ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" HandleID="k8s-pod-network.15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Workload="ip--172--31--27--203-k8s-csi--node--driver--plsms-eth0" Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:09.007 [INFO][5613] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:24:09.012028 containerd[2104]: 2024-09-04 17:24:09.009 [INFO][5607] k8s.go 621: Teardown processing complete. ContainerID="15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb" Sep 4 17:24:09.012028 containerd[2104]: time="2024-09-04T17:24:09.011742490Z" level=info msg="TearDown network for sandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\" successfully" Sep 4 17:24:09.017905 containerd[2104]: time="2024-09-04T17:24:09.017867488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:24:09.018207 containerd[2104]: time="2024-09-04T17:24:09.018072650Z" level=info msg="RemovePodSandbox \"15cd46b219a7663036f55a43a5f04e89885b030f5eaa573bbfaad82cb5419cdb\" returns successfully"
Sep 4 17:24:09.019192 containerd[2104]: time="2024-09-04T17:24:09.019088328Z" level=info msg="StopPodSandbox for \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\""
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.120 [WARNING][5631] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1378bcfe-e321-41f9-bb2f-e3d1489fb204", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806", Pod:"coredns-5dd5756b68-2t9pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50529255abf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.121 [INFO][5631] k8s.go 608: Cleaning up netns ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16"
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.121 [INFO][5631] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" iface="eth0" netns=""
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.121 [INFO][5631] k8s.go 615: Releasing IP address(es) ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16"
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.121 [INFO][5631] utils.go 188: Calico CNI releasing IP address ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16"
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.183 [INFO][5637] ipam_plugin.go 417: Releasing address using handleID ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" HandleID="k8s-pod-network.0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0"
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.184 [INFO][5637] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.184 [INFO][5637] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.194 [WARNING][5637] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" HandleID="k8s-pod-network.0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0"
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.194 [INFO][5637] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" HandleID="k8s-pod-network.0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0"
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.199 [INFO][5637] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:24:09.205996 containerd[2104]: 2024-09-04 17:24:09.203 [INFO][5631] k8s.go 621: Teardown processing complete. ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16"
Sep 4 17:24:09.205996 containerd[2104]: time="2024-09-04T17:24:09.204865070Z" level=info msg="TearDown network for sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\" successfully"
Sep 4 17:24:09.205996 containerd[2104]: time="2024-09-04T17:24:09.204896897Z" level=info msg="StopPodSandbox for \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\" returns successfully"
Sep 4 17:24:09.205996 containerd[2104]: time="2024-09-04T17:24:09.205359860Z" level=info msg="RemovePodSandbox for \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\""
Sep 4 17:24:09.205996 containerd[2104]: time="2024-09-04T17:24:09.205392893Z" level=info msg="Forcibly stopping sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\""
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.272 [WARNING][5655] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1378bcfe-e321-41f9-bb2f-e3d1489fb204", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"14936dc8d57f69516969ca8babadf9e5d163a706cd15773d3b4676766d7f2806", Pod:"coredns-5dd5756b68-2t9pf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50529255abf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.272 [INFO][5655] k8s.go 608: Cleaning up netns ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16"
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.273 [INFO][5655] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" iface="eth0" netns=""
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.273 [INFO][5655] k8s.go 615: Releasing IP address(es) ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16"
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.273 [INFO][5655] utils.go 188: Calico CNI releasing IP address ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16"
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.313 [INFO][5662] ipam_plugin.go 417: Releasing address using handleID ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" HandleID="k8s-pod-network.0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0"
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.313 [INFO][5662] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.313 [INFO][5662] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.320 [WARNING][5662] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" HandleID="k8s-pod-network.0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0"
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.320 [INFO][5662] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" HandleID="k8s-pod-network.0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--2t9pf-eth0"
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.322 [INFO][5662] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:24:09.336632 containerd[2104]: 2024-09-04 17:24:09.324 [INFO][5655] k8s.go 621: Teardown processing complete. ContainerID="0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16"
Sep 4 17:24:09.338510 containerd[2104]: time="2024-09-04T17:24:09.336684060Z" level=info msg="TearDown network for sandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\" successfully"
Sep 4 17:24:09.382660 systemd[1]: Started sshd@9-172.31.27.203:22-139.178.68.195:58356.service - OpenSSH per-connection server daemon (139.178.68.195:58356).
Sep 4 17:24:09.390851 containerd[2104]: time="2024-09-04T17:24:09.388528110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:24:09.390851 containerd[2104]: time="2024-09-04T17:24:09.388626699Z" level=info msg="RemovePodSandbox \"0ab56b6dd969e16127ba4efda6232a48ef7c03e54bd42a68834bd9ac96c11d16\" returns successfully"
Sep 4 17:24:09.393002 containerd[2104]: time="2024-09-04T17:24:09.391449222Z" level=info msg="StopPodSandbox for \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\""
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.471 [WARNING][5681] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b219a213-82f1-4b33-8672-d779b1685a8a", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1", Pod:"coredns-5dd5756b68-zl2bl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali048a52b4339", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.471 [INFO][5681] k8s.go 608: Cleaning up netns ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120"
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.471 [INFO][5681] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" iface="eth0" netns=""
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.471 [INFO][5681] k8s.go 615: Releasing IP address(es) ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120"
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.471 [INFO][5681] utils.go 188: Calico CNI releasing IP address ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120"
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.506 [INFO][5689] ipam_plugin.go 417: Releasing address using handleID ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" HandleID="k8s-pod-network.dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0"
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.506 [INFO][5689] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.506 [INFO][5689] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.516 [WARNING][5689] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" HandleID="k8s-pod-network.dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0"
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.516 [INFO][5689] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" HandleID="k8s-pod-network.dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0"
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.519 [INFO][5689] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:24:09.523317 containerd[2104]: 2024-09-04 17:24:09.521 [INFO][5681] k8s.go 621: Teardown processing complete. ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120"
Sep 4 17:24:09.524354 containerd[2104]: time="2024-09-04T17:24:09.523365510Z" level=info msg="TearDown network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\" successfully"
Sep 4 17:24:09.524354 containerd[2104]: time="2024-09-04T17:24:09.523409539Z" level=info msg="StopPodSandbox for \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\" returns successfully"
Sep 4 17:24:09.525729 containerd[2104]: time="2024-09-04T17:24:09.525051567Z" level=info msg="RemovePodSandbox for \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\""
Sep 4 17:24:09.525729 containerd[2104]: time="2024-09-04T17:24:09.525107179Z" level=info msg="Forcibly stopping sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\""
Sep 4 17:24:09.646081 sshd[5668]: Accepted publickey for core from 139.178.68.195 port 58356 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:09.674990 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.586 [WARNING][5707] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b219a213-82f1-4b33-8672-d779b1685a8a", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"4adc3a8f3f8b479b01a7871a2f48ca6b02d848eb6af4e63e449530d832cd45b1", Pod:"coredns-5dd5756b68-zl2bl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali048a52b4339", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.586 [INFO][5707] k8s.go 608: Cleaning up netns ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120"
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.586 [INFO][5707] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" iface="eth0" netns=""
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.587 [INFO][5707] k8s.go 615: Releasing IP address(es) ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120"
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.587 [INFO][5707] utils.go 188: Calico CNI releasing IP address ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120"
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.621 [INFO][5713] ipam_plugin.go 417: Releasing address using handleID ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" HandleID="k8s-pod-network.dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0"
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.621 [INFO][5713] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.621 [INFO][5713] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.654 [WARNING][5713] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" HandleID="k8s-pod-network.dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0"
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.654 [INFO][5713] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" HandleID="k8s-pod-network.dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120" Workload="ip--172--31--27--203-k8s-coredns--5dd5756b68--zl2bl-eth0"
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.662 [INFO][5713] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:24:09.682702 containerd[2104]: 2024-09-04 17:24:09.677 [INFO][5707] k8s.go 621: Teardown processing complete. ContainerID="dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120"
Sep 4 17:24:09.684738 containerd[2104]: time="2024-09-04T17:24:09.682826836Z" level=info msg="TearDown network for sandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\" successfully"
Sep 4 17:24:09.687250 systemd-logind[2069]: New session 10 of user core.
Sep 4 17:24:09.693186 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 17:24:09.693319 containerd[2104]: time="2024-09-04T17:24:09.689281683Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:24:09.693319 containerd[2104]: time="2024-09-04T17:24:09.689361693Z" level=info msg="RemovePodSandbox \"dfb02d2c9ec1a1e1380353ed96b928946ae2af1f3620a5389211532142fb8120\" returns successfully"
Sep 4 17:24:09.827080 systemd-journald[1575]: Under memory pressure, flushing caches.
Sep 4 17:24:09.822097 systemd-resolved[1990]: Under memory pressure, flushing caches.
Sep 4 17:24:09.822147 systemd-resolved[1990]: Flushed all caches.
Sep 4 17:24:10.275467 sshd[5668]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:10.282109 systemd[1]: sshd@9-172.31.27.203:22-139.178.68.195:58356.service: Deactivated successfully.
Sep 4 17:24:10.291947 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 17:24:10.295312 systemd-logind[2069]: Session 10 logged out. Waiting for processes to exit.
Sep 4 17:24:10.297087 systemd-logind[2069]: Removed session 10.
Sep 4 17:24:13.020455 kubelet[3564]: I0904 17:24:13.020415 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-plsms" podStartSLOduration=38.784246639 podCreationTimestamp="2024-09-04 17:23:27 +0000 UTC" firstStartedPulling="2024-09-04 17:23:58.330125013 +0000 UTC m=+50.388549848" lastFinishedPulling="2024-09-04 17:24:05.566119337 +0000 UTC m=+57.624544175" observedRunningTime="2024-09-04 17:24:06.865368744 +0000 UTC m=+58.923793590" watchObservedRunningTime="2024-09-04 17:24:13.020240966 +0000 UTC m=+65.078665813"
Sep 4 17:24:15.307152 systemd[1]: Started sshd@10-172.31.27.203:22-139.178.68.195:58360.service - OpenSSH per-connection server daemon (139.178.68.195:58360).
Sep 4 17:24:15.499845 sshd[5783]: Accepted publickey for core from 139.178.68.195 port 58360 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:15.501210 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:15.519928 systemd-logind[2069]: New session 11 of user core.
Sep 4 17:24:15.531262 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 17:24:15.885007 sshd[5783]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:15.893404 systemd[1]: sshd@10-172.31.27.203:22-139.178.68.195:58360.service: Deactivated successfully.
Sep 4 17:24:15.916300 systemd-logind[2069]: Session 11 logged out. Waiting for processes to exit.
Sep 4 17:24:15.917397 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 17:24:15.937734 systemd[1]: Started sshd@11-172.31.27.203:22-139.178.68.195:58374.service - OpenSSH per-connection server daemon (139.178.68.195:58374).
Sep 4 17:24:15.941129 systemd-logind[2069]: Removed session 11.
Sep 4 17:24:16.098976 systemd[1]: run-containerd-runc-k8s.io-68a29ac9daee9320e29c0b6167b45149392ea8582e3440db7f12b29a8629f23c-runc.uOiHaz.mount: Deactivated successfully.
Sep 4 17:24:16.163190 sshd[5798]: Accepted publickey for core from 139.178.68.195 port 58374 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:16.167724 sshd[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:16.181989 systemd-logind[2069]: New session 12 of user core.
Sep 4 17:24:16.197155 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 17:24:16.808626 sshd[5798]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:16.838185 systemd[1]: sshd@11-172.31.27.203:22-139.178.68.195:58374.service: Deactivated successfully.
Sep 4 17:24:16.851908 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 17:24:16.855277 systemd-logind[2069]: Session 12 logged out. Waiting for processes to exit.
Sep 4 17:24:16.864370 systemd[1]: Started sshd@12-172.31.27.203:22-139.178.68.195:52318.service - OpenSSH per-connection server daemon (139.178.68.195:52318).
Sep 4 17:24:16.867024 systemd-logind[2069]: Removed session 12.
Sep 4 17:24:17.053334 sshd[5830]: Accepted publickey for core from 139.178.68.195 port 52318 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:17.054152 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:17.066844 systemd-logind[2069]: New session 13 of user core.
Sep 4 17:24:17.073153 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 17:24:17.467099 sshd[5830]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:17.472395 systemd-logind[2069]: Session 13 logged out. Waiting for processes to exit.
Sep 4 17:24:17.474241 systemd[1]: sshd@12-172.31.27.203:22-139.178.68.195:52318.service: Deactivated successfully.
Sep 4 17:24:17.480701 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 17:24:17.482855 systemd-logind[2069]: Removed session 13.
Sep 4 17:24:22.497165 systemd[1]: Started sshd@13-172.31.27.203:22-139.178.68.195:52330.service - OpenSSH per-connection server daemon (139.178.68.195:52330).
Sep 4 17:24:22.675741 sshd[5852]: Accepted publickey for core from 139.178.68.195 port 52330 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:22.678375 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:22.699838 systemd-logind[2069]: New session 14 of user core.
Sep 4 17:24:22.711587 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 17:24:22.978173 sshd[5852]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:22.985476 systemd-logind[2069]: Session 14 logged out. Waiting for processes to exit.
Sep 4 17:24:22.986384 systemd[1]: sshd@13-172.31.27.203:22-139.178.68.195:52330.service: Deactivated successfully.
Sep 4 17:24:22.992228 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 17:24:22.993663 systemd-logind[2069]: Removed session 14.
Sep 4 17:24:28.028537 systemd[1]: Started sshd@14-172.31.27.203:22-139.178.68.195:52062.service - OpenSSH per-connection server daemon (139.178.68.195:52062).
Sep 4 17:24:28.237204 sshd[5872]: Accepted publickey for core from 139.178.68.195 port 52062 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:28.241092 sshd[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:28.277200 systemd-logind[2069]: New session 15 of user core.
Sep 4 17:24:28.287167 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 17:24:28.648185 sshd[5872]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:28.675342 systemd[1]: sshd@14-172.31.27.203:22-139.178.68.195:52062.service: Deactivated successfully.
Sep 4 17:24:28.693990 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 17:24:28.700244 systemd-logind[2069]: Session 15 logged out. Waiting for processes to exit.
Sep 4 17:24:28.706312 systemd-logind[2069]: Removed session 15.
Sep 4 17:24:33.681023 systemd[1]: Started sshd@15-172.31.27.203:22-139.178.68.195:52064.service - OpenSSH per-connection server daemon (139.178.68.195:52064).
Sep 4 17:24:33.897396 sshd[5893]: Accepted publickey for core from 139.178.68.195 port 52064 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:33.902016 sshd[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:33.909193 systemd-logind[2069]: New session 16 of user core.
Sep 4 17:24:33.916180 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 17:24:34.642476 sshd[5893]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:34.659159 systemd[1]: sshd@15-172.31.27.203:22-139.178.68.195:52064.service: Deactivated successfully.
Sep 4 17:24:34.674225 systemd-logind[2069]: Session 16 logged out. Waiting for processes to exit.
Sep 4 17:24:34.674610 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 17:24:34.680120 systemd-logind[2069]: Removed session 16.
Sep 4 17:24:35.871064 systemd-resolved[1990]: Under memory pressure, flushing caches.
Sep 4 17:24:35.871101 systemd-resolved[1990]: Flushed all caches.
Sep 4 17:24:35.873079 systemd-journald[1575]: Under memory pressure, flushing caches.
Sep 4 17:24:37.918072 systemd-resolved[1990]: Under memory pressure, flushing caches.
Sep 4 17:24:37.918082 systemd-resolved[1990]: Flushed all caches.
Sep 4 17:24:37.919795 systemd-journald[1575]: Under memory pressure, flushing caches.
Sep 4 17:24:39.650181 systemd[1]: Started sshd@16-172.31.27.203:22-139.178.68.195:43306.service - OpenSSH per-connection server daemon (139.178.68.195:43306).
Sep 4 17:24:39.827436 sshd[5916]: Accepted publickey for core from 139.178.68.195 port 43306 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:39.829144 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:39.834033 systemd-logind[2069]: New session 17 of user core.
Sep 4 17:24:39.839086 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 17:24:40.051384 sshd[5916]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:40.056223 systemd-logind[2069]: Session 17 logged out. Waiting for processes to exit.
Sep 4 17:24:40.059014 systemd[1]: sshd@16-172.31.27.203:22-139.178.68.195:43306.service: Deactivated successfully.
Sep 4 17:24:40.061851 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 17:24:40.065501 systemd-logind[2069]: Removed session 17.
Sep 4 17:24:40.108657 systemd[1]: Started sshd@17-172.31.27.203:22-139.178.68.195:43314.service - OpenSSH per-connection server daemon (139.178.68.195:43314).
Sep 4 17:24:40.295795 sshd[5930]: Accepted publickey for core from 139.178.68.195 port 43314 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:40.296585 sshd[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:40.310142 systemd-logind[2069]: New session 18 of user core.
Sep 4 17:24:40.318376 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 17:24:41.236986 sshd[5930]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:41.248331 systemd[1]: sshd@17-172.31.27.203:22-139.178.68.195:43314.service: Deactivated successfully.
Sep 4 17:24:41.259631 systemd-logind[2069]: Session 18 logged out. Waiting for processes to exit.
Sep 4 17:24:41.272198 systemd[1]: Started sshd@18-172.31.27.203:22-139.178.68.195:43328.service - OpenSSH per-connection server daemon (139.178.68.195:43328).
Sep 4 17:24:41.272649 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 17:24:41.276047 systemd-logind[2069]: Removed session 18.
Sep 4 17:24:41.457849 sshd[5960]: Accepted publickey for core from 139.178.68.195 port 43328 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:41.459989 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:41.466714 systemd-logind[2069]: New session 19 of user core.
Sep 4 17:24:41.473496 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 17:24:42.918833 sshd[5960]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:42.948482 systemd[1]: Started sshd@19-172.31.27.203:22-139.178.68.195:43340.service - OpenSSH per-connection server daemon (139.178.68.195:43340).
Sep 4 17:24:42.949150 systemd[1]: sshd@18-172.31.27.203:22-139.178.68.195:43328.service: Deactivated successfully.
Sep 4 17:24:42.984485 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 17:24:42.994231 systemd-logind[2069]: Session 19 logged out. Waiting for processes to exit.
Sep 4 17:24:43.005512 systemd-logind[2069]: Removed session 19.
Sep 4 17:24:43.206560 sshd[5991]: Accepted publickey for core from 139.178.68.195 port 43340 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:43.213001 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:43.242894 systemd-logind[2069]: New session 20 of user core.
Sep 4 17:24:43.246888 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:24:43.935390 systemd-resolved[1990]: Under memory pressure, flushing caches.
Sep 4 17:24:43.935419 systemd-resolved[1990]: Flushed all caches.
Sep 4 17:24:43.936796 systemd-journald[1575]: Under memory pressure, flushing caches.
Sep 4 17:24:44.541279 sshd[5991]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:44.549615 systemd[1]: sshd@19-172.31.27.203:22-139.178.68.195:43340.service: Deactivated successfully.
Sep 4 17:24:44.564889 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:24:44.568443 systemd-logind[2069]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:24:44.582288 systemd[1]: Started sshd@20-172.31.27.203:22-139.178.68.195:43342.service - OpenSSH per-connection server daemon (139.178.68.195:43342).
Sep 4 17:24:44.588326 systemd-logind[2069]: Removed session 20.
Sep 4 17:24:44.785847 sshd[6017]: Accepted publickey for core from 139.178.68.195 port 43342 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:44.792025 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:44.836837 systemd-logind[2069]: New session 21 of user core.
Sep 4 17:24:44.851113 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:24:45.132849 sshd[6017]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:45.138854 systemd-logind[2069]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:24:45.140990 systemd[1]: sshd@20-172.31.27.203:22-139.178.68.195:43342.service: Deactivated successfully.
Sep 4 17:24:45.151028 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:24:45.152885 systemd-logind[2069]: Removed session 21.
Sep 4 17:24:46.860061 kubelet[3564]: I0904 17:24:46.855877 3564 topology_manager.go:215] "Topology Admit Handler" podUID="98723a5d-36fc-4bd8-95fc-96c4c2f909a3" podNamespace="calico-apiserver" podName="calico-apiserver-8589946494-lkb2d"
Sep 4 17:24:47.008399 kubelet[3564]: I0904 17:24:47.007776 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df6bs\" (UniqueName: \"kubernetes.io/projected/98723a5d-36fc-4bd8-95fc-96c4c2f909a3-kube-api-access-df6bs\") pod \"calico-apiserver-8589946494-lkb2d\" (UID: \"98723a5d-36fc-4bd8-95fc-96c4c2f909a3\") " pod="calico-apiserver/calico-apiserver-8589946494-lkb2d"
Sep 4 17:24:47.008399 kubelet[3564]: I0904 17:24:47.007889 3564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98723a5d-36fc-4bd8-95fc-96c4c2f909a3-calico-apiserver-certs\") pod \"calico-apiserver-8589946494-lkb2d\" (UID: \"98723a5d-36fc-4bd8-95fc-96c4c2f909a3\") " pod="calico-apiserver/calico-apiserver-8589946494-lkb2d"
Sep 4 17:24:47.297965 containerd[2104]: time="2024-09-04T17:24:47.297906073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8589946494-lkb2d,Uid:98723a5d-36fc-4bd8-95fc-96c4c2f909a3,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 17:24:47.641079 systemd-networkd[1663]: califcfe9160af7: Link UP
Sep 4 17:24:47.641992 systemd-networkd[1663]: califcfe9160af7: Gained carrier
Sep 4 17:24:47.655992 (udev-worker)[6060]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.489 [INFO][6043] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0 calico-apiserver-8589946494- calico-apiserver 98723a5d-36fc-4bd8-95fc-96c4c2f909a3 1065 0 2024-09-04 17:24:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8589946494 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-203 calico-apiserver-8589946494-lkb2d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califcfe9160af7 [] []}} ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Namespace="calico-apiserver" Pod="calico-apiserver-8589946494-lkb2d" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.490 [INFO][6043] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Namespace="calico-apiserver" Pod="calico-apiserver-8589946494-lkb2d" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.561 [INFO][6053] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" HandleID="k8s-pod-network.59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Workload="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.572 [INFO][6053] ipam_plugin.go 270: Auto assigning IP ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" HandleID="k8s-pod-network.59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Workload="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-203", "pod":"calico-apiserver-8589946494-lkb2d", "timestamp":"2024-09-04 17:24:47.561962869 +0000 UTC"}, Hostname:"ip-172-31-27-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.572 [INFO][6053] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.573 [INFO][6053] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.573 [INFO][6053] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-203'
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.575 [INFO][6053] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" host="ip-172-31-27-203"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.589 [INFO][6053] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-203"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.597 [INFO][6053] ipam.go 489: Trying affinity for 192.168.92.192/26 host="ip-172-31-27-203"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.600 [INFO][6053] ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ip-172-31-27-203"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.603 [INFO][6053] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ip-172-31-27-203"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.603 [INFO][6053] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" host="ip-172-31-27-203"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.605 [INFO][6053] ipam.go 1685: Creating new handle: k8s-pod-network.59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.612 [INFO][6053] ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" host="ip-172-31-27-203"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.623 [INFO][6053] ipam.go 1216: Successfully claimed IPs: [192.168.92.197/26] block=192.168.92.192/26 handle="k8s-pod-network.59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" host="ip-172-31-27-203"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.623 [INFO][6053] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.197/26] handle="k8s-pod-network.59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" host="ip-172-31-27-203"
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.623 [INFO][6053] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:24:47.684048 containerd[2104]: 2024-09-04 17:24:47.623 [INFO][6053] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.92.197/26] IPv6=[] ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" HandleID="k8s-pod-network.59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Workload="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0"
Sep 4 17:24:47.687438 containerd[2104]: 2024-09-04 17:24:47.629 [INFO][6043] k8s.go 386: Populated endpoint ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Namespace="calico-apiserver" Pod="calico-apiserver-8589946494-lkb2d" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0", GenerateName:"calico-apiserver-8589946494-", Namespace:"calico-apiserver", SelfLink:"", UID:"98723a5d-36fc-4bd8-95fc-96c4c2f909a3", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 24, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8589946494", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"", Pod:"calico-apiserver-8589946494-lkb2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcfe9160af7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:24:47.687438 containerd[2104]: 2024-09-04 17:24:47.629 [INFO][6043] k8s.go 387: Calico CNI using IPs: [192.168.92.197/32] ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Namespace="calico-apiserver" Pod="calico-apiserver-8589946494-lkb2d" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0"
Sep 4 17:24:47.687438 containerd[2104]: 2024-09-04 17:24:47.629 [INFO][6043] dataplane_linux.go 68: Setting the host side veth name to califcfe9160af7 ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Namespace="calico-apiserver" Pod="calico-apiserver-8589946494-lkb2d" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0"
Sep 4 17:24:47.687438 containerd[2104]: 2024-09-04 17:24:47.635 [INFO][6043] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Namespace="calico-apiserver" Pod="calico-apiserver-8589946494-lkb2d" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0"
Sep 4 17:24:47.687438 containerd[2104]: 2024-09-04 17:24:47.640 [INFO][6043] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Namespace="calico-apiserver" Pod="calico-apiserver-8589946494-lkb2d" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0", GenerateName:"calico-apiserver-8589946494-", Namespace:"calico-apiserver", SelfLink:"", UID:"98723a5d-36fc-4bd8-95fc-96c4c2f909a3", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 24, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8589946494", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-203", ContainerID:"59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b", Pod:"calico-apiserver-8589946494-lkb2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcfe9160af7", MAC:"be:25:00:4f:7e:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:24:47.687438 containerd[2104]: 2024-09-04 17:24:47.669 [INFO][6043] k8s.go 500: Wrote updated endpoint to datastore ContainerID="59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b" Namespace="calico-apiserver" Pod="calico-apiserver-8589946494-lkb2d" WorkloadEndpoint="ip--172--31--27--203-k8s-calico--apiserver--8589946494--lkb2d-eth0"
Sep 4 17:24:47.811893 containerd[2104]: time="2024-09-04T17:24:47.811635882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:24:47.811893 containerd[2104]: time="2024-09-04T17:24:47.811703403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:24:47.811893 containerd[2104]: time="2024-09-04T17:24:47.811725104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:24:47.811893 containerd[2104]: time="2024-09-04T17:24:47.811740666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:24:47.988896 containerd[2104]: time="2024-09-04T17:24:47.986220016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8589946494-lkb2d,Uid:98723a5d-36fc-4bd8-95fc-96c4c2f909a3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b\""
Sep 4 17:24:48.005219 containerd[2104]: time="2024-09-04T17:24:48.004749648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Sep 4 17:24:49.118371 systemd-networkd[1663]: califcfe9160af7: Gained IPv6LL
Sep 4 17:24:49.886846 systemd-journald[1575]: Under memory pressure, flushing caches.
Sep 4 17:24:49.886638 systemd-resolved[1990]: Under memory pressure, flushing caches.
Sep 4 17:24:49.886684 systemd-resolved[1990]: Flushed all caches.
Sep 4 17:24:50.163240 systemd[1]: Started sshd@21-172.31.27.203:22-139.178.68.195:42600.service - OpenSSH per-connection server daemon (139.178.68.195:42600).
Sep 4 17:24:50.380053 sshd[6120]: Accepted publickey for core from 139.178.68.195 port 42600 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:50.384284 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:50.409877 systemd-logind[2069]: New session 22 of user core.
Sep 4 17:24:50.415439 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 17:24:50.901543 sshd[6120]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:50.909472 systemd[1]: sshd@21-172.31.27.203:22-139.178.68.195:42600.service: Deactivated successfully.
Sep 4 17:24:50.923050 systemd-logind[2069]: Session 22 logged out. Waiting for processes to exit.
Sep 4 17:24:50.923514 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 17:24:50.932182 systemd-logind[2069]: Removed session 22.
Sep 4 17:24:51.749174 ntpd[2058]: Listen normally on 12 califcfe9160af7 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:24:51.751494 ntpd[2058]: 4 Sep 17:24:51 ntpd[2058]: Listen normally on 12 califcfe9160af7 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:24:51.849833 containerd[2104]: time="2024-09-04T17:24:51.849720722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Sep 4 17:24:51.885442 containerd[2104]: time="2024-09-04T17:24:51.885391313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.880547328s"
Sep 4 17:24:51.886046 containerd[2104]: time="2024-09-04T17:24:51.885748500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Sep 4 17:24:51.920607 containerd[2104]: time="2024-09-04T17:24:51.920011017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:24:51.922796 containerd[2104]: time="2024-09-04T17:24:51.921535343Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:24:51.922796 containerd[2104]: time="2024-09-04T17:24:51.922690923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:24:51.934172 systemd-resolved[1990]: Under memory pressure, flushing caches.
Sep 4 17:24:51.934201 systemd-resolved[1990]: Flushed all caches.
Sep 4 17:24:51.936791 systemd-journald[1575]: Under memory pressure, flushing caches.
Sep 4 17:24:52.058442 containerd[2104]: time="2024-09-04T17:24:52.058396183Z" level=info msg="CreateContainer within sandbox \"59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 17:24:52.178199 containerd[2104]: time="2024-09-04T17:24:52.178143926Z" level=info msg="CreateContainer within sandbox \"59d251d8ecd843fd07b56aab7d8761c66f48a9a9838c2b9a97e589bbaefea73b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"175ef6359b8b423d9a4542a32ce0d6d503220670e680bfd9fde5997baec12e41\""
Sep 4 17:24:52.198884 containerd[2104]: time="2024-09-04T17:24:52.198747491Z" level=info msg="StartContainer for \"175ef6359b8b423d9a4542a32ce0d6d503220670e680bfd9fde5997baec12e41\""
Sep 4 17:24:52.553460 containerd[2104]: time="2024-09-04T17:24:52.553401930Z" level=info msg="StartContainer for \"175ef6359b8b423d9a4542a32ce0d6d503220670e680bfd9fde5997baec12e41\" returns successfully"
Sep 4 17:24:53.320154 kubelet[3564]: I0904 17:24:53.319793 3564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8589946494-lkb2d" podStartSLOduration=3.346847415 podCreationTimestamp="2024-09-04 17:24:46 +0000 UTC" firstStartedPulling="2024-09-04 17:24:47.993379617 +0000 UTC m=+100.051804447" lastFinishedPulling="2024-09-04 17:24:51.917904268 +0000 UTC m=+103.976329106" observedRunningTime="2024-09-04 17:24:53.270502913 +0000 UTC m=+105.328927759" watchObservedRunningTime="2024-09-04 17:24:53.271372074 +0000 UTC m=+105.329796919"
Sep 4 17:24:53.982051 systemd-resolved[1990]: Under memory pressure, flushing caches.
Sep 4 17:24:53.982059 systemd-resolved[1990]: Flushed all caches.
Sep 4 17:24:53.983787 systemd-journald[1575]: Under memory pressure, flushing caches.
Sep 4 17:24:55.933135 systemd[1]: Started sshd@22-172.31.27.203:22-139.178.68.195:42614.service - OpenSSH per-connection server daemon (139.178.68.195:42614).
Sep 4 17:24:56.180380 sshd[6191]: Accepted publickey for core from 139.178.68.195 port 42614 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:24:56.184122 sshd[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:24:56.193804 systemd-logind[2069]: New session 23 of user core.
Sep 4 17:24:56.198525 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:24:56.710939 sshd[6191]: pam_unix(sshd:session): session closed for user core
Sep 4 17:24:56.715180 systemd[1]: sshd@22-172.31.27.203:22-139.178.68.195:42614.service: Deactivated successfully.
Sep 4 17:24:56.720481 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:24:56.720642 systemd-logind[2069]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:24:56.723165 systemd-logind[2069]: Removed session 23.
Sep 4 17:24:59.872881 systemd-resolved[1990]: Under memory pressure, flushing caches.
Sep 4 17:24:59.872892 systemd-resolved[1990]: Flushed all caches.
Sep 4 17:24:59.875545 systemd-journald[1575]: Under memory pressure, flushing caches.
Sep 4 17:25:01.848044 systemd[1]: Started sshd@23-172.31.27.203:22-139.178.68.195:47644.service - OpenSSH per-connection server daemon (139.178.68.195:47644).
Sep 4 17:25:02.188398 sshd[6210]: Accepted publickey for core from 139.178.68.195 port 47644 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:25:02.192213 sshd[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:25:02.214023 systemd-logind[2069]: New session 24 of user core.
Sep 4 17:25:02.220318 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:25:02.659141 sshd[6210]: pam_unix(sshd:session): session closed for user core
Sep 4 17:25:02.675843 systemd[1]: sshd@23-172.31.27.203:22-139.178.68.195:47644.service: Deactivated successfully.
Sep 4 17:25:02.690931 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:25:02.693637 systemd-logind[2069]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:25:02.696508 systemd-logind[2069]: Removed session 24.
Sep 4 17:25:07.696783 systemd[1]: Started sshd@24-172.31.27.203:22-139.178.68.195:58048.service - OpenSSH per-connection server daemon (139.178.68.195:58048).
Sep 4 17:25:07.918520 sshd[6230]: Accepted publickey for core from 139.178.68.195 port 58048 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:25:07.918962 sshd[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:25:07.925408 systemd-logind[2069]: New session 25 of user core.
Sep 4 17:25:07.931290 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:25:08.276621 sshd[6230]: pam_unix(sshd:session): session closed for user core
Sep 4 17:25:08.285814 systemd-logind[2069]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:25:08.286642 systemd[1]: sshd@24-172.31.27.203:22-139.178.68.195:58048.service: Deactivated successfully.
Sep 4 17:25:08.292879 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:25:08.294580 systemd-logind[2069]: Removed session 25.
Sep 4 17:25:13.305548 systemd[1]: Started sshd@25-172.31.27.203:22-139.178.68.195:58054.service - OpenSSH per-connection server daemon (139.178.68.195:58054).
Sep 4 17:25:13.564153 sshd[6291]: Accepted publickey for core from 139.178.68.195 port 58054 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:25:13.573142 sshd[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:25:13.585227 systemd-logind[2069]: New session 26 of user core.
Sep 4 17:25:13.591532 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:25:14.005829 sshd[6291]: pam_unix(sshd:session): session closed for user core
Sep 4 17:25:14.016451 systemd[1]: sshd@25-172.31.27.203:22-139.178.68.195:58054.service: Deactivated successfully.
Sep 4 17:25:14.022673 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:25:14.023983 systemd-logind[2069]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:25:14.025405 systemd-logind[2069]: Removed session 26.
Sep 4 17:25:15.870199 systemd-resolved[1990]: Under memory pressure, flushing caches.
Sep 4 17:25:15.870243 systemd-resolved[1990]: Flushed all caches.
Sep 4 17:25:15.871793 systemd-journald[1575]: Under memory pressure, flushing caches.
Sep 4 17:25:16.116055 systemd[1]: run-containerd-runc-k8s.io-68a29ac9daee9320e29c0b6167b45149392ea8582e3440db7f12b29a8629f23c-runc.GJpJZ2.mount: Deactivated successfully.
Sep 4 17:25:19.034424 systemd[1]: Started sshd@26-172.31.27.203:22-139.178.68.195:55274.service - OpenSSH per-connection server daemon (139.178.68.195:55274).
Sep 4 17:25:19.204846 sshd[6336]: Accepted publickey for core from 139.178.68.195 port 55274 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:25:19.206987 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:25:19.222723 systemd-logind[2069]: New session 27 of user core.
Sep 4 17:25:19.234808 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 17:25:19.648293 sshd[6336]: pam_unix(sshd:session): session closed for user core
Sep 4 17:25:19.662971 systemd[1]: sshd@26-172.31.27.203:22-139.178.68.195:55274.service: Deactivated successfully.
Sep 4 17:25:19.673818 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 17:25:19.678587 systemd-logind[2069]: Session 27 logged out. Waiting for processes to exit.
Sep 4 17:25:19.681128 systemd-logind[2069]: Removed session 27.
Sep 4 17:25:33.664171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52983a7d0ed6e92230e4c834308aa807eb0c32a621394e17cf7ce798ad4a3b04-rootfs.mount: Deactivated successfully.
Sep 4 17:25:33.686486 containerd[2104]: time="2024-09-04T17:25:33.657574198Z" level=info msg="shim disconnected" id=52983a7d0ed6e92230e4c834308aa807eb0c32a621394e17cf7ce798ad4a3b04 namespace=k8s.io
Sep 4 17:25:33.687130 containerd[2104]: time="2024-09-04T17:25:33.686488154Z" level=warning msg="cleaning up after shim disconnected" id=52983a7d0ed6e92230e4c834308aa807eb0c32a621394e17cf7ce798ad4a3b04 namespace=k8s.io
Sep 4 17:25:33.687130 containerd[2104]: time="2024-09-04T17:25:33.686510150Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:25:34.437003 kubelet[3564]: I0904 17:25:34.436956 3564 scope.go:117] "RemoveContainer" containerID="52983a7d0ed6e92230e4c834308aa807eb0c32a621394e17cf7ce798ad4a3b04"
Sep 4 17:25:34.486387 containerd[2104]: time="2024-09-04T17:25:34.486294081Z" level=info msg="CreateContainer within sandbox \"6d4a725ab9e55b2e06865b95f03647145d0fd826a028b09356f44c76cd1703ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:25:34.538429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380364726.mount: Deactivated successfully.
Sep 4 17:25:34.545840 containerd[2104]: time="2024-09-04T17:25:34.545795227Z" level=info msg="CreateContainer within sandbox \"6d4a725ab9e55b2e06865b95f03647145d0fd826a028b09356f44c76cd1703ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cb43d1620330b1c27db946aef9798a0732576b29d0675253c1ce94c388220566\""
Sep 4 17:25:34.546112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430110295.mount: Deactivated successfully.
Sep 4 17:25:34.549485 containerd[2104]: time="2024-09-04T17:25:34.549442775Z" level=info msg="StartContainer for \"cb43d1620330b1c27db946aef9798a0732576b29d0675253c1ce94c388220566\""
Sep 4 17:25:34.717019 containerd[2104]: time="2024-09-04T17:25:34.716753194Z" level=info msg="StartContainer for \"cb43d1620330b1c27db946aef9798a0732576b29d0675253c1ce94c388220566\" returns successfully"
Sep 4 17:25:35.023486 containerd[2104]: time="2024-09-04T17:25:35.023215352Z" level=info msg="shim disconnected" id=ab614db05721fcda4688cafc636f3f85b21e3ff6f3f1228d7289f55200970f40 namespace=k8s.io
Sep 4 17:25:35.023486 containerd[2104]: time="2024-09-04T17:25:35.023282208Z" level=warning msg="cleaning up after shim disconnected" id=ab614db05721fcda4688cafc636f3f85b21e3ff6f3f1228d7289f55200970f40 namespace=k8s.io
Sep 4 17:25:35.023486 containerd[2104]: time="2024-09-04T17:25:35.023295928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:25:35.035253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab614db05721fcda4688cafc636f3f85b21e3ff6f3f1228d7289f55200970f40-rootfs.mount: Deactivated successfully.
Sep 4 17:25:35.419071 kubelet[3564]: I0904 17:25:35.418963 3564 scope.go:117] "RemoveContainer" containerID="ab614db05721fcda4688cafc636f3f85b21e3ff6f3f1228d7289f55200970f40"
Sep 4 17:25:35.429527 containerd[2104]: time="2024-09-04T17:25:35.429362596Z" level=info msg="CreateContainer within sandbox \"66f20e738be9989ea115fde2fff2769b26bc29ae3099923fdc4751fb9349ebf3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 4 17:25:35.453034 containerd[2104]: time="2024-09-04T17:25:35.452977414Z" level=info msg="CreateContainer within sandbox \"66f20e738be9989ea115fde2fff2769b26bc29ae3099923fdc4751fb9349ebf3\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"bb3df13b8aa4d48a89d0e4d4adc4269e0fc8d93ce821ef2ec3ba6c07a4b5542e\""
Sep 4 17:25:35.456394 containerd[2104]: time="2024-09-04T17:25:35.456345968Z" level=info msg="StartContainer for \"bb3df13b8aa4d48a89d0e4d4adc4269e0fc8d93ce821ef2ec3ba6c07a4b5542e\""
Sep 4 17:25:35.581469 containerd[2104]: time="2024-09-04T17:25:35.581412500Z" level=info msg="StartContainer for \"bb3df13b8aa4d48a89d0e4d4adc4269e0fc8d93ce821ef2ec3ba6c07a4b5542e\" returns successfully"
Sep 4 17:25:35.664153 systemd[1]: run-containerd-runc-k8s.io-bb3df13b8aa4d48a89d0e4d4adc4269e0fc8d93ce821ef2ec3ba6c07a4b5542e-runc.0WDeos.mount: Deactivated successfully.
Sep 4 17:25:39.976232 containerd[2104]: time="2024-09-04T17:25:39.976151128Z" level=info msg="shim disconnected" id=2ac7e6c7a3c50ec34cfb9c3a6c27e58ac830b351b23a649601c6700b8f9a0d7c namespace=k8s.io
Sep 4 17:25:39.979968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ac7e6c7a3c50ec34cfb9c3a6c27e58ac830b351b23a649601c6700b8f9a0d7c-rootfs.mount: Deactivated successfully.
Sep 4 17:25:39.983454 containerd[2104]: time="2024-09-04T17:25:39.981145490Z" level=warning msg="cleaning up after shim disconnected" id=2ac7e6c7a3c50ec34cfb9c3a6c27e58ac830b351b23a649601c6700b8f9a0d7c namespace=k8s.io
Sep 4 17:25:39.983454 containerd[2104]: time="2024-09-04T17:25:39.981187360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:25:39.994692 containerd[2104]: time="2024-09-04T17:25:39.993621419Z" level=error msg="collecting metrics for 2ac7e6c7a3c50ec34cfb9c3a6c27e58ac830b351b23a649601c6700b8f9a0d7c" error="ttrpc: closed: unknown"
Sep 4 17:25:40.447450 kubelet[3564]: I0904 17:25:40.446729 3564 scope.go:117] "RemoveContainer" containerID="2ac7e6c7a3c50ec34cfb9c3a6c27e58ac830b351b23a649601c6700b8f9a0d7c"
Sep 4 17:25:40.465610 containerd[2104]: time="2024-09-04T17:25:40.464782268Z" level=info msg="CreateContainer within sandbox \"60c7838edc571cc7968a3ca74e78498176a798226a111fc910328f5b93193c19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:25:40.497128 containerd[2104]: time="2024-09-04T17:25:40.497074456Z" level=info msg="CreateContainer within sandbox \"60c7838edc571cc7968a3ca74e78498176a798226a111fc910328f5b93193c19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"593d91852cc767ae75a0c3726e2616ff0abd9825a49ca636f265947f10fecbbf\""
Sep 4 17:25:40.498039 containerd[2104]: time="2024-09-04T17:25:40.498008353Z" level=info msg="StartContainer for \"593d91852cc767ae75a0c3726e2616ff0abd9825a49ca636f265947f10fecbbf\""
Sep 4 17:25:40.614553 containerd[2104]: time="2024-09-04T17:25:40.614495939Z" level=info msg="StartContainer for \"593d91852cc767ae75a0c3726e2616ff0abd9825a49ca636f265947f10fecbbf\" returns successfully"
Sep 4 17:25:41.999497 kubelet[3564]: E0904 17:25:41.999432 3564 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-203?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 4 17:25:52.000234 kubelet[3564]: E0904 17:25:52.000011 3564 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-203?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"