Oct 9 07:16:43.084386 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024 Oct 9 07:16:43.084429 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:16:43.084446 kernel: BIOS-provided physical RAM map: Oct 9 07:16:43.084459 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 9 07:16:43.084471 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 9 07:16:43.084484 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 9 07:16:43.084502 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Oct 9 07:16:43.084515 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Oct 9 07:16:43.084528 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Oct 9 07:16:43.084541 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 9 07:16:43.084554 kernel: NX (Execute Disable) protection: active Oct 9 07:16:43.084590 kernel: APIC: Static calls initialized Oct 9 07:16:43.084601 kernel: SMBIOS 2.7 present. Oct 9 07:16:43.084612 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Oct 9 07:16:43.084629 kernel: Hypervisor detected: KVM Oct 9 07:16:43.084641 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 07:16:43.084653 kernel: kvm-clock: using sched offset of 6046192544 cycles Oct 9 07:16:43.084667 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 07:16:43.084680 kernel: tsc: Detected 2499.994 MHz processor Oct 9 07:16:43.084692 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 07:16:43.084705 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 07:16:43.084721 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Oct 9 07:16:43.084735 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 9 07:16:43.084747 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 07:16:43.084760 kernel: Using GB pages for direct mapping Oct 9 07:16:43.084774 kernel: ACPI: Early table checksum verification disabled Oct 9 07:16:43.084787 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Oct 9 07:16:43.084801 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Oct 9 07:16:43.084814 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 9 07:16:43.084827 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Oct 9 07:16:43.084843 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Oct 9 07:16:43.084856 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 9 07:16:43.084869 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 9 07:16:43.084882 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Oct 9 07:16:43.084896 kernel: ACPI: SLIT 
0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 9 07:16:43.084909 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Oct 9 07:16:43.084922 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Oct 9 07:16:43.084936 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 9 07:16:43.084952 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Oct 9 07:16:43.084966 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Oct 9 07:16:43.084985 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Oct 9 07:16:43.084999 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Oct 9 07:16:43.085013 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Oct 9 07:16:43.085028 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Oct 9 07:16:43.085046 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Oct 9 07:16:43.085060 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Oct 9 07:16:43.085075 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Oct 9 07:16:43.085232 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Oct 9 07:16:43.085249 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 9 07:16:43.085263 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 9 07:16:43.085277 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Oct 9 07:16:43.085291 kernel: NUMA: Initialized distance table, cnt=1 Oct 9 07:16:43.085396 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Oct 9 07:16:43.085419 kernel: Zone ranges: Oct 9 07:16:43.085434 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 07:16:43.085448 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Oct 9 07:16:43.085463 kernel: Normal empty Oct 9 07:16:43.085477 kernel: Movable zone start for each node Oct 9 07:16:43.085491 kernel: Early memory node ranges Oct 9 07:16:43.085596 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 9 07:16:43.085615 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Oct 9 07:16:43.085631 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Oct 9 07:16:43.085652 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 07:16:43.085668 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 9 07:16:43.085683 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Oct 9 07:16:43.085699 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 9 07:16:43.085714 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 07:16:43.085730 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Oct 9 07:16:43.085745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 07:16:43.085761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 07:16:43.085777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 07:16:43.085792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 07:16:43.085811 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 9 07:16:43.085827 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 9 07:16:43.085842 kernel: TSC deadline timer available Oct 9 07:16:43.085856 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 9 07:16:43.085872 kernel: kvm-guest: APIC: eoi() replaced with 
kvm_guest_apic_eoi_write() Oct 9 07:16:43.085887 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Oct 9 07:16:43.085902 kernel: Booting paravirtualized kernel on KVM Oct 9 07:16:43.085918 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 07:16:43.085934 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 9 07:16:43.085952 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 9 07:16:43.085968 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 9 07:16:43.085983 kernel: pcpu-alloc: [0] 0 1 Oct 9 07:16:43.085998 kernel: kvm-guest: PV spinlocks enabled Oct 9 07:16:43.086014 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 9 07:16:43.086031 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:16:43.086047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 07:16:43.086123 kernel: random: crng init done Oct 9 07:16:43.086143 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 07:16:43.086158 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 9 07:16:43.086173 kernel: Fallback order for Node 0: 0 Oct 9 07:16:43.086189 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Oct 9 07:16:43.086204 kernel: Policy zone: DMA32 Oct 9 07:16:43.086220 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 07:16:43.086235 kernel: Memory: 1926200K/2057760K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 131300K reserved, 0K cma-reserved) Oct 9 07:16:43.086251 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 9 07:16:43.086270 kernel: Kernel/User page tables isolation: enabled Oct 9 07:16:43.086285 kernel: ftrace: allocating 37706 entries in 148 pages Oct 9 07:16:43.086300 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 07:16:43.086351 kernel: Dynamic Preempt: voluntary Oct 9 07:16:43.086366 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 07:16:43.086383 kernel: rcu: RCU event tracing is enabled. Oct 9 07:16:43.086399 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 9 07:16:43.086835 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 07:16:43.086851 kernel: Rude variant of Tasks RCU enabled. Oct 9 07:16:43.086866 kernel: Tracing variant of Tasks RCU enabled. Oct 9 07:16:43.086886 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 9 07:16:43.086902 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 9 07:16:43.086917 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 9 07:16:43.086933 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 9 07:16:43.086949 kernel: Console: colour VGA+ 80x25 Oct 9 07:16:43.086964 kernel: printk: console [ttyS0] enabled Oct 9 07:16:43.086980 kernel: ACPI: Core revision 20230628 Oct 9 07:16:43.086995 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Oct 9 07:16:43.087011 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 07:16:43.087029 kernel: x2apic enabled Oct 9 07:16:43.087044 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 07:16:43.087072 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Oct 9 07:16:43.087091 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994) Oct 9 07:16:43.087107 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 9 07:16:43.087123 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 9 07:16:43.087139 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 07:16:43.087155 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 07:16:43.087171 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 07:16:43.087187 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 07:16:43.087204 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Oct 9 07:16:43.087220 kernel: RETBleed: Vulnerable Oct 9 07:16:43.087240 kernel: Speculative Store Bypass: Vulnerable Oct 9 07:16:43.087256 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Oct 9 07:16:43.087272 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 9 07:16:43.087288 kernel: GDS: Unknown: Dependent on hypervisor status Oct 9 07:16:43.087304 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 9 07:16:43.087321 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 9 07:16:43.087338 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 9 07:16:43.087357 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Oct 9 07:16:43.087374 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Oct 9 07:16:43.087390 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 9 07:16:43.087406 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 9 07:16:43.087423 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 9 07:16:43.087439 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Oct 9 07:16:43.087455 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 9 07:16:43.087472 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Oct 9 07:16:43.087488 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Oct 9 07:16:43.087504 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Oct 9 07:16:43.087520 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Oct 9 07:16:43.087540 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Oct 9 07:16:43.087556 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Oct 9 07:16:43.087588 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Oct 9 07:16:43.087603 kernel: Freeing SMP alternatives memory: 32K Oct 9 07:16:43.087617 kernel: pid_max: default: 32768 minimum: 301 Oct 9 07:16:43.087631 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Oct 9 07:16:43.087646 kernel: SELinux: Initializing. Oct 9 07:16:43.087661 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 07:16:43.087676 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 07:16:43.087691 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Oct 9 07:16:43.087706 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:16:43.087724 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:16:43.087740 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:16:43.087755 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Oct 9 07:16:43.087770 kernel: signal: max sigframe size: 3632 Oct 9 07:16:43.087873 kernel: rcu: Hierarchical SRCU implementation. Oct 9 07:16:43.087890 kernel: rcu: Max phase no-delay instances is 400. Oct 9 07:16:43.087906 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 9 07:16:43.087921 kernel: smp: Bringing up secondary CPUs ... Oct 9 07:16:43.087936 kernel: smpboot: x86: Booting SMP configuration: Oct 9 07:16:43.087956 kernel: .... node #0, CPUs: #1 Oct 9 07:16:43.087973 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Oct 9 07:16:43.087999 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Oct 9 07:16:43.088015 kernel: smp: Brought up 1 node, 2 CPUs Oct 9 07:16:43.088030 kernel: smpboot: Max logical packages: 1 Oct 9 07:16:43.088045 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS) Oct 9 07:16:43.088061 kernel: devtmpfs: initialized Oct 9 07:16:43.088076 kernel: x86/mm: Memory block size: 128MB Oct 9 07:16:43.088095 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 07:16:43.088111 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 9 07:16:43.088127 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 07:16:43.088143 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 07:16:43.088159 kernel: audit: initializing netlink subsys (disabled) Oct 9 07:16:43.088175 kernel: audit: type=2000 audit(1728458202.453:1): state=initialized audit_enabled=0 res=1 Oct 9 07:16:43.088190 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 07:16:43.088206 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 07:16:43.088221 kernel: cpuidle: using governor menu Oct 9 07:16:43.088238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 07:16:43.088252 kernel: dca service started, version 1.12.1 Oct 9 07:16:43.088269 kernel: PCI: Using configuration type 1 for base access Oct 9 07:16:43.088287 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 9 07:16:43.088304 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 07:16:43.088322 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 07:16:43.088340 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 07:16:43.088358 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 07:16:43.088376 kernel: ACPI: Added _OSI(Module Device) Oct 9 07:16:43.088451 kernel: ACPI: Added _OSI(Processor Device) Oct 9 07:16:43.088470 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 07:16:43.088489 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 07:16:43.088507 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Oct 9 07:16:43.088525 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 07:16:43.088543 kernel: ACPI: Interpreter enabled Oct 9 07:16:43.088586 kernel: ACPI: PM: (supports S0 S5) Oct 9 07:16:43.088603 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 07:16:43.088620 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 07:16:43.088641 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 07:16:43.088658 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Oct 9 07:16:43.088675 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 07:16:43.088916 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 9 07:16:43.089065 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 9 07:16:43.089275 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 9 07:16:43.089299 kernel: acpiphp: Slot [3] registered Oct 9 07:16:43.089317 kernel: acpiphp: Slot [4] registered Oct 9 07:16:43.089339 kernel: acpiphp: Slot [5] registered Oct 9 07:16:43.089357 kernel: acpiphp: Slot [6] registered Oct 9 07:16:43.089374 kernel: acpiphp: Slot [7] registered Oct 9 07:16:43.089392 kernel: acpiphp: Slot [8] registered Oct 9 07:16:43.089408 kernel: acpiphp: Slot [9] registered Oct 9 07:16:43.089425 kernel: acpiphp: Slot [10] registered Oct 9 07:16:43.089439 kernel: acpiphp: Slot [11] registered Oct 9 07:16:43.089454 kernel: acpiphp: Slot [12] registered Oct 9 07:16:43.089471 kernel: acpiphp: Slot [13] registered Oct 9 07:16:43.089491 kernel: acpiphp: Slot [14] registered Oct 9 07:16:43.089508 kernel: acpiphp: Slot [15] registered Oct 9 07:16:43.089524 kernel: acpiphp: Slot [16] registered Oct 9 07:16:43.089541 kernel: acpiphp: Slot [17] registered Oct 9 07:16:43.089571 kernel: acpiphp: Slot [18] registered Oct 9 07:16:43.089584 kernel: acpiphp: Slot [19] registered Oct 9 07:16:43.089596 kernel: acpiphp: Slot [20] registered Oct 9 07:16:43.089609 kernel: acpiphp: Slot [21] registered Oct 9 07:16:43.089621 kernel: acpiphp: Slot [22] registered Oct 9 07:16:43.089635 kernel: acpiphp: Slot [23] registered Oct 9 07:16:43.089653 kernel: acpiphp: Slot [24] registered Oct 9 07:16:43.089666 kernel: acpiphp: Slot [25] registered Oct 9 07:16:43.089679 kernel: acpiphp: Slot [26] registered Oct 9 07:16:43.089692 kernel: acpiphp: Slot [27] registered Oct 9 07:16:43.089707 kernel: acpiphp: Slot [28] registered Oct 9 07:16:43.089723 kernel: acpiphp: Slot [29] registered Oct 9 07:16:43.089738 kernel: acpiphp: Slot [30] registered Oct 9 07:16:43.089753 kernel: acpiphp: Slot [31] registered Oct 9 07:16:43.089769 kernel: PCI host bridge to bus 0000:00 Oct 9 07:16:43.091028 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Oct 9 07:16:43.091166 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 07:16:43.091288 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 9 07:16:43.091408 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 9 07:16:43.091527 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 07:16:43.092112 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 9 07:16:43.092540 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 9 07:16:43.092846 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Oct 9 07:16:43.093063 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 9 07:16:43.093267 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Oct 9 07:16:43.093463 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Oct 9 07:16:43.093631 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Oct 9 07:16:43.093833 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Oct 9 07:16:43.093979 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Oct 9 07:16:43.094111 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Oct 9 07:16:43.094245 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Oct 9 07:16:43.094462 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs Oct 9 07:16:43.094635 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Oct 9 07:16:43.094849 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Oct 9 07:16:43.094985 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Oct 9 07:16:43.095127 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 07:16:43.095274 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 9 07:16:43.095413 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Oct 9 07:16:43.095595 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 9 07:16:43.095740 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Oct 9 07:16:43.095762 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 07:16:43.095780 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 07:16:43.095803 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 07:16:43.095820 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 07:16:43.095897 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 9 07:16:43.095915 kernel: iommu: Default domain type: Translated Oct 9 07:16:43.095931 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 07:16:43.095948 kernel: PCI: Using ACPI for IRQ routing Oct 9 07:16:43.095965 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 07:16:43.095990 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 9 07:16:43.096007 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Oct 9 07:16:43.096245 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Oct 9 07:16:43.096395 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Oct 9 07:16:43.096535 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 07:16:43.096556 kernel: vgaarb: loaded Oct 9 07:16:43.096611 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Oct 9 07:16:43.096627 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Oct 9 07:16:43.096642 kernel: clocksource: 
Switched to clocksource kvm-clock Oct 9 07:16:43.096657 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 07:16:43.096676 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 07:16:43.096689 kernel: pnp: PnP ACPI init Oct 9 07:16:43.096704 kernel: pnp: PnP ACPI: found 5 devices Oct 9 07:16:43.096718 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 9 07:16:43.096734 kernel: NET: Registered PF_INET protocol family Oct 9 07:16:43.096749 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 07:16:43.096761 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 9 07:16:43.096776 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 07:16:43.096791 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 9 07:16:43.096858 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 9 07:16:43.096875 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 9 07:16:43.096890 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 07:16:43.096906 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 07:16:43.097139 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 07:16:43.097158 kernel: NET: Registered PF_XDP protocol family Oct 9 07:16:43.097313 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 07:16:43.097442 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 07:16:43.097677 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 07:16:43.097807 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 9 07:16:43.097954 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 9 07:16:43.097977 kernel: PCI: CLS 0 bytes, default 64 Oct 9 07:16:43.097995 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 9 07:16:43.098013 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Oct 9 07:16:43.098031 kernel: clocksource: Switched to clocksource tsc Oct 9 07:16:43.098373 kernel: Initialise system trusted keyrings Oct 9 07:16:43.098395 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 9 07:16:43.098465 kernel: Key type asymmetric registered Oct 9 07:16:43.098483 kernel: Asymmetric key parser 'x509' registered Oct 9 07:16:43.098501 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 07:16:43.098518 kernel: io scheduler mq-deadline registered Oct 9 07:16:43.098535 kernel: io scheduler kyber registered Oct 9 07:16:43.098553 kernel: io scheduler bfq registered Oct 9 07:16:43.098608 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 07:16:43.098621 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 07:16:43.098635 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 07:16:43.098654 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 07:16:43.098668 kernel: i8042: Warning: Keylock active Oct 9 07:16:43.098682 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 07:16:43.100145 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 07:16:43.100320 kernel: rtc_cmos 00:00: RTC can wake from S4 Oct 9 07:16:43.100454 kernel: rtc_cmos 00:00: registered as rtc0 Oct 9 07:16:43.100745 kernel: rtc_cmos 00:00: 
setting system clock to 2024-10-09T07:16:42 UTC (1728458202) Oct 9 07:16:43.100955 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Oct 9 07:16:43.100983 kernel: intel_pstate: CPU model not supported Oct 9 07:16:43.101027 kernel: NET: Registered PF_INET6 protocol family Oct 9 07:16:43.101043 kernel: Segment Routing with IPv6 Oct 9 07:16:43.101059 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 07:16:43.101150 kernel: NET: Registered PF_PACKET protocol family Oct 9 07:16:43.101197 kernel: Key type dns_resolver registered Oct 9 07:16:43.101213 kernel: IPI shorthand broadcast: enabled Oct 9 07:16:43.101230 kernel: sched_clock: Marking stable (770002623, 294453048)->(1166865169, -102409498) Oct 9 07:16:43.101245 kernel: registered taskstats version 1 Oct 9 07:16:43.101291 kernel: Loading compiled-in X.509 certificates Oct 9 07:16:43.101308 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76' Oct 9 07:16:43.101324 kernel: Key type .fscrypt registered Oct 9 07:16:43.101364 kernel: Key type fscrypt-provisioning registered Oct 9 07:16:43.101381 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 9 07:16:43.101396 kernel: ima: Allocated hash algorithm: sha1 Oct 9 07:16:43.101412 kernel: ima: No architecture policies found Oct 9 07:16:43.101453 kernel: clk: Disabling unused clocks Oct 9 07:16:43.101473 kernel: Freeing unused kernel image (initmem) memory: 49452K Oct 9 07:16:43.101489 kernel: Write protecting the kernel read-only data: 36864k Oct 9 07:16:43.101583 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K Oct 9 07:16:43.101629 kernel: Run /init as init process Oct 9 07:16:43.101647 kernel: with arguments: Oct 9 07:16:43.101662 kernel: /init Oct 9 07:16:43.101677 kernel: with environment: Oct 9 07:16:43.101816 kernel: HOME=/ Oct 9 07:16:43.101833 kernel: TERM=linux Oct 9 07:16:43.101873 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 07:16:43.101904 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 07:16:43.101937 systemd[1]: Detected virtualization amazon. Oct 9 07:16:43.101957 systemd[1]: Detected architecture x86-64. Oct 9 07:16:43.101974 systemd[1]: Running in initrd. Oct 9 07:16:43.101994 systemd[1]: No hostname configured, using default hostname. Oct 9 07:16:43.102011 systemd[1]: Hostname set to . Oct 9 07:16:43.102030 systemd[1]: Initializing machine ID from VM UUID. Oct 9 07:16:43.102047 systemd[1]: Queued start job for default target initrd.target. Oct 9 07:16:43.102065 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:16:43.102082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:16:43.102101 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 9 07:16:43.102119 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 07:16:43.102140 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 07:16:43.102158 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Oct 9 07:16:43.102178 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 07:16:43.102196 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 07:16:43.102213 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:16:43.102231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:16:43.102248 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:16:43.102269 systemd[1]: Reached target slices.target - Slice Units. Oct 9 07:16:43.102287 systemd[1]: Reached target swap.target - Swaps. Oct 9 07:16:43.102305 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:16:43.102323 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 07:16:43.102340 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 07:16:43.102357 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 07:16:43.102379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 07:16:43.102399 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 07:16:43.102472 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:16:43.102494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 07:16:43.102513 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:16:43.102536 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:16:43.102554 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 07:16:43.102587 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:16:43.102605 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 07:16:43.102623 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 07:16:43.102648 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:16:43.102666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:16:43.102684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:16:43.102736 systemd-journald[178]: Collecting audit messages is disabled. Oct 9 07:16:43.102778 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 07:16:43.102797 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:16:43.102815 systemd-journald[178]: Journal started Oct 9 07:16:43.103273 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2f0750772eff63635a35e10f197ab9) is 4.8M, max 38.6M, 33.8M free. Oct 9 07:16:43.108164 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 07:16:43.111322 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:16:43.124070 systemd-modules-load[179]: Inserted module 'overlay' Oct 9 07:16:43.134856 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 07:16:43.166777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 9 07:16:43.285252 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 9 07:16:43.285311 kernel: Bridge firewalling registered Oct 9 07:16:43.182057 systemd-modules-load[179]: Inserted module 'br_netfilter' Oct 9 07:16:43.286333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:16:43.292970 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:16:43.296069 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:16:43.310530 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:16:43.314078 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:16:43.319753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:16:43.322845 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:16:43.341997 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:16:43.350955 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:16:43.361822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:16:43.365786 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:16:43.379871 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 07:16:43.406658 dracut-cmdline[215]: dracut-dracut-053 Oct 9 07:16:43.410713 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:16:43.428793 systemd-resolved[207]: Positive Trust Anchors: Oct 9 07:16:43.428815 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:16:43.428865 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 9 07:16:43.442261 systemd-resolved[207]: Defaulting to hostname 'linux'. Oct 9 07:16:43.444659 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:16:43.446130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:16:43.502588 kernel: SCSI subsystem initialized Oct 9 07:16:43.514602 kernel: Loading iSCSI transport class v2.0-870. Oct 9 07:16:43.527587 kernel: iscsi: registered transport (tcp) Oct 9 07:16:43.555703 kernel: iscsi: registered transport (qla4xxx) Oct 9 07:16:43.555878 kernel: QLogic iSCSI HBA Driver Oct 9 07:16:43.597270 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 07:16:43.604762 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Oct 9 07:16:43.635729 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 07:16:43.635811 kernel: device-mapper: uevent: version 1.0.3 Oct 9 07:16:43.635832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 07:16:43.681621 kernel: raid6: avx512x4 gen() 17084 MB/s Oct 9 07:16:43.698622 kernel: raid6: avx512x2 gen() 17343 MB/s Oct 9 07:16:43.715614 kernel: raid6: avx512x1 gen() 17846 MB/s Oct 9 07:16:43.732624 kernel: raid6: avx2x4 gen() 16951 MB/s Oct 9 07:16:43.749613 kernel: raid6: avx2x2 gen() 16896 MB/s Oct 9 07:16:43.766619 kernel: raid6: avx2x1 gen() 13463 MB/s Oct 9 07:16:43.766693 kernel: raid6: using algorithm avx512x1 gen() 17846 MB/s Oct 9 07:16:43.783603 kernel: raid6: .... xor() 20932 MB/s, rmw enabled Oct 9 07:16:43.783675 kernel: raid6: using avx512x2 recovery algorithm Oct 9 07:16:43.810586 kernel: xor: automatically using best checksumming function avx Oct 9 07:16:44.020584 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 07:16:44.032104 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:16:44.037876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:16:44.065526 systemd-udevd[397]: Using default interface naming scheme 'v255'. Oct 9 07:16:44.071155 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:16:44.083811 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 07:16:44.100452 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Oct 9 07:16:44.135653 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:16:44.143790 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:16:44.201274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:16:44.208009 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 07:16:44.238850 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 07:16:44.242303 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:16:44.244677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:16:44.246205 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:16:44.255367 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 07:16:44.285935 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:16:44.306861 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 9 07:16:44.307126 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 9 07:16:44.313583 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Oct 9 07:16:44.323625 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:5a:45:2e:e5:75 Oct 9 07:16:44.323880 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 07:16:44.330501 (udev-worker)[442]: Network interface NamePolicy= disabled on kernel command line. Oct 9 07:16:44.368664 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:16:44.377694 kernel: AVX2 version of gcm_enc/dec engaged. Oct 9 07:16:44.368852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 9 07:16:44.370948 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:16:44.372640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:16:44.385679 kernel: AES CTR mode by8 optimization enabled Oct 9 07:16:44.372857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:16:44.374121 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:16:44.386344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:16:44.393587 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 9 07:16:44.393841 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 9 07:16:44.407579 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 9 07:16:44.412105 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 07:16:44.412167 kernel: GPT:9289727 != 16777215 Oct 9 07:16:44.412187 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 07:16:44.412205 kernel: GPT:9289727 != 16777215 Oct 9 07:16:44.412576 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 07:16:44.412614 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 07:16:44.537619 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (452) Oct 9 07:16:44.554589 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (451) Oct 9 07:16:44.578601 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Oct 9 07:16:44.582139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:16:44.610784 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:16:44.666912 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Oct 9 07:16:44.670660 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Oct 9 07:16:44.673172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:16:44.686132 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Oct 9 07:16:44.695043 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 9 07:16:44.702825 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 07:16:44.716102 disk-uuid[624]: Primary Header is updated. Oct 9 07:16:44.716102 disk-uuid[624]: Secondary Entries is updated. Oct 9 07:16:44.716102 disk-uuid[624]: Secondary Header is updated. Oct 9 07:16:44.720623 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 07:16:44.739720 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 07:16:44.752084 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 07:16:45.757887 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 9 07:16:45.759541 disk-uuid[625]: The operation has completed successfully. Oct 9 07:16:45.949485 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 07:16:45.949658 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 07:16:45.983766 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Oct 9 07:16:45.990329 sh[966]: Success Oct 9 07:16:46.030179 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 9 07:16:46.128041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 07:16:46.144738 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 07:16:46.147723 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 07:16:46.185921 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a Oct 9 07:16:46.185985 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:16:46.186006 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 07:16:46.186783 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 07:16:46.187953 kernel: BTRFS info (device dm-0): using free space tree Oct 9 07:16:46.273760 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 9 07:16:46.294843 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 07:16:46.298039 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 07:16:46.314849 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 07:16:46.325961 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 07:16:46.339584 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:16:46.339650 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:16:46.340892 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 9 07:16:46.346584 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 9 07:16:46.363825 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:16:46.363369 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 07:16:46.371582 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 07:16:46.384081 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 07:16:46.462040 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:16:46.467831 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:16:46.534251 systemd-networkd[1170]: lo: Link UP Oct 9 07:16:46.535013 systemd-networkd[1170]: lo: Gained carrier Oct 9 07:16:46.548533 systemd-networkd[1170]: Enumeration completed Oct 9 07:16:46.548806 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:16:46.549250 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:16:46.549255 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 07:16:46.556419 systemd[1]: Reached target network.target - Network. Oct 9 07:16:46.572001 systemd-networkd[1170]: eth0: Link UP Oct 9 07:16:46.572010 systemd-networkd[1170]: eth0: Gained carrier Oct 9 07:16:46.572022 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 9 07:16:46.593904 systemd-networkd[1170]: eth0: DHCPv4 address 172.31.23.194/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 9 07:16:46.794024 ignition[1079]: Ignition 2.18.0 Oct 9 07:16:46.794458 ignition[1079]: Stage: fetch-offline Oct 9 07:16:46.794801 ignition[1079]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:16:46.794813 ignition[1079]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 07:16:46.798364 ignition[1079]: Ignition finished successfully Oct 9 07:16:46.812183 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:16:46.826930 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 9 07:16:46.884246 ignition[1182]: Ignition 2.18.0 Oct 9 07:16:46.884303 ignition[1182]: Stage: fetch Oct 9 07:16:46.884873 ignition[1182]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:16:46.884888 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 07:16:46.885057 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 07:16:46.898943 ignition[1182]: PUT result: OK Oct 9 07:16:46.909897 ignition[1182]: parsed url from cmdline: "" Oct 9 07:16:46.909910 ignition[1182]: no config URL provided Oct 9 07:16:46.909920 ignition[1182]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:16:46.909936 ignition[1182]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:16:46.909962 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 07:16:46.913617 ignition[1182]: PUT result: OK Oct 9 07:16:46.913696 ignition[1182]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 9 07:16:46.921547 ignition[1182]: GET result: OK Oct 9 07:16:46.921659 ignition[1182]: parsing config with SHA512: 634d7db7cfdf9db8b21bd05ea680e86617d8b48d6604ada9615de46627edebbadecfc8d81a25a10f7e17a50b41a39ef2587ed2ab4e5d1d029203607a4d0146c3 Oct 9 07:16:46.935282 unknown[1182]: fetched base config from "system" Oct 9 07:16:46.935437 unknown[1182]: fetched base config from "system" Oct 9 07:16:46.936222 ignition[1182]: fetch: fetch complete Oct 9 07:16:46.935446 unknown[1182]: fetched user config from "aws" Oct 9 07:16:46.936230 ignition[1182]: fetch: fetch passed Oct 9 07:16:46.936290 ignition[1182]: Ignition finished successfully Oct 9 07:16:46.941356 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 07:16:46.951748 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 07:16:46.969622 ignition[1189]: Ignition 2.18.0 Oct 9 07:16:46.969637 ignition[1189]: Stage: kargs Oct 9 07:16:46.971124 ignition[1189]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:16:46.971143 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 07:16:46.972153 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 07:16:46.976363 ignition[1189]: PUT result: OK Oct 9 07:16:46.980142 ignition[1189]: kargs: kargs passed Oct 9 07:16:46.980227 ignition[1189]: Ignition finished successfully Oct 9 07:16:46.984060 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 07:16:46.991986 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 9 07:16:47.026841 ignition[1196]: Ignition 2.18.0 Oct 9 07:16:47.026864 ignition[1196]: Stage: disks Oct 9 07:16:47.028334 ignition[1196]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:16:47.028348 ignition[1196]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 07:16:47.028663 ignition[1196]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 07:16:47.034270 ignition[1196]: PUT result: OK Oct 9 07:16:47.037569 ignition[1196]: disks: disks passed Oct 9 07:16:47.037643 ignition[1196]: Ignition finished successfully Oct 9 07:16:47.040163 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 07:16:47.040724 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 07:16:47.050584 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 07:16:47.053548 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:16:47.053655 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:16:47.056913 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:16:47.063742 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 07:16:47.110208 systemd-fsck[1205]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 07:16:47.113834 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 07:16:47.119892 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 07:16:47.244584 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none. Oct 9 07:16:47.245318 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 07:16:47.249720 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 07:16:47.275781 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:16:47.278737 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 07:16:47.284609 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 07:16:47.285768 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 07:16:47.285806 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:16:47.294227 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 07:16:47.302042 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 07:16:47.314143 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1224) Oct 9 07:16:47.316938 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:16:47.317008 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:16:47.317031 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 9 07:16:47.323933 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 9 07:16:47.324522 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 07:16:47.588334 initrd-setup-root[1248]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 07:16:47.602290 initrd-setup-root[1255]: cut: /sysroot/etc/group: No such file or directory Oct 9 07:16:47.611474 initrd-setup-root[1262]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 07:16:47.620543 initrd-setup-root[1269]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 07:16:47.773697 systemd-networkd[1170]: eth0: Gained IPv6LL Oct 9 07:16:47.850925 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 07:16:47.858738 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 07:16:47.864200 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 07:16:47.872507 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 07:16:47.875672 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:16:47.912444 ignition[1342]: INFO : Ignition 2.18.0 Oct 9 07:16:47.914841 ignition[1342]: INFO : Stage: mount Oct 9 07:16:47.915923 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:16:47.915923 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 07:16:47.919497 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 07:16:47.923091 ignition[1342]: INFO : PUT result: OK Oct 9 07:16:47.928326 ignition[1342]: INFO : mount: mount passed Oct 9 07:16:47.928326 ignition[1342]: INFO : Ignition finished successfully Oct 9 07:16:47.930608 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 07:16:47.938756 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 07:16:47.953841 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 07:16:48.252101 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:16:48.280583 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1354) Oct 9 07:16:48.280645 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:16:48.282729 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:16:48.282796 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 9 07:16:48.287966 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 9 07:16:48.290717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 07:16:48.339420 ignition[1371]: INFO : Ignition 2.18.0 Oct 9 07:16:48.339420 ignition[1371]: INFO : Stage: files Oct 9 07:16:48.344490 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:16:48.344490 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 07:16:48.354978 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 07:16:48.357861 ignition[1371]: INFO : PUT result: OK Oct 9 07:16:48.367885 ignition[1371]: DEBUG : files: compiled without relabeling support, skipping Oct 9 07:16:48.373551 ignition[1371]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 07:16:48.373551 ignition[1371]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 07:16:48.399887 ignition[1371]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 07:16:48.406687 ignition[1371]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 07:16:48.408795 unknown[1371]: wrote ssh authorized keys file for user: core Oct 9 07:16:48.410577 ignition[1371]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 07:16:48.413822 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:16:48.416824 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 07:16:48.504878 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 07:16:48.664715 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:16:48.668456 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 9 07:16:48.670753 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 07:16:48.670753 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:16:48.675188 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:16:48.675188 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:16:48.679409 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:16:48.679409 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:16:48.679409 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:16:48.686379 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:16:48.686379 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:16:48.686379 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:16:48.694275 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:16:48.694275 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:16:48.694275 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Oct 9 07:16:49.184137 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 9 07:16:50.023329 ignition[1371]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:16:50.023329 ignition[1371]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 9 07:16:50.032660 ignition[1371]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:16:50.035046 ignition[1371]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:16:50.035046 ignition[1371]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 9 07:16:50.035046 ignition[1371]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 9 07:16:50.035046 ignition[1371]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 07:16:50.035046 ignition[1371]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:16:50.035046 ignition[1371]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:16:50.035046 ignition[1371]: INFO : files: files passed Oct 9 07:16:50.035046 ignition[1371]: INFO : Ignition finished successfully Oct 9 07:16:50.043940 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 07:16:50.068309 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 07:16:50.086283 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 07:16:50.121177 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 07:16:50.121405 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 07:16:50.134645 initrd-setup-root-after-ignition[1400]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:16:50.134645 initrd-setup-root-after-ignition[1400]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:16:50.138967 initrd-setup-root-after-ignition[1404]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:16:50.143084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 07:16:50.146925 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 07:16:50.152797 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 07:16:50.186090 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Oct 9 07:16:50.186240 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 07:16:50.189436 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 07:16:50.192181 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 07:16:50.195272 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 07:16:50.199885 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 07:16:50.238957 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:16:50.246789 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 07:16:50.265260 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:16:50.265650 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:16:50.268570 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 07:16:50.270185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 07:16:50.270355 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:16:50.272152 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 07:16:50.273005 systemd[1]: Stopped target basic.target - Basic System. Oct 9 07:16:50.273689 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 07:16:50.273918 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:16:50.274918 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 07:16:50.275127 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 07:16:50.275335 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:16:50.275755 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 07:16:50.276300 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 07:16:50.276719 systemd[1]: Stopped target swap.target - Swaps. Oct 9 07:16:50.277590 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 07:16:50.277893 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:16:50.279261 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:16:50.279513 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:16:50.279631 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 07:16:50.290921 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:16:50.292405 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 07:16:50.292534 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 07:16:50.295485 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 07:16:50.295828 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 07:16:50.297489 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 07:16:50.297638 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 07:16:50.307226 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 07:16:50.310909 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 07:16:50.325773 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Oct 9 07:16:50.326475 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:16:50.337476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 07:16:50.337712 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:16:50.368663 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 07:16:50.369137 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 07:16:50.408384 ignition[1424]: INFO : Ignition 2.18.0 Oct 9 07:16:50.408384 ignition[1424]: INFO : Stage: umount Oct 9 07:16:50.415459 ignition[1424]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:16:50.415459 ignition[1424]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 9 07:16:50.415459 ignition[1424]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 9 07:16:50.421742 ignition[1424]: INFO : PUT result: OK Oct 9 07:16:50.423366 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 07:16:50.426497 ignition[1424]: INFO : umount: umount passed Oct 9 07:16:50.426497 ignition[1424]: INFO : Ignition finished successfully Oct 9 07:16:50.427973 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 07:16:50.428134 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 07:16:50.432180 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 07:16:50.432342 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 07:16:50.434619 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 07:16:50.434766 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 07:16:50.437243 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 9 07:16:50.437288 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 9 07:16:50.441611 systemd[1]: Stopped target network.target - Network. Oct 9 07:16:50.443523 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 07:16:50.445667 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:16:50.448777 systemd[1]: Stopped target paths.target - Path Units. Oct 9 07:16:50.451337 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 07:16:50.456644 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:16:50.456775 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 07:16:50.460952 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 07:16:50.462848 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 07:16:50.462911 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 07:16:50.467689 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 07:16:50.467756 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 07:16:50.470053 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 07:16:50.470129 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 07:16:50.473594 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 07:16:50.475067 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 07:16:50.476104 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 07:16:50.478077 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 07:16:50.480538 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Oct 9 07:16:50.480667 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 07:16:50.485455 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 07:16:50.485531 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 07:16:50.496666 systemd-networkd[1170]: eth0: DHCPv6 lease lost Oct 9 07:16:50.503164 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 07:16:50.503479 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 07:16:50.505317 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 07:16:50.505519 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 07:16:50.511414 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 07:16:50.511484 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:16:50.518696 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 07:16:50.519870 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 07:16:50.520034 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:16:50.521546 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 07:16:50.521621 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:16:50.522901 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 07:16:50.522959 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 07:16:50.525550 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 07:16:50.525723 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:16:50.530040 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:16:50.550855 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 07:16:50.550981 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 07:16:50.554513 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 07:16:50.555908 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:16:50.558763 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 07:16:50.558847 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 07:16:50.561928 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 07:16:50.561980 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:16:50.566711 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 07:16:50.568129 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:16:50.570751 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 07:16:50.572135 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 07:16:50.574639 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:16:50.574722 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:16:50.583750 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 07:16:50.585860 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 07:16:50.585940 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:16:50.587873 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Oct 9 07:16:50.587941 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:16:50.594333 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 07:16:50.594429 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:16:50.600824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:16:50.600944 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:16:50.618191 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 07:16:50.618373 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 07:16:50.623853 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 07:16:50.639825 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 07:16:50.672367 systemd[1]: Switching root. Oct 9 07:16:50.717059 systemd-journald[178]: Journal stopped Oct 9 07:16:54.006718 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Oct 9 07:16:54.006823 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 07:16:54.006852 kernel: SELinux: policy capability open_perms=1 Oct 9 07:16:54.006874 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 07:16:54.006896 kernel: SELinux: policy capability always_check_network=0 Oct 9 07:16:54.007018 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 07:16:54.007050 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 07:16:54.007073 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 07:16:54.007095 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 07:16:54.007116 kernel: audit: type=1403 audit(1728458212.501:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 07:16:54.007150 systemd[1]: Successfully loaded SELinux policy in 88.155ms. Oct 9 07:16:54.007199 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.885ms. Oct 9 07:16:54.007227 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 07:16:54.007259 systemd[1]: Detected virtualization amazon. Oct 9 07:16:54.007286 systemd[1]: Detected architecture x86-64. Oct 9 07:16:54.007311 systemd[1]: Detected first boot. Oct 9 07:16:54.007337 systemd[1]: Initializing machine ID from VM UUID. Oct 9 07:16:54.007361 zram_generator::config[1467]: No configuration found. Oct 9 07:16:54.007394 systemd[1]: Populated /etc with preset unit settings. Oct 9 07:16:54.007418 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 9 07:16:54.007441 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 9 07:16:54.007464 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 9 07:16:54.007496 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 07:16:54.007522 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 07:16:54.007546 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 07:16:54.007599 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Oct 9 07:16:54.007625 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 07:16:54.007650 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 07:16:54.007673 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 07:16:54.007699 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 07:16:54.007724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:16:54.007754 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:16:54.007778 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 07:16:54.007802 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 07:16:54.007827 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 07:16:54.007849 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 07:16:54.007874 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 9 07:16:54.007898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:16:54.007924 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 9 07:16:54.007947 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 9 07:16:54.007984 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 9 07:16:54.008008 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 07:16:54.008030 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:16:54.008055 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:16:54.008079 systemd[1]: Reached target slices.target - Slice Units. Oct 9 07:16:54.008105 systemd[1]: Reached target swap.target - Swaps. Oct 9 07:16:54.008129 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 07:16:54.008156 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 07:16:54.008182 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:16:54.008206 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 07:16:54.008228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:16:54.008252 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 07:16:54.008279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 07:16:54.008303 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 07:16:54.008325 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 07:16:54.008350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:16:54.008385 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 07:16:54.008408 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 07:16:54.008433 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 9 07:16:54.008459 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 07:16:54.008484 systemd[1]: Reached target machines.target - Containers. Oct 9 07:16:54.008509 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 07:16:54.008532 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:16:54.025341 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:16:54.025404 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 07:16:54.025431 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:16:54.025450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:16:54.025469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:16:54.025498 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 07:16:54.025516 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:16:54.025535 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 07:16:54.025553 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 9 07:16:54.027134 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 9 07:16:54.027172 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 9 07:16:54.027193 systemd[1]: Stopped systemd-fsck-usr.service. Oct 9 07:16:54.027214 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:16:54.027234 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:16:54.027254 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 07:16:54.027274 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 07:16:54.027295 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:16:54.027315 systemd[1]: verity-setup.service: Deactivated successfully. Oct 9 07:16:54.027336 systemd[1]: Stopped verity-setup.service. Oct 9 07:16:54.027360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:16:54.027381 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 07:16:54.027402 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 07:16:54.027421 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 07:16:54.027442 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 07:16:54.027463 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 07:16:54.027484 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 07:16:54.027508 kernel: fuse: init (API version 7.39) Oct 9 07:16:54.027531 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:16:54.027551 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 07:16:54.027588 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Oct 9 07:16:54.027609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:16:54.027628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:16:54.027650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:16:54.027680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:16:54.027705 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 07:16:54.027731 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 07:16:54.027759 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:16:54.027783 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 07:16:54.027812 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 07:16:54.027838 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 07:16:54.027864 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 07:16:54.027933 systemd-journald[1541]: Collecting audit messages is disabled. Oct 9 07:16:54.027973 kernel: loop: module loaded Oct 9 07:16:54.028019 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 07:16:54.028046 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 07:16:54.028075 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:16:54.028102 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 07:16:54.028126 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 07:16:54.028151 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 07:16:54.042324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:16:54.042366 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 07:16:54.042388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:16:54.042413 systemd-journald[1541]: Journal started Oct 9 07:16:54.042472 systemd-journald[1541]: Runtime Journal (/run/log/journal/ec2f0750772eff63635a35e10f197ab9) is 4.8M, max 38.6M, 33.8M free. Oct 9 07:16:54.059594 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 07:16:53.510057 systemd[1]: Queued start job for default target multi-user.target. Oct 9 07:16:53.544342 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Oct 9 07:16:53.545906 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 9 07:16:54.075856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:16:54.075927 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 07:16:54.084642 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 07:16:54.090614 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:16:54.093834 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 07:16:54.095523 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 9 07:16:54.095750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:16:54.097105 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 07:16:54.099062 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 07:16:54.101503 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 07:16:54.151037 kernel: ACPI: bus type drm_connector registered Oct 9 07:16:54.154043 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:16:54.154576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:16:54.184782 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 07:16:54.186198 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:16:54.191649 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:16:54.195797 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 07:16:54.199261 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 07:16:54.217655 kernel: loop0: detected capacity change from 0 to 80568 Oct 9 07:16:54.211056 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 07:16:54.218833 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 07:16:54.223608 kernel: block loop0: the capability attribute has been deprecated. Oct 9 07:16:54.255046 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:16:54.256863 systemd-journald[1541]: Time spent on flushing to /var/log/journal/ec2f0750772eff63635a35e10f197ab9 is 46.303ms for 969 entries. Oct 9 07:16:54.256863 systemd-journald[1541]: System Journal (/var/log/journal/ec2f0750772eff63635a35e10f197ab9) is 8.0M, max 195.6M, 187.6M free. Oct 9 07:16:54.333787 systemd-journald[1541]: Received client request to flush runtime journal. Oct 9 07:16:54.333860 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 07:16:54.263363 systemd-tmpfiles[1575]: ACLs are not supported, ignoring. Oct 9 07:16:54.263386 systemd-tmpfiles[1575]: ACLs are not supported, ignoring. Oct 9 07:16:54.268293 udevadm[1602]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 9 07:16:54.276016 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 07:16:54.278378 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 07:16:54.294117 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:16:54.301900 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 07:16:54.336648 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 07:16:54.349594 kernel: loop1: detected capacity change from 0 to 60984 Oct 9 07:16:54.375438 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 07:16:54.386713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:16:54.414979 systemd-tmpfiles[1614]: ACLs are not supported, ignoring. Oct 9 07:16:54.415010 systemd-tmpfiles[1614]: ACLs are not supported, ignoring. 
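The systemd-journald entries above show the runtime journal in /run/log/journal being copied into the persistent journal under /var/log/journal once the root filesystem is writable (systemd-journal-flush.service). The same flush can be requested and inspected by hand; a small sketch, assuming journalctl is on PATH and the caller has sufficient privileges:

    import subprocess

    # Ask journald to flush /run/log/journal into /var/log/journal,
    # which is what systemd-journal-flush.service does during boot.
    subprocess.run(["journalctl", "--flush"], check=True)

    # Report how much disk space the journals occupy after the flush.
    subprocess.run(["journalctl", "--disk-usage"], check=True)
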
Oct 9 07:16:54.432601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:16:54.442768 kernel: loop2: detected capacity change from 0 to 211296 Oct 9 07:16:54.499993 kernel: loop3: detected capacity change from 0 to 139904 Oct 9 07:16:54.609742 kernel: loop4: detected capacity change from 0 to 80568 Oct 9 07:16:54.628589 kernel: loop5: detected capacity change from 0 to 60984 Oct 9 07:16:54.638599 kernel: loop6: detected capacity change from 0 to 211296 Oct 9 07:16:54.658587 kernel: loop7: detected capacity change from 0 to 139904 Oct 9 07:16:54.686790 (sd-merge)[1620]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Oct 9 07:16:54.687596 (sd-merge)[1620]: Merged extensions into '/usr'. Oct 9 07:16:54.697383 systemd[1]: Reloading requested from client PID 1574 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 07:16:54.697524 systemd[1]: Reloading... Oct 9 07:16:54.809625 zram_generator::config[1641]: No configuration found. Oct 9 07:16:55.153222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:16:55.282837 systemd[1]: Reloading finished in 584 ms. Oct 9 07:16:55.308889 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 07:16:55.321909 systemd[1]: Starting ensure-sysext.service... Oct 9 07:16:55.333797 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 9 07:16:55.350621 systemd[1]: Reloading requested from client PID 1692 ('systemctl') (unit ensure-sysext.service)... Oct 9 07:16:55.350648 systemd[1]: Reloading... Oct 9 07:16:55.401927 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 07:16:55.402691 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 07:16:55.404096 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 07:16:55.404526 systemd-tmpfiles[1693]: ACLs are not supported, ignoring. Oct 9 07:16:55.404897 systemd-tmpfiles[1693]: ACLs are not supported, ignoring. Oct 9 07:16:55.409837 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 07:16:55.409853 systemd-tmpfiles[1693]: Skipping /boot Oct 9 07:16:55.440897 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 07:16:55.440972 systemd-tmpfiles[1693]: Skipping /boot Oct 9 07:16:55.527587 zram_generator::config[1719]: No configuration found. Oct 9 07:16:55.683922 ldconfig[1570]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 07:16:55.714777 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:16:55.784885 systemd[1]: Reloading finished in 433 ms. Oct 9 07:16:55.803567 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 07:16:55.809138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:16:55.829963 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
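The (sd-merge) entries above record systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' extension images onto /usr. A small sketch that lists the places systemd-sysext looks for such images (the search directories below are the commonly documented ones; the exact set can vary by systemd version):

    from pathlib import Path

    # Hierarchies systemd-sysext scans for *.raw images or extension trees.
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for directory in SEARCH_PATHS:
        path = Path(directory)
        if path.is_dir():
            for entry in sorted(path.iterdir()):
                print(entry)

In this boot, /etc/extensions/kubernetes.raw is the symlink the Ignition files stage wrote earlier, pointing at the downloaded kubernetes-v1.29.2-x86-64.raw image.
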
Oct 9 07:16:55.901794 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 07:16:55.922848 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 07:16:55.936814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:16:55.941849 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 07:16:55.974969 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 07:16:55.990893 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:16:55.992494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:16:56.005038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:16:56.011505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:16:56.025547 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:16:56.026943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:16:56.027205 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:16:56.033580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:16:56.033920 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:16:56.037098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:16:56.040618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:16:56.041458 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:16:56.042713 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:16:56.062534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:16:56.064075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:16:56.071465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:16:56.081151 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:16:56.082789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:16:56.083166 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 07:16:56.089845 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:16:56.091404 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 07:16:56.100543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:16:56.100764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:16:56.125673 systemd[1]: Finished ensure-sysext.service. 
Oct 9 07:16:56.135601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:16:56.144973 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 07:16:56.170146 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:16:56.170684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:16:56.187251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:16:56.188499 augenrules[1802]: No rules Oct 9 07:16:56.188372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:16:56.191028 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:16:56.191701 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:16:56.195165 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:16:56.202225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:16:56.221203 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 07:16:56.223411 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 07:16:56.225805 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 07:16:56.232146 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 07:16:56.248156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:16:56.254210 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 07:16:56.300616 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 07:16:56.318768 systemd-udevd[1816]: Using default interface naming scheme 'v255'. Oct 9 07:16:56.354851 systemd-resolved[1780]: Positive Trust Anchors: Oct 9 07:16:56.354869 systemd-resolved[1780]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:16:56.354944 systemd-resolved[1780]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 9 07:16:56.360336 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:16:56.364682 systemd-resolved[1780]: Defaulting to hostname 'linux'. Oct 9 07:16:56.375752 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:16:56.379710 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:16:56.386193 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:16:56.502732 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Oct 9 07:16:56.516112 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1834) Oct 9 07:16:56.530345 systemd-networkd[1823]: lo: Link UP Oct 9 07:16:56.531767 systemd-networkd[1823]: lo: Gained carrier Oct 9 07:16:56.535535 systemd-networkd[1823]: Enumeration completed Oct 9 07:16:56.535744 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:16:56.538277 systemd[1]: Reached target network.target - Network. Oct 9 07:16:56.540220 (udev-worker)[1830]: Network interface NamePolicy= disabled on kernel command line. Oct 9 07:16:56.547841 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 07:16:56.629822 systemd-networkd[1823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:16:56.630165 systemd-networkd[1823]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 07:16:56.636385 systemd-networkd[1823]: eth0: Link UP Oct 9 07:16:56.637747 systemd-networkd[1823]: eth0: Gained carrier Oct 9 07:16:56.637780 systemd-networkd[1823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:16:56.648854 systemd-networkd[1823]: eth0: DHCPv4 address 172.31.23.194/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 9 07:16:56.657589 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 9 07:16:56.661806 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Oct 9 07:16:56.667630 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1827) Oct 9 07:16:56.689623 kernel: ACPI: button: Power Button [PWRF] Oct 9 07:16:56.694652 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Oct 9 07:16:56.705687 kernel: ACPI: button: Sleep Button [SLPF] Oct 9 07:16:56.712589 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Oct 9 07:16:56.835172 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 07:16:56.853979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:16:56.903119 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 9 07:16:56.913814 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 07:16:56.914546 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 07:16:56.918761 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 07:16:56.933096 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 07:16:56.950578 lvm[1936]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:16:56.981475 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 07:16:56.982426 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:16:56.991089 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 07:16:57.001250 lvm[1942]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:16:57.035374 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Oct 9 07:16:57.115333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:16:57.117506 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:16:57.118889 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 07:16:57.120375 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 07:16:57.121915 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 07:16:57.123421 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 07:16:57.127135 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 07:16:57.128618 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 07:16:57.128651 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:16:57.129664 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:16:57.131773 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 07:16:57.135473 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 07:16:57.143177 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 07:16:57.145349 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 07:16:57.147129 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:16:57.148540 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:16:57.149972 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:16:57.150008 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:16:57.151390 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 07:16:57.156914 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 9 07:16:57.161989 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 07:16:57.165851 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 07:16:57.172060 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 07:16:57.173950 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 07:16:57.185918 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 07:16:57.193699 systemd[1]: Started ntpd.service - Network Time Service. Oct 9 07:16:57.200027 jq[1950]: false Oct 9 07:16:57.207576 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 07:16:57.218089 systemd[1]: Starting setup-oem.service - Setup OEM... Oct 9 07:16:57.223849 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 07:16:57.229252 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 07:16:57.254935 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 07:16:57.268587 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 07:16:57.269435 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Oct 9 07:16:57.289859 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 07:16:57.303498 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 07:16:57.309368 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 07:16:57.310826 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 07:16:57.333221 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 07:16:57.333503 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 07:16:57.340339 extend-filesystems[1951]: Found loop4 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found loop5 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found loop6 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found loop7 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found nvme0n1 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found nvme0n1p1 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found nvme0n1p2 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found nvme0n1p3 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found usr Oct 9 07:16:57.342658 extend-filesystems[1951]: Found nvme0n1p4 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found nvme0n1p6 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found nvme0n1p7 Oct 9 07:16:57.342658 extend-filesystems[1951]: Found nvme0n1p9 Oct 9 07:16:57.342658 extend-filesystems[1951]: Checking size of /dev/nvme0n1p9 Oct 9 07:16:57.377112 jq[1967]: true Oct 9 07:16:57.381272 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 07:16:57.382002 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 07:16:57.420111 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:16:57.437944 extend-filesystems[1951]: Resized partition /dev/nvme0n1p9 Oct 9 07:16:57.459165 extend-filesystems[1994]: resize2fs 1.47.0 (5-Feb-2023) Oct 9 07:16:57.468787 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 9 07:16:57.480339 (ntainerd)[1982]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 07:16:57.484970 jq[1981]: true Oct 9 07:16:57.508743 update_engine[1965]: I1009 07:16:57.508394 1965 main.cc:92] Flatcar Update Engine starting Oct 9 07:16:57.513637 systemd[1]: Finished setup-oem.service - Setup OEM. Oct 9 07:16:57.540914 dbus-daemon[1949]: [system] SELinux support is enabled Oct 9 07:16:57.541421 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 07:16:57.550633 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 07:16:57.550676 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 07:16:57.553033 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 07:16:57.553516 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Oct 9 07:16:57.556945 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:45:55 UTC 2024 (1): Starting Oct 9 07:16:57.558185 tar[1974]: linux-amd64/helm Oct 9 07:16:57.558388 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:45:55 UTC 2024 (1): Starting Oct 9 07:16:57.558388 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 9 07:16:57.558388 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: ---------------------------------------------------- Oct 9 07:16:57.558388 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Oct 9 07:16:57.558388 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 9 07:16:57.558388 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: corporation. Support and training for ntp-4 are Oct 9 07:16:57.558388 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: available at https://www.nwtime.org/support Oct 9 07:16:57.558388 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: ---------------------------------------------------- Oct 9 07:16:57.556973 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 9 07:16:57.556984 ntpd[1953]: ---------------------------------------------------- Oct 9 07:16:57.556994 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Oct 9 07:16:57.557003 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 9 07:16:57.557013 ntpd[1953]: corporation. Support and training for ntp-4 are Oct 9 07:16:57.557022 ntpd[1953]: available at https://www.nwtime.org/support Oct 9 07:16:57.557033 ntpd[1953]: ---------------------------------------------------- Oct 9 07:16:57.568755 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 9 07:16:57.563290 ntpd[1953]: proto: precision = 0.079 usec (-24) Oct 9 07:16:57.568965 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: proto: precision = 0.079 usec (-24) Oct 9 07:16:57.568965 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: basedate set to 2024-09-26 Oct 9 07:16:57.568965 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: gps base set to 2024-09-29 (week 2334) Oct 9 07:16:57.565459 ntpd[1953]: basedate set to 2024-09-26 Oct 9 07:16:57.676945 coreos-metadata[1948]: Oct 09 07:16:57.666 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 9 07:16:57.676945 coreos-metadata[1948]: Oct 09 07:16:57.671 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Oct 9 07:16:57.676945 coreos-metadata[1948]: Oct 09 07:16:57.676 INFO Fetch successful Oct 9 07:16:57.676945 coreos-metadata[1948]: Oct 09 07:16:57.676 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: Listen normally on 3 eth0 172.31.23.194:123 Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: Listen normally on 4 lo [::1]:123 Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: bind(21) AF_INET6 fe80::45a:45ff:fe2e:e575%2#123 flags 0x11 failed: Cannot assign requested address Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: unable to create socket on eth0 (5) for fe80::45a:45ff:fe2e:e575%2#123 Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: failed to init interface for address 
fe80::45a:45ff:fe2e:e575%2 Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: Listening on routing socket on fd #21 for interface updates Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 9 07:16:57.677385 ntpd[1953]: 9 Oct 07:16:57 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 9 07:16:57.685107 update_engine[1965]: I1009 07:16:57.593932 1965 update_check_scheduler.cc:74] Next update check in 6m2s Oct 9 07:16:57.565480 ntpd[1953]: gps base set to 2024-09-29 (week 2334) Oct 9 07:16:57.685211 coreos-metadata[1948]: Oct 09 07:16:57.679 INFO Fetch successful Oct 9 07:16:57.685211 coreos-metadata[1948]: Oct 09 07:16:57.679 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Oct 9 07:16:57.685211 coreos-metadata[1948]: Oct 09 07:16:57.681 INFO Fetch successful Oct 9 07:16:57.685211 coreos-metadata[1948]: Oct 09 07:16:57.681 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Oct 9 07:16:57.685211 coreos-metadata[1948]: Oct 09 07:16:57.682 INFO Fetch successful Oct 9 07:16:57.685211 coreos-metadata[1948]: Oct 09 07:16:57.682 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Oct 9 07:16:57.570948 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Oct 9 07:16:57.685492 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Oct 9 07:16:57.690410 coreos-metadata[1948]: Oct 09 07:16:57.686 INFO Fetch failed with 404: resource not found Oct 9 07:16:57.690410 coreos-metadata[1948]: Oct 09 07:16:57.686 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Oct 9 07:16:57.690410 coreos-metadata[1948]: Oct 09 07:16:57.687 INFO Fetch successful Oct 9 07:16:57.690410 coreos-metadata[1948]: Oct 09 07:16:57.687 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Oct 9 07:16:57.571004 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 9 07:16:57.697829 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1824) Oct 9 07:16:57.689969 systemd[1]: Started update-engine.service - Update Engine. Oct 9 07:16:57.571202 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Oct 9 07:16:57.698062 extend-filesystems[1994]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 9 07:16:57.698062 extend-filesystems[1994]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 07:16:57.698062 extend-filesystems[1994]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Oct 9 07:16:57.709751 coreos-metadata[1948]: Oct 09 07:16:57.693 INFO Fetch successful Oct 9 07:16:57.709751 coreos-metadata[1948]: Oct 09 07:16:57.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Oct 9 07:16:57.709751 coreos-metadata[1948]: Oct 09 07:16:57.695 INFO Fetch successful Oct 9 07:16:57.709751 coreos-metadata[1948]: Oct 09 07:16:57.695 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Oct 9 07:16:57.709751 coreos-metadata[1948]: Oct 09 07:16:57.697 INFO Fetch successful Oct 9 07:16:57.709751 coreos-metadata[1948]: Oct 09 07:16:57.697 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Oct 9 07:16:57.709751 coreos-metadata[1948]: Oct 09 07:16:57.703 INFO Fetch successful Oct 9 07:16:57.571238 ntpd[1953]: Listen normally on 3 eth0 172.31.23.194:123 Oct 9 07:16:57.698751 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 07:16:57.710154 extend-filesystems[1951]: Resized filesystem in /dev/nvme0n1p9 Oct 9 07:16:57.571364 ntpd[1953]: Listen normally on 4 lo [::1]:123 Oct 9 07:16:57.705010 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 07:16:57.733274 bash[2023]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:16:57.571417 ntpd[1953]: bind(21) AF_INET6 fe80::45a:45ff:fe2e:e575%2#123 flags 0x11 failed: Cannot assign requested address Oct 9 07:16:57.705236 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 07:16:57.571441 ntpd[1953]: unable to create socket on eth0 (5) for fe80::45a:45ff:fe2e:e575%2#123 Oct 9 07:16:57.726582 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 07:16:57.571456 ntpd[1953]: failed to init interface for address fe80::45a:45ff:fe2e:e575%2 Oct 9 07:16:57.571487 ntpd[1953]: Listening on routing socket on fd #21 for interface updates Oct 9 07:16:57.574730 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 9 07:16:57.574765 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 9 07:16:57.581864 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1823 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 9 07:16:57.752276 systemd[1]: Starting sshkeys.service... Oct 9 07:16:57.767528 systemd-logind[1961]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 07:16:57.767922 systemd-logind[1961]: Watching system buttons on /dev/input/event2 (Sleep Button) Oct 9 07:16:57.768018 systemd-logind[1961]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 07:16:57.768313 systemd-logind[1961]: New seat seat0. Oct 9 07:16:57.769260 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 07:16:57.821876 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 9 07:16:57.823946 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 07:16:57.862312 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 9 07:16:57.874767 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Oct 9 07:16:57.884900 systemd-networkd[1823]: eth0: Gained IPv6LL Oct 9 07:16:57.936070 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 07:16:57.942230 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 07:16:57.952941 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Oct 9 07:16:57.971109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:16:57.980871 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 07:16:58.040933 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 9 07:16:58.041222 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 9 07:16:58.059807 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2030 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 9 07:16:58.071496 coreos-metadata[2045]: Oct 09 07:16:58.071 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 9 07:16:58.072651 coreos-metadata[2045]: Oct 09 07:16:58.072 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Oct 9 07:16:58.073355 coreos-metadata[2045]: Oct 09 07:16:58.073 INFO Fetch successful Oct 9 07:16:58.073476 coreos-metadata[2045]: Oct 09 07:16:58.073 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 9 07:16:58.081608 systemd[1]: Starting polkit.service - Authorization Manager... Oct 9 07:16:58.085888 coreos-metadata[2045]: Oct 09 07:16:58.084 INFO Fetch successful Oct 9 07:16:58.124318 unknown[2045]: wrote ssh authorized keys file for user: core Oct 9 07:16:58.142717 polkitd[2087]: Started polkitd version 121 Oct 9 07:16:58.217314 polkitd[2087]: Loading rules from directory /etc/polkit-1/rules.d Oct 9 07:16:58.233841 polkitd[2087]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 9 07:16:58.249208 polkitd[2087]: Finished loading, compiling and executing 2 rules Oct 9 07:16:58.258202 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 9 07:16:58.260975 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 07:16:58.263190 systemd[1]: Started polkit.service - Authorization Manager. Oct 9 07:16:58.266228 polkitd[2087]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 9 07:16:58.297507 update-ssh-keys[2098]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:16:58.298263 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 9 07:16:58.314860 systemd[1]: Finished sshkeys.service. Oct 9 07:16:58.415924 amazon-ssm-agent[2067]: Initializing new seelog logger Oct 9 07:16:58.426595 amazon-ssm-agent[2067]: New Seelog Logger Creation Complete Oct 9 07:16:58.426595 amazon-ssm-agent[2067]: 2024/10/09 07:16:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 07:16:58.426595 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 07:16:58.426595 amazon-ssm-agent[2067]: 2024/10/09 07:16:58 processing appconfig overrides Oct 9 07:16:58.439676 systemd-hostnamed[2030]: Hostname set to (transient) Oct 9 07:16:58.443997 amazon-ssm-agent[2067]: 2024/10/09 07:16:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Oct 9 07:16:58.443997 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 07:16:58.443997 amazon-ssm-agent[2067]: 2024/10/09 07:16:58 processing appconfig overrides Oct 9 07:16:58.443997 amazon-ssm-agent[2067]: 2024/10/09 07:16:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 07:16:58.443997 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 07:16:58.443997 amazon-ssm-agent[2067]: 2024/10/09 07:16:58 processing appconfig overrides Oct 9 07:16:58.445633 systemd-resolved[1780]: System hostname changed to 'ip-172-31-23-194'. Oct 9 07:16:58.468614 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO Proxy environment variables: Oct 9 07:16:58.473325 amazon-ssm-agent[2067]: 2024/10/09 07:16:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 07:16:58.473325 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 9 07:16:58.473325 amazon-ssm-agent[2067]: 2024/10/09 07:16:58 processing appconfig overrides Oct 9 07:16:58.551280 locksmithd[2031]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 07:16:58.566807 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO http_proxy: Oct 9 07:16:58.668176 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO no_proxy: Oct 9 07:16:58.732583 containerd[1982]: time="2024-10-09T07:16:58.731973521Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 9 07:16:58.777540 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO https_proxy: Oct 9 07:16:58.875498 containerd[1982]: time="2024-10-09T07:16:58.875386530Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 07:16:58.879770 containerd[1982]: time="2024-10-09T07:16:58.879493605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:16:58.880124 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO Checking if agent identity type OnPrem can be assumed Oct 9 07:16:58.887467 containerd[1982]: time="2024-10-09T07:16:58.887412003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:16:58.887826 containerd[1982]: time="2024-10-09T07:16:58.887802969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.888291972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.888321221Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.888412604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.888472485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.888489088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.888580409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.888825963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.888850134Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.888867053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.889084742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:16:58.889446 containerd[1982]: time="2024-10-09T07:16:58.889110703Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 07:16:58.889903 containerd[1982]: time="2024-10-09T07:16:58.889269165Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 9 07:16:58.889903 containerd[1982]: time="2024-10-09T07:16:58.889289190Z" level=info msg="metadata content store policy set" policy=shared Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.905648525Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.905701716Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.905722074Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.905768203Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.905793425Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.905809342Z" level=info msg="NRI interface is disabled by configuration." Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.905829203Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.906005952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.906027968Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.906048386Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.906068604Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.906091510Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.906118853Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 07:16:58.907951 containerd[1982]: time="2024-10-09T07:16:58.906139392Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906158773Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906179960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906207204Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906227260Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906245446Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906371839Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906719740Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906772578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906815644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906863849Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906934023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906953696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906972157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.908522 containerd[1982]: time="2024-10-09T07:16:58.906989256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907007978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907027289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907044973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907062777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907083233Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907255892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907276754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907294422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907314288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907334297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907355506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907373749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 07:16:58.909455 containerd[1982]: time="2024-10-09T07:16:58.907391664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 07:16:58.910007 containerd[1982]: time="2024-10-09T07:16:58.907770686Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:16:58.910007 containerd[1982]: time="2024-10-09T07:16:58.907859104Z" level=info msg="Connect containerd service" Oct 9 07:16:58.910007 containerd[1982]: time="2024-10-09T07:16:58.907901227Z" level=info msg="using legacy CRI server" Oct 9 07:16:58.910007 containerd[1982]: time="2024-10-09T07:16:58.907911163Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:16:58.918154 containerd[1982]: time="2024-10-09T07:16:58.908037423Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:16:58.926935 containerd[1982]: time="2024-10-09T07:16:58.926873156Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:16:58.927254 
containerd[1982]: time="2024-10-09T07:16:58.926960651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 07:16:58.927254 containerd[1982]: time="2024-10-09T07:16:58.926996282Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 07:16:58.927254 containerd[1982]: time="2024-10-09T07:16:58.927018013Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 07:16:58.927254 containerd[1982]: time="2024-10-09T07:16:58.927042537Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 07:16:58.928139 containerd[1982]: time="2024-10-09T07:16:58.927440616Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:16:58.928139 containerd[1982]: time="2024-10-09T07:16:58.927511353Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 07:16:58.929951 containerd[1982]: time="2024-10-09T07:16:58.929290188Z" level=info msg="Start subscribing containerd event" Oct 9 07:16:58.929951 containerd[1982]: time="2024-10-09T07:16:58.929436878Z" level=info msg="Start recovering state" Oct 9 07:16:58.929951 containerd[1982]: time="2024-10-09T07:16:58.929545854Z" level=info msg="Start event monitor" Oct 9 07:16:58.929951 containerd[1982]: time="2024-10-09T07:16:58.929611034Z" level=info msg="Start snapshots syncer" Oct 9 07:16:58.929951 containerd[1982]: time="2024-10-09T07:16:58.929632701Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:16:58.929951 containerd[1982]: time="2024-10-09T07:16:58.929650350Z" level=info msg="Start streaming server" Oct 9 07:16:58.929951 containerd[1982]: time="2024-10-09T07:16:58.929733621Z" level=info msg="containerd successfully booted in 0.211198s" Oct 9 07:16:58.929844 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 07:16:59.002450 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO Checking if agent identity type EC2 can be assumed Oct 9 07:16:59.102905 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO Agent will take identity from EC2 Oct 9 07:16:59.201637 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 9 07:16:59.303498 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 9 07:16:59.416418 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 9 07:16:59.517643 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Oct 9 07:16:59.620138 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Oct 9 07:16:59.719683 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO [amazon-ssm-agent] Starting Core Agent Oct 9 07:16:59.827374 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO [amazon-ssm-agent] registrar detected. Attempting registration Oct 9 07:16:59.896622 tar[1974]: linux-amd64/LICENSE Oct 9 07:16:59.896622 tar[1974]: linux-amd64/README.md Oct 9 07:16:59.929613 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO [Registrar] Starting registrar module Oct 9 07:16:59.944880 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 9 07:16:59.979149 sshd_keygen[1983]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 07:17:00.022162 amazon-ssm-agent[2067]: 2024-10-09 07:16:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Oct 9 07:17:00.022162 amazon-ssm-agent[2067]: 2024-10-09 07:16:59 INFO [EC2Identity] EC2 registration was successful. Oct 9 07:17:00.022162 amazon-ssm-agent[2067]: 2024-10-09 07:16:59 INFO [CredentialRefresher] credentialRefresher has started Oct 9 07:17:00.022162 amazon-ssm-agent[2067]: 2024-10-09 07:16:59 INFO [CredentialRefresher] Starting credentials refresher loop Oct 9 07:17:00.022162 amazon-ssm-agent[2067]: 2024-10-09 07:17:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials Oct 9 07:17:00.029052 amazon-ssm-agent[2067]: 2024-10-09 07:17:00 INFO [CredentialRefresher] Next credential rotation will be in 31.174995383016668 minutes Oct 9 07:17:00.030335 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 07:17:00.041085 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 07:17:00.059213 systemd[1]: Started sshd@0-172.31.23.194:22-139.178.89.65:49994.service - OpenSSH per-connection server daemon (139.178.89.65:49994). Oct 9 07:17:00.065440 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 07:17:00.066772 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 07:17:00.077183 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 07:17:00.117423 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 07:17:00.138989 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 07:17:00.172666 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 07:17:00.174641 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 07:17:00.541406 sshd[2184]: Accepted publickey for core from 139.178.89.65 port 49994 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:17:00.548457 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:00.558595 ntpd[1953]: Listen normally on 6 eth0 [fe80::45a:45ff:fe2e:e575%2]:123 Oct 9 07:17:00.564127 ntpd[1953]: 9 Oct 07:17:00 ntpd[1953]: Listen normally on 6 eth0 [fe80::45a:45ff:fe2e:e575%2]:123 Oct 9 07:17:00.604320 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:17:00.630085 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:17:00.696734 systemd-logind[1961]: New session 1 of user core. Oct 9 07:17:00.765051 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:17:00.799075 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:17:00.848695 (systemd)[2195]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:01.059802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:01.081457 systemd[1]: Reached target multi-user.target - Multi-User System. 
Oct 9 07:17:01.176078 amazon-ssm-agent[2067]: 2024-10-09 07:17:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Oct 9 07:17:01.188308 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:17:01.291699 amazon-ssm-agent[2067]: 2024-10-09 07:17:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2208) started Oct 9 07:17:01.382761 amazon-ssm-agent[2067]: 2024-10-09 07:17:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Oct 9 07:17:01.487518 systemd[2195]: Queued start job for default target default.target. Oct 9 07:17:01.504615 systemd[2195]: Created slice app.slice - User Application Slice. Oct 9 07:17:01.504663 systemd[2195]: Reached target paths.target - Paths. Oct 9 07:17:01.504684 systemd[2195]: Reached target timers.target - Timers. Oct 9 07:17:01.518921 systemd[2195]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:17:01.583273 systemd[2195]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:17:01.583495 systemd[2195]: Reached target sockets.target - Sockets. Oct 9 07:17:01.583519 systemd[2195]: Reached target basic.target - Basic System. Oct 9 07:17:01.588826 systemd[2195]: Reached target default.target - Main User Target. Oct 9 07:17:01.588896 systemd[2195]: Startup finished in 711ms. Oct 9 07:17:01.589267 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:17:01.604230 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:17:01.605831 systemd[1]: Startup finished in 980ms (kernel) + 9.678s (initrd) + 9.189s (userspace) = 19.847s. Oct 9 07:17:01.990810 systemd[1]: Started sshd@1-172.31.23.194:22-139.178.89.65:49996.service - OpenSSH per-connection server daemon (139.178.89.65:49996). Oct 9 07:17:02.270550 sshd[2232]: Accepted publickey for core from 139.178.89.65 port 49996 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:17:02.273100 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:02.281667 systemd-logind[1961]: New session 2 of user core. Oct 9 07:17:02.291009 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:17:02.431823 sshd[2232]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:02.435842 systemd[1]: sshd@1-172.31.23.194:22-139.178.89.65:49996.service: Deactivated successfully. Oct 9 07:17:02.440201 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:17:02.442695 systemd-logind[1961]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:17:02.444518 systemd-logind[1961]: Removed session 2. Oct 9 07:17:02.474152 systemd[1]: Started sshd@2-172.31.23.194:22-139.178.89.65:50006.service - OpenSSH per-connection server daemon (139.178.89.65:50006). Oct 9 07:17:02.657473 sshd[2239]: Accepted publickey for core from 139.178.89.65 port 50006 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:17:02.658139 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:02.664868 systemd-logind[1961]: New session 3 of user core. Oct 9 07:17:02.669815 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 9 07:17:02.790499 sshd[2239]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:02.797784 systemd-logind[1961]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:17:02.799965 systemd[1]: sshd@2-172.31.23.194:22-139.178.89.65:50006.service: Deactivated successfully. Oct 9 07:17:02.804069 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 07:17:02.806088 systemd-logind[1961]: Removed session 3. Oct 9 07:17:02.824112 systemd[1]: Started sshd@3-172.31.23.194:22-139.178.89.65:50012.service - OpenSSH per-connection server daemon (139.178.89.65:50012). Oct 9 07:17:02.837983 kubelet[2206]: E1009 07:17:02.837902 2206 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:17:02.842343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:17:02.842544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:17:02.843362 systemd[1]: kubelet.service: Consumed 1.128s CPU time. Oct 9 07:17:03.001071 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 50012 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:17:03.002540 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:03.007531 systemd-logind[1961]: New session 4 of user core. Oct 9 07:17:03.021178 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:17:03.150131 sshd[2247]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:03.154067 systemd[1]: sshd@3-172.31.23.194:22-139.178.89.65:50012.service: Deactivated successfully. Oct 9 07:17:03.160679 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:17:03.162996 systemd-logind[1961]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:17:03.168138 systemd-logind[1961]: Removed session 4. Oct 9 07:17:03.183758 systemd[1]: Started sshd@4-172.31.23.194:22-139.178.89.65:50020.service - OpenSSH per-connection server daemon (139.178.89.65:50020). Oct 9 07:17:03.365158 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 50020 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:17:03.366983 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:03.373037 systemd-logind[1961]: New session 5 of user core. Oct 9 07:17:03.380823 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:17:03.520026 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:17:03.520914 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:17:03.532245 sudo[2258]: pam_unix(sudo:session): session closed for user root Oct 9 07:17:03.555252 sshd[2255]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:03.570972 systemd[1]: sshd@4-172.31.23.194:22-139.178.89.65:50020.service: Deactivated successfully. Oct 9 07:17:03.578055 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:17:03.579133 systemd-logind[1961]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:17:03.600969 systemd[1]: Started sshd@5-172.31.23.194:22-139.178.89.65:50026.service - OpenSSH per-connection server daemon (139.178.89.65:50026). Oct 9 07:17:03.602995 systemd-logind[1961]: Removed session 5. 
Oct 9 07:17:03.772916 sshd[2263]: Accepted publickey for core from 139.178.89.65 port 50026 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:17:03.774476 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:03.791784 systemd-logind[1961]: New session 6 of user core. Oct 9 07:17:03.808843 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:17:03.919041 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:17:03.919699 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:17:03.945544 sudo[2267]: pam_unix(sudo:session): session closed for user root Oct 9 07:17:03.973077 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:17:03.976989 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:17:04.009004 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:17:04.012790 auditctl[2270]: No rules Oct 9 07:17:04.013203 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:17:04.013428 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:17:04.039651 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:17:04.114297 augenrules[2288]: No rules Oct 9 07:17:04.117120 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:17:04.118553 sudo[2266]: pam_unix(sudo:session): session closed for user root Oct 9 07:17:04.145446 sshd[2263]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:04.151999 systemd[1]: sshd@5-172.31.23.194:22-139.178.89.65:50026.service: Deactivated successfully. Oct 9 07:17:04.155512 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:17:04.158348 systemd-logind[1961]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:17:04.160233 systemd-logind[1961]: Removed session 6. Oct 9 07:17:04.185995 systemd[1]: Started sshd@6-172.31.23.194:22-139.178.89.65:51942.service - OpenSSH per-connection server daemon (139.178.89.65:51942). Oct 9 07:17:04.366165 sshd[2296]: Accepted publickey for core from 139.178.89.65 port 51942 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:17:04.375154 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:04.394985 systemd-logind[1961]: New session 7 of user core. Oct 9 07:17:04.401914 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:17:04.500906 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:17:04.501429 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:17:05.596457 systemd-resolved[1780]: Clock change detected. Flushing caches. Oct 9 07:17:05.764955 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:17:05.767576 (dockerd)[2308]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:17:06.387501 dockerd[2308]: time="2024-10-09T07:17:06.387434337Z" level=info msg="Starting up" Oct 9 07:17:06.984487 dockerd[2308]: time="2024-10-09T07:17:06.984439775Z" level=info msg="Loading containers: start." 
Oct 9 07:17:07.171387 kernel: Initializing XFRM netlink socket Oct 9 07:17:07.215512 (udev-worker)[2321]: Network interface NamePolicy= disabled on kernel command line. Oct 9 07:17:07.299126 systemd-networkd[1823]: docker0: Link UP Oct 9 07:17:07.330985 dockerd[2308]: time="2024-10-09T07:17:07.330947231Z" level=info msg="Loading containers: done." Oct 9 07:17:07.458568 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3501521223-merged.mount: Deactivated successfully. Oct 9 07:17:07.478787 dockerd[2308]: time="2024-10-09T07:17:07.478672064Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:17:07.479300 dockerd[2308]: time="2024-10-09T07:17:07.479066928Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 9 07:17:07.479300 dockerd[2308]: time="2024-10-09T07:17:07.479203304Z" level=info msg="Daemon has completed initialization" Oct 9 07:17:07.556667 dockerd[2308]: time="2024-10-09T07:17:07.556086369Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:17:07.557740 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:17:09.024198 containerd[1982]: time="2024-10-09T07:17:09.023372863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 07:17:09.772196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725932839.mount: Deactivated successfully. Oct 9 07:17:13.370253 containerd[1982]: time="2024-10-09T07:17:13.370198269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:13.371846 containerd[1982]: time="2024-10-09T07:17:13.371796795Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 9 07:17:13.374619 containerd[1982]: time="2024-10-09T07:17:13.373240994Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:13.376537 containerd[1982]: time="2024-10-09T07:17:13.376504565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:13.377897 containerd[1982]: time="2024-10-09T07:17:13.377760236Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 4.354348801s" Oct 9 07:17:13.378040 containerd[1982]: time="2024-10-09T07:17:13.378017131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 07:17:13.415817 containerd[1982]: time="2024-10-09T07:17:13.415779807Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 07:17:13.948806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Oct 9 07:17:13.964017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:15.700292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:15.713980 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:17:16.039305 kubelet[2507]: E1009 07:17:16.039123 2507 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:17:16.046232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:17:16.046871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:17:17.710276 containerd[1982]: time="2024-10-09T07:17:17.705462437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:17.718756 containerd[1982]: time="2024-10-09T07:17:17.718170150Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 9 07:17:17.735386 containerd[1982]: time="2024-10-09T07:17:17.735292531Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:17.746611 containerd[1982]: time="2024-10-09T07:17:17.746523258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:17.748875 containerd[1982]: time="2024-10-09T07:17:17.748711864Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 4.332889117s" Oct 9 07:17:17.748875 containerd[1982]: time="2024-10-09T07:17:17.748766727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 07:17:17.790499 containerd[1982]: time="2024-10-09T07:17:17.790461220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 07:17:20.656856 containerd[1982]: time="2024-10-09T07:17:20.656658644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:20.699671 containerd[1982]: time="2024-10-09T07:17:20.699577494Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 9 07:17:20.753722 containerd[1982]: time="2024-10-09T07:17:20.753632813Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:20.823665 containerd[1982]: time="2024-10-09T07:17:20.822527938Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:20.839309 containerd[1982]: time="2024-10-09T07:17:20.827991657Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 3.03748429s" Oct 9 07:17:20.839309 containerd[1982]: time="2024-10-09T07:17:20.828474126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 07:17:20.883472 containerd[1982]: time="2024-10-09T07:17:20.883425757Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 07:17:22.801048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount812765886.mount: Deactivated successfully. Oct 9 07:17:23.652135 containerd[1982]: time="2024-10-09T07:17:23.652083293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:23.653634 containerd[1982]: time="2024-10-09T07:17:23.653483071Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 9 07:17:23.655636 containerd[1982]: time="2024-10-09T07:17:23.655280907Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:23.660968 containerd[1982]: time="2024-10-09T07:17:23.660912676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:23.662063 containerd[1982]: time="2024-10-09T07:17:23.662022528Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 2.778544858s" Oct 9 07:17:23.662342 containerd[1982]: time="2024-10-09T07:17:23.662227546Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 9 07:17:23.698941 containerd[1982]: time="2024-10-09T07:17:23.698903198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:17:24.390427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937776248.mount: Deactivated successfully. Oct 9 07:17:26.199850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 07:17:26.212789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 07:17:26.705038 containerd[1982]: time="2024-10-09T07:17:26.704976342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:26.741628 containerd[1982]: time="2024-10-09T07:17:26.741551211Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 07:17:26.756378 containerd[1982]: time="2024-10-09T07:17:26.756300376Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:26.787166 containerd[1982]: time="2024-10-09T07:17:26.787071754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:26.793774 containerd[1982]: time="2024-10-09T07:17:26.793220349Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.094273503s" Oct 9 07:17:26.793774 containerd[1982]: time="2024-10-09T07:17:26.793281005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 07:17:26.857467 containerd[1982]: time="2024-10-09T07:17:26.857395140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 07:17:28.086312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4246668032.mount: Deactivated successfully. 
Oct 9 07:17:28.100387 containerd[1982]: time="2024-10-09T07:17:28.098633291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:28.101535 containerd[1982]: time="2024-10-09T07:17:28.101476436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 07:17:28.103659 containerd[1982]: time="2024-10-09T07:17:28.103614412Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:28.115835 containerd[1982]: time="2024-10-09T07:17:28.115672556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.258226885s" Oct 9 07:17:28.115835 containerd[1982]: time="2024-10-09T07:17:28.115722821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 07:17:28.116499 containerd[1982]: time="2024-10-09T07:17:28.116230963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:28.147228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:28.159902 (kubelet)[2607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:17:28.184520 containerd[1982]: time="2024-10-09T07:17:28.184456806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 07:17:28.234457 kubelet[2607]: E1009 07:17:28.234397 2607 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:17:28.237802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:17:28.238005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:17:28.933396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount781722234.mount: Deactivated successfully. Oct 9 07:17:29.517956 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Oct 9 07:17:33.648647 containerd[1982]: time="2024-10-09T07:17:33.648583450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:33.650382 containerd[1982]: time="2024-10-09T07:17:33.650206512Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Oct 9 07:17:33.652426 containerd[1982]: time="2024-10-09T07:17:33.652387724Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:33.656457 containerd[1982]: time="2024-10-09T07:17:33.656128582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:33.659151 containerd[1982]: time="2024-10-09T07:17:33.657475269Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.472633664s" Oct 9 07:17:33.659151 containerd[1982]: time="2024-10-09T07:17:33.657525067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 9 07:17:37.450435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:37.458734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:37.493508 systemd[1]: Reloading requested from client PID 2732 ('systemctl') (unit session-7.scope)... Oct 9 07:17:37.493531 systemd[1]: Reloading... Oct 9 07:17:37.609384 zram_generator::config[2768]: No configuration found. Oct 9 07:17:37.755036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:17:37.866506 systemd[1]: Reloading finished in 372 ms. Oct 9 07:17:37.936612 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 07:17:37.936745 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 07:17:37.937307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:37.941760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:38.592911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:38.604468 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:17:38.683086 kubelet[2827]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:17:38.683507 kubelet[2827]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Oct 9 07:17:38.683553 kubelet[2827]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:17:38.686560 kubelet[2827]: I1009 07:17:38.686513 2827 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:17:39.445192 kubelet[2827]: I1009 07:17:39.444842 2827 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:17:39.445192 kubelet[2827]: I1009 07:17:39.445186 2827 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:17:39.445650 kubelet[2827]: I1009 07:17:39.445625 2827 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:17:39.481390 kubelet[2827]: E1009 07:17:39.481089 2827 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:39.481390 kubelet[2827]: I1009 07:17:39.481153 2827 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:17:39.504547 kubelet[2827]: I1009 07:17:39.504504 2827 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:17:39.506574 kubelet[2827]: I1009 07:17:39.506512 2827 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:17:39.509717 kubelet[2827]: I1009 07:17:39.509675 2827 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:17:39.509717 kubelet[2827]: I1009 07:17:39.509720 2827 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:17:39.510030 kubelet[2827]: I1009 07:17:39.509735 2827 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 
07:17:39.510030 kubelet[2827]: I1009 07:17:39.509939 2827 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:17:39.510113 kubelet[2827]: I1009 07:17:39.510084 2827 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:17:39.510113 kubelet[2827]: I1009 07:17:39.510104 2827 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:17:39.511795 kubelet[2827]: I1009 07:17:39.510993 2827 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:17:39.511795 kubelet[2827]: I1009 07:17:39.511027 2827 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:17:39.515103 kubelet[2827]: W1009 07:17:39.514912 2827 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-194&limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:39.515211 kubelet[2827]: E1009 07:17:39.515124 2827 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-194&limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:39.517370 kubelet[2827]: W1009 07:17:39.517121 2827 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:39.517370 kubelet[2827]: E1009 07:17:39.517194 2827 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:39.519029 kubelet[2827]: I1009 07:17:39.518667 2827 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:17:39.527708 kubelet[2827]: I1009 07:17:39.527381 2827 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:17:39.531380 kubelet[2827]: W1009 07:17:39.530971 2827 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 9 07:17:39.532632 kubelet[2827]: I1009 07:17:39.532601 2827 server.go:1256] "Started kubelet" Oct 9 07:17:39.533429 kubelet[2827]: I1009 07:17:39.533396 2827 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:17:39.534686 kubelet[2827]: I1009 07:17:39.534668 2827 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:17:39.542167 kubelet[2827]: I1009 07:17:39.542134 2827 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:17:39.544379 kubelet[2827]: I1009 07:17:39.544334 2827 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:17:39.544583 kubelet[2827]: I1009 07:17:39.544560 2827 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:17:39.548835 kubelet[2827]: E1009 07:17:39.548760 2827 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.194:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.194:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-194.17fcb7a118863a3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-194,UID:ip-172-31-23-194,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-194,},FirstTimestamp:2024-10-09 07:17:39.532495421 +0000 UTC m=+0.909851497,LastTimestamp:2024-10-09 07:17:39.532495421 +0000 UTC m=+0.909851497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-194,}" Oct 9 07:17:39.554397 kubelet[2827]: I1009 07:17:39.553419 2827 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:17:39.554397 kubelet[2827]: I1009 07:17:39.554088 2827 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:17:39.554397 kubelet[2827]: I1009 07:17:39.554172 2827 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:17:39.555108 kubelet[2827]: W1009 07:17:39.554990 2827 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:39.555291 kubelet[2827]: E1009 07:17:39.555227 2827 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:39.555675 kubelet[2827]: E1009 07:17:39.555378 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-194?timeout=10s\": dial tcp 172.31.23.194:6443: connect: connection refused" interval="200ms" Oct 9 07:17:39.558919 kubelet[2827]: I1009 07:17:39.558889 2827 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:17:39.562777 kubelet[2827]: E1009 07:17:39.561761 2827 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:17:39.562777 kubelet[2827]: I1009 07:17:39.562195 2827 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:17:39.562777 kubelet[2827]: I1009 07:17:39.562207 2827 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:17:39.587435 kubelet[2827]: I1009 07:17:39.587402 2827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:17:39.601512 kubelet[2827]: I1009 07:17:39.601486 2827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:17:39.601815 kubelet[2827]: I1009 07:17:39.601799 2827 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:17:39.602029 kubelet[2827]: I1009 07:17:39.602015 2827 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:17:39.602170 kubelet[2827]: E1009 07:17:39.602158 2827 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:17:39.602805 kubelet[2827]: I1009 07:17:39.602785 2827 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:17:39.603163 kubelet[2827]: I1009 07:17:39.603143 2827 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:17:39.603596 kubelet[2827]: I1009 07:17:39.603583 2827 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:17:39.603908 kubelet[2827]: W1009 07:17:39.603482 2827 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:39.604039 kubelet[2827]: E1009 07:17:39.604027 2827 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:39.636186 kubelet[2827]: I1009 07:17:39.636149 2827 policy_none.go:49] "None policy: Start" Oct 9 07:17:39.638790 kubelet[2827]: I1009 07:17:39.638718 2827 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:17:39.638790 kubelet[2827]: I1009 07:17:39.638776 2827 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:17:39.656847 kubelet[2827]: I1009 07:17:39.656747 2827 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-194" Oct 9 07:17:39.764470 kubelet[2827]: E1009 07:17:39.657233 2827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.194:6443/api/v1/nodes\": dial tcp 172.31.23.194:6443: connect: connection refused" node="ip-172-31-23-194" Oct 9 07:17:39.764470 kubelet[2827]: E1009 07:17:39.702686 2827 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:17:39.764470 kubelet[2827]: E1009 07:17:39.756333 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-194?timeout=10s\": dial tcp 172.31.23.194:6443: connect: connection refused" interval="400ms" Oct 9 07:17:39.776277 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Oct 9 07:17:39.793752 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 07:17:39.799313 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 07:17:39.806671 kubelet[2827]: I1009 07:17:39.806637 2827 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:17:39.807013 kubelet[2827]: I1009 07:17:39.806997 2827 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:17:39.811671 kubelet[2827]: E1009 07:17:39.810776 2827 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-194\" not found" Oct 9 07:17:39.860267 kubelet[2827]: I1009 07:17:39.860232 2827 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-194" Oct 9 07:17:39.860662 kubelet[2827]: E1009 07:17:39.860640 2827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.194:6443/api/v1/nodes\": dial tcp 172.31.23.194:6443: connect: connection refused" node="ip-172-31-23-194" Oct 9 07:17:39.902884 kubelet[2827]: I1009 07:17:39.902842 2827 topology_manager.go:215] "Topology Admit Handler" podUID="91fa0255f64f73703d45b8a8376bd0bb" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-194" Oct 9 07:17:39.906678 kubelet[2827]: I1009 07:17:39.904504 2827 topology_manager.go:215] "Topology Admit Handler" podUID="1b9c641fcd54715d2be940b27cb749e7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:39.906775 kubelet[2827]: I1009 07:17:39.906686 2827 topology_manager.go:215] "Topology Admit Handler" podUID="a41402c85490604520859a91199318e7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-194" Oct 9 07:17:39.917043 systemd[1]: Created slice kubepods-burstable-pod91fa0255f64f73703d45b8a8376bd0bb.slice - libcontainer container kubepods-burstable-pod91fa0255f64f73703d45b8a8376bd0bb.slice. Oct 9 07:17:39.938711 systemd[1]: Created slice kubepods-burstable-pod1b9c641fcd54715d2be940b27cb749e7.slice - libcontainer container kubepods-burstable-pod1b9c641fcd54715d2be940b27cb749e7.slice. Oct 9 07:17:39.951230 systemd[1]: Created slice kubepods-burstable-poda41402c85490604520859a91199318e7.slice - libcontainer container kubepods-burstable-poda41402c85490604520859a91199318e7.slice. 
Oct 9 07:17:39.955648 kubelet[2827]: I1009 07:17:39.955616 2827 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:39.955811 kubelet[2827]: I1009 07:17:39.955662 2827 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a41402c85490604520859a91199318e7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-194\" (UID: \"a41402c85490604520859a91199318e7\") " pod="kube-system/kube-scheduler-ip-172-31-23-194" Oct 9 07:17:39.955811 kubelet[2827]: I1009 07:17:39.955694 2827 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91fa0255f64f73703d45b8a8376bd0bb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-194\" (UID: \"91fa0255f64f73703d45b8a8376bd0bb\") " pod="kube-system/kube-apiserver-ip-172-31-23-194" Oct 9 07:17:39.955811 kubelet[2827]: I1009 07:17:39.955722 2827 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:39.955811 kubelet[2827]: I1009 07:17:39.955752 2827 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:39.955811 kubelet[2827]: I1009 07:17:39.955782 2827 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:39.956014 kubelet[2827]: I1009 07:17:39.955812 2827 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:39.956014 kubelet[2827]: I1009 07:17:39.955842 2827 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91fa0255f64f73703d45b8a8376bd0bb-ca-certs\") pod \"kube-apiserver-ip-172-31-23-194\" (UID: \"91fa0255f64f73703d45b8a8376bd0bb\") " pod="kube-system/kube-apiserver-ip-172-31-23-194" Oct 9 07:17:39.956014 kubelet[2827]: I1009 07:17:39.955872 2827 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/91fa0255f64f73703d45b8a8376bd0bb-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-194\" (UID: \"91fa0255f64f73703d45b8a8376bd0bb\") " pod="kube-system/kube-apiserver-ip-172-31-23-194" Oct 9 07:17:40.157568 kubelet[2827]: E1009 07:17:40.157458 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-194?timeout=10s\": dial tcp 172.31.23.194:6443: connect: connection refused" interval="800ms" Oct 9 07:17:40.236831 containerd[1982]: time="2024-10-09T07:17:40.236782323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-194,Uid:91fa0255f64f73703d45b8a8376bd0bb,Namespace:kube-system,Attempt:0,}" Oct 9 07:17:40.249168 containerd[1982]: time="2024-10-09T07:17:40.249124748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-194,Uid:1b9c641fcd54715d2be940b27cb749e7,Namespace:kube-system,Attempt:0,}" Oct 9 07:17:40.255043 containerd[1982]: time="2024-10-09T07:17:40.254987762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-194,Uid:a41402c85490604520859a91199318e7,Namespace:kube-system,Attempt:0,}" Oct 9 07:17:40.267461 kubelet[2827]: I1009 07:17:40.266549 2827 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-194" Oct 9 07:17:40.270381 kubelet[2827]: E1009 07:17:40.267981 2827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.194:6443/api/v1/nodes\": dial tcp 172.31.23.194:6443: connect: connection refused" node="ip-172-31-23-194" Oct 9 07:17:40.767075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781135436.mount: Deactivated successfully. 
Oct 9 07:17:40.777767 containerd[1982]: time="2024-10-09T07:17:40.777711464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:40.779075 containerd[1982]: time="2024-10-09T07:17:40.778802571Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 07:17:40.780194 containerd[1982]: time="2024-10-09T07:17:40.780157587Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:40.781457 containerd[1982]: time="2024-10-09T07:17:40.781422461Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:40.782888 containerd[1982]: time="2024-10-09T07:17:40.782834953Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:17:40.784879 containerd[1982]: time="2024-10-09T07:17:40.784840946Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:40.786315 containerd[1982]: time="2024-10-09T07:17:40.786205735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:17:40.789375 containerd[1982]: time="2024-10-09T07:17:40.789265447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:40.790474 containerd[1982]: time="2024-10-09T07:17:40.790342594Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.242681ms" Oct 9 07:17:40.791724 containerd[1982]: time="2024-10-09T07:17:40.791686839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 542.45829ms" Oct 9 07:17:40.800509 containerd[1982]: time="2024-10-09T07:17:40.800461408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 563.548717ms" Oct 9 07:17:40.811907 kubelet[2827]: W1009 07:17:40.811817 2827 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:40.811907 kubelet[2827]: E1009 
07:17:40.811866 2827 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:40.958042 kubelet[2827]: E1009 07:17:40.958008 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-194?timeout=10s\": dial tcp 172.31.23.194:6443: connect: connection refused" interval="1.6s" Oct 9 07:17:40.979991 kubelet[2827]: W1009 07:17:40.979936 2827 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-194&limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:40.979991 kubelet[2827]: E1009 07:17:40.979996 2827 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-194&limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:41.066529 containerd[1982]: time="2024-10-09T07:17:41.064923665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:41.066529 containerd[1982]: time="2024-10-09T07:17:41.064996129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:41.066529 containerd[1982]: time="2024-10-09T07:17:41.065025067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:41.066529 containerd[1982]: time="2024-10-09T07:17:41.065046698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:41.074946 kubelet[2827]: I1009 07:17:41.073605 2827 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-194" Oct 9 07:17:41.074946 kubelet[2827]: E1009 07:17:41.074370 2827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.194:6443/api/v1/nodes\": dial tcp 172.31.23.194:6443: connect: connection refused" node="ip-172-31-23-194" Oct 9 07:17:41.080140 kubelet[2827]: W1009 07:17:41.079580 2827 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:41.080140 kubelet[2827]: E1009 07:17:41.079621 2827 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:41.082616 containerd[1982]: time="2024-10-09T07:17:41.081265652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:41.082616 containerd[1982]: time="2024-10-09T07:17:41.081323432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:41.082616 containerd[1982]: time="2024-10-09T07:17:41.081344520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:41.085552 containerd[1982]: time="2024-10-09T07:17:41.085428555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:41.090401 containerd[1982]: time="2024-10-09T07:17:41.089943376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:41.090401 containerd[1982]: time="2024-10-09T07:17:41.090105775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:41.090401 containerd[1982]: time="2024-10-09T07:17:41.090139413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:41.090401 containerd[1982]: time="2024-10-09T07:17:41.090162030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:41.126884 systemd[1]: Started cri-containerd-e38576039482affc6cc6d511a4effc15172294753bce703ca83bdd83639d0661.scope - libcontainer container e38576039482affc6cc6d511a4effc15172294753bce703ca83bdd83639d0661. Oct 9 07:17:41.138110 systemd[1]: Started cri-containerd-6538f6717180c7fd54bed7635f94e3f213f8cef482efcb4139e8e1da104cfef9.scope - libcontainer container 6538f6717180c7fd54bed7635f94e3f213f8cef482efcb4139e8e1da104cfef9. Oct 9 07:17:41.142506 systemd[1]: Started cri-containerd-b0145828e7566af3598fc244b9106d69fff78266b34c0842659ef2db008273b4.scope - libcontainer container b0145828e7566af3598fc244b9106d69fff78266b34c0842659ef2db008273b4. 
Oct 9 07:17:41.192524 kubelet[2827]: W1009 07:17:41.192486 2827 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:41.192854 kubelet[2827]: E1009 07:17:41.192807 2827 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:41.244482 containerd[1982]: time="2024-10-09T07:17:41.243794289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-194,Uid:91fa0255f64f73703d45b8a8376bd0bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e38576039482affc6cc6d511a4effc15172294753bce703ca83bdd83639d0661\"" Oct 9 07:17:41.261577 containerd[1982]: time="2024-10-09T07:17:41.260966221Z" level=info msg="CreateContainer within sandbox \"e38576039482affc6cc6d511a4effc15172294753bce703ca83bdd83639d0661\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:17:41.286850 containerd[1982]: time="2024-10-09T07:17:41.286638166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-194,Uid:a41402c85490604520859a91199318e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"6538f6717180c7fd54bed7635f94e3f213f8cef482efcb4139e8e1da104cfef9\"" Oct 9 07:17:41.293173 containerd[1982]: time="2024-10-09T07:17:41.292967876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-194,Uid:1b9c641fcd54715d2be940b27cb749e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0145828e7566af3598fc244b9106d69fff78266b34c0842659ef2db008273b4\"" Oct 9 07:17:41.297590 containerd[1982]: time="2024-10-09T07:17:41.297375159Z" level=info msg="CreateContainer within sandbox \"e38576039482affc6cc6d511a4effc15172294753bce703ca83bdd83639d0661\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0d938ae6882fd8f80deac0540aad65f028f89a0e7a0133e583775cff8de20c76\"" Oct 9 07:17:41.301007 containerd[1982]: time="2024-10-09T07:17:41.300540622Z" level=info msg="StartContainer for \"0d938ae6882fd8f80deac0540aad65f028f89a0e7a0133e583775cff8de20c76\"" Oct 9 07:17:41.306161 containerd[1982]: time="2024-10-09T07:17:41.305793669Z" level=info msg="CreateContainer within sandbox \"6538f6717180c7fd54bed7635f94e3f213f8cef482efcb4139e8e1da104cfef9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:17:41.306161 containerd[1982]: time="2024-10-09T07:17:41.306118810Z" level=info msg="CreateContainer within sandbox \"b0145828e7566af3598fc244b9106d69fff78266b34c0842659ef2db008273b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:17:41.343059 containerd[1982]: time="2024-10-09T07:17:41.342940887Z" level=info msg="CreateContainer within sandbox \"6538f6717180c7fd54bed7635f94e3f213f8cef482efcb4139e8e1da104cfef9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e\"" Oct 9 07:17:41.349182 containerd[1982]: time="2024-10-09T07:17:41.348791128Z" level=info msg="StartContainer for \"e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e\"" Oct 9 07:17:41.349394 containerd[1982]: 
time="2024-10-09T07:17:41.349332093Z" level=info msg="CreateContainer within sandbox \"b0145828e7566af3598fc244b9106d69fff78266b34c0842659ef2db008273b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e\"" Oct 9 07:17:41.349756 containerd[1982]: time="2024-10-09T07:17:41.349733217Z" level=info msg="StartContainer for \"fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e\"" Oct 9 07:17:41.351482 systemd[1]: Started cri-containerd-0d938ae6882fd8f80deac0540aad65f028f89a0e7a0133e583775cff8de20c76.scope - libcontainer container 0d938ae6882fd8f80deac0540aad65f028f89a0e7a0133e583775cff8de20c76. Oct 9 07:17:41.408099 systemd[1]: Started cri-containerd-e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e.scope - libcontainer container e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e. Oct 9 07:17:41.435584 systemd[1]: Started cri-containerd-fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e.scope - libcontainer container fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e. Oct 9 07:17:41.456040 containerd[1982]: time="2024-10-09T07:17:41.455993359Z" level=info msg="StartContainer for \"0d938ae6882fd8f80deac0540aad65f028f89a0e7a0133e583775cff8de20c76\" returns successfully" Oct 9 07:17:41.558587 containerd[1982]: time="2024-10-09T07:17:41.558489479Z" level=info msg="StartContainer for \"e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e\" returns successfully" Oct 9 07:17:41.571190 containerd[1982]: time="2024-10-09T07:17:41.571136138Z" level=info msg="StartContainer for \"fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e\" returns successfully" Oct 9 07:17:41.598693 kubelet[2827]: E1009 07:17:41.598569 2827 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.194:6443: connect: connection refused Oct 9 07:17:42.677940 kubelet[2827]: I1009 07:17:42.677907 2827 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-194" Oct 9 07:17:43.799626 update_engine[1965]: I1009 07:17:43.799475 1965 update_attempter.cc:509] Updating boot flags... 
Oct 9 07:17:43.937124 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3111) Oct 9 07:17:44.367398 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3102) Oct 9 07:17:45.378131 kubelet[2827]: E1009 07:17:45.378061 2827 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-194\" not found" node="ip-172-31-23-194" Oct 9 07:17:45.469516 kubelet[2827]: I1009 07:17:45.469317 2827 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-194" Oct 9 07:17:45.521373 kubelet[2827]: I1009 07:17:45.520306 2827 apiserver.go:52] "Watching apiserver" Oct 9 07:17:45.536954 kubelet[2827]: E1009 07:17:45.536204 2827 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-194\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-194" Oct 9 07:17:45.556936 kubelet[2827]: I1009 07:17:45.556899 2827 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:17:48.283345 systemd[1]: Reloading requested from client PID 3280 ('systemctl') (unit session-7.scope)... Oct 9 07:17:48.283810 systemd[1]: Reloading... Oct 9 07:17:48.407383 zram_generator::config[3318]: No configuration found. Oct 9 07:17:48.587310 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:17:48.700235 systemd[1]: Reloading finished in 416 ms. Oct 9 07:17:48.743754 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:48.762968 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:17:48.763272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:48.763333 systemd[1]: kubelet.service: Consumed 1.180s CPU time, 111.5M memory peak, 0B memory swap peak. Oct 9 07:17:48.772371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:49.034159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:49.044941 (kubelet)[3375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:17:49.144012 kubelet[3375]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:17:49.144012 kubelet[3375]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:17:49.144012 kubelet[3375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 07:17:49.144012 kubelet[3375]: I1009 07:17:49.143925 3375 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:17:49.153028 kubelet[3375]: I1009 07:17:49.152995 3375 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:17:49.153028 kubelet[3375]: I1009 07:17:49.153022 3375 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:17:49.153343 kubelet[3375]: I1009 07:17:49.153319 3375 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:17:49.156262 kubelet[3375]: I1009 07:17:49.156230 3375 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 07:17:49.167336 kubelet[3375]: I1009 07:17:49.166937 3375 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:17:49.178465 kubelet[3375]: I1009 07:17:49.178416 3375 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:17:49.178922 kubelet[3375]: I1009 07:17:49.178848 3375 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:17:49.179311 kubelet[3375]: I1009 07:17:49.179295 3375 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:17:49.179508 kubelet[3375]: I1009 07:17:49.179431 3375 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:17:49.179508 kubelet[3375]: I1009 07:17:49.179452 3375 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:17:49.179508 kubelet[3375]: I1009 07:17:49.179492 3375 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:17:49.179633 kubelet[3375]: I1009 07:17:49.179615 3375 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:17:49.179633 kubelet[3375]: I1009 07:17:49.179632 3375 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:17:49.179712 kubelet[3375]: I1009 07:17:49.179671 3375 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:17:49.179752 
kubelet[3375]: I1009 07:17:49.179723 3375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:17:49.186633 kubelet[3375]: I1009 07:17:49.183555 3375 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:17:49.186633 kubelet[3375]: I1009 07:17:49.183977 3375 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:17:49.203297 kubelet[3375]: I1009 07:17:49.203261 3375 server.go:1256] "Started kubelet" Oct 9 07:17:49.206176 kubelet[3375]: I1009 07:17:49.206143 3375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:17:49.245070 kubelet[3375]: I1009 07:17:49.241992 3375 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:17:49.245070 kubelet[3375]: I1009 07:17:49.244082 3375 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:17:49.250522 kubelet[3375]: I1009 07:17:49.247809 3375 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:17:49.250522 kubelet[3375]: I1009 07:17:49.248049 3375 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:17:49.254218 kubelet[3375]: I1009 07:17:49.254186 3375 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:17:49.256741 kubelet[3375]: I1009 07:17:49.256714 3375 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:17:49.256889 kubelet[3375]: I1009 07:17:49.256876 3375 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:17:49.267836 kubelet[3375]: I1009 07:17:49.267801 3375 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:17:49.268009 kubelet[3375]: I1009 07:17:49.267921 3375 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:17:49.270290 kubelet[3375]: I1009 07:17:49.270266 3375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:17:49.274041 kubelet[3375]: I1009 07:17:49.273873 3375 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 07:17:49.274682 kubelet[3375]: I1009 07:17:49.274664 3375 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:17:49.274822 kubelet[3375]: I1009 07:17:49.274796 3375 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:17:49.274963 kubelet[3375]: E1009 07:17:49.274953 3375 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:17:49.278006 kubelet[3375]: I1009 07:17:49.277979 3375 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:17:49.373150 kubelet[3375]: I1009 07:17:49.373031 3375 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-194" Oct 9 07:17:49.375695 kubelet[3375]: E1009 07:17:49.375668 3375 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:17:49.378943 kubelet[3375]: I1009 07:17:49.378819 3375 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:17:49.379171 kubelet[3375]: I1009 07:17:49.379159 3375 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:17:49.379256 kubelet[3375]: I1009 07:17:49.379249 3375 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:17:49.379609 kubelet[3375]: I1009 07:17:49.379589 3375 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:17:49.379743 kubelet[3375]: I1009 07:17:49.379733 3375 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:17:49.379832 kubelet[3375]: I1009 07:17:49.379824 3375 policy_none.go:49] "None policy: Start" Oct 9 07:17:49.380947 kubelet[3375]: I1009 07:17:49.380932 3375 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:17:49.381557 kubelet[3375]: I1009 07:17:49.381543 3375 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:17:49.382805 kubelet[3375]: I1009 07:17:49.382779 3375 state_mem.go:75] "Updated machine memory state" Oct 9 07:17:49.386573 kubelet[3375]: I1009 07:17:49.386545 3375 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-194" Oct 9 07:17:49.386684 kubelet[3375]: I1009 07:17:49.386647 3375 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-194" Oct 9 07:17:49.401162 kubelet[3375]: I1009 07:17:49.401137 3375 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:17:49.401686 kubelet[3375]: I1009 07:17:49.401664 3375 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:17:49.577457 kubelet[3375]: I1009 07:17:49.576363 3375 topology_manager.go:215] "Topology Admit Handler" podUID="91fa0255f64f73703d45b8a8376bd0bb" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-194" Oct 9 07:17:49.577457 kubelet[3375]: I1009 07:17:49.576493 3375 topology_manager.go:215] "Topology Admit Handler" podUID="1b9c641fcd54715d2be940b27cb749e7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:49.577457 kubelet[3375]: I1009 07:17:49.576546 3375 topology_manager.go:215] "Topology Admit Handler" podUID="a41402c85490604520859a91199318e7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-194" Oct 9 07:17:49.666771 kubelet[3375]: I1009 07:17:49.666590 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/91fa0255f64f73703d45b8a8376bd0bb-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-194\" (UID: \"91fa0255f64f73703d45b8a8376bd0bb\") " pod="kube-system/kube-apiserver-ip-172-31-23-194" Oct 9 07:17:49.666995 kubelet[3375]: I1009 07:17:49.666977 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91fa0255f64f73703d45b8a8376bd0bb-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-194\" (UID: \"91fa0255f64f73703d45b8a8376bd0bb\") " pod="kube-system/kube-apiserver-ip-172-31-23-194" Oct 9 07:17:49.667663 kubelet[3375]: I1009 07:17:49.667638 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:49.667862 kubelet[3375]: I1009 07:17:49.667851 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:49.668235 kubelet[3375]: I1009 07:17:49.667980 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91fa0255f64f73703d45b8a8376bd0bb-ca-certs\") pod \"kube-apiserver-ip-172-31-23-194\" (UID: \"91fa0255f64f73703d45b8a8376bd0bb\") " pod="kube-system/kube-apiserver-ip-172-31-23-194" Oct 9 07:17:49.668563 kubelet[3375]: I1009 07:17:49.668311 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a41402c85490604520859a91199318e7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-194\" (UID: \"a41402c85490604520859a91199318e7\") " pod="kube-system/kube-scheduler-ip-172-31-23-194" Oct 9 07:17:49.668563 kubelet[3375]: I1009 07:17:49.668373 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:49.668563 kubelet[3375]: I1009 07:17:49.668409 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:49.668563 kubelet[3375]: I1009 07:17:49.668446 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b9c641fcd54715d2be940b27cb749e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-194\" (UID: \"1b9c641fcd54715d2be940b27cb749e7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-194" Oct 9 07:17:50.192584 
kubelet[3375]: I1009 07:17:50.192518 3375 apiserver.go:52] "Watching apiserver" Oct 9 07:17:50.256997 kubelet[3375]: I1009 07:17:50.256958 3375 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:17:50.377299 kubelet[3375]: I1009 07:17:50.377151 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-194" podStartSLOduration=1.377101782 podStartE2EDuration="1.377101782s" podCreationTimestamp="2024-10-09 07:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:17:50.376985942 +0000 UTC m=+1.324072395" watchObservedRunningTime="2024-10-09 07:17:50.377101782 +0000 UTC m=+1.324188214" Oct 9 07:17:50.447401 kubelet[3375]: I1009 07:17:50.447153 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-194" podStartSLOduration=1.447106187 podStartE2EDuration="1.447106187s" podCreationTimestamp="2024-10-09 07:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:17:50.41896614 +0000 UTC m=+1.366052572" watchObservedRunningTime="2024-10-09 07:17:50.447106187 +0000 UTC m=+1.394192623" Oct 9 07:17:50.518675 kubelet[3375]: I1009 07:17:50.518428 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-194" podStartSLOduration=1.518374322 podStartE2EDuration="1.518374322s" podCreationTimestamp="2024-10-09 07:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:17:50.474324966 +0000 UTC m=+1.421411388" watchObservedRunningTime="2024-10-09 07:17:50.518374322 +0000 UTC m=+1.465460754" Oct 9 07:17:55.306306 sudo[2299]: pam_unix(sudo:session): session closed for user root Oct 9 07:17:55.335386 sshd[2296]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:55.344631 systemd-logind[1961]: Session 7 logged out. Waiting for processes to exit. Oct 9 07:17:55.345130 systemd[1]: sshd@6-172.31.23.194:22-139.178.89.65:51942.service: Deactivated successfully. Oct 9 07:17:55.350567 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 07:17:55.351188 systemd[1]: session-7.scope: Consumed 5.116s CPU time, 135.1M memory peak, 0B memory swap peak. Oct 9 07:17:55.360437 systemd-logind[1961]: Removed session 7. Oct 9 07:18:00.925414 kubelet[3375]: I1009 07:18:00.925384 3375 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 07:18:00.925994 containerd[1982]: time="2024-10-09T07:18:00.925787051Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 07:18:00.926319 kubelet[3375]: I1009 07:18:00.925991 3375 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 07:18:01.985881 kubelet[3375]: I1009 07:18:01.985823 3375 topology_manager.go:215] "Topology Admit Handler" podUID="8f03cf81-090c-45fd-a000-c27f9e074619" podNamespace="kube-system" podName="kube-proxy-qsc5z" Oct 9 07:18:02.094597 systemd[1]: Created slice kubepods-besteffort-pod8f03cf81_090c_45fd_a000_c27f9e074619.slice - libcontainer container kubepods-besteffort-pod8f03cf81_090c_45fd_a000_c27f9e074619.slice. 
Oct 9 07:18:02.117388 kubelet[3375]: I1009 07:18:02.117314 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f03cf81-090c-45fd-a000-c27f9e074619-kube-proxy\") pod \"kube-proxy-qsc5z\" (UID: \"8f03cf81-090c-45fd-a000-c27f9e074619\") " pod="kube-system/kube-proxy-qsc5z" Oct 9 07:18:02.118215 kubelet[3375]: I1009 07:18:02.117967 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f03cf81-090c-45fd-a000-c27f9e074619-lib-modules\") pod \"kube-proxy-qsc5z\" (UID: \"8f03cf81-090c-45fd-a000-c27f9e074619\") " pod="kube-system/kube-proxy-qsc5z" Oct 9 07:18:02.118215 kubelet[3375]: I1009 07:18:02.118059 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f03cf81-090c-45fd-a000-c27f9e074619-xtables-lock\") pod \"kube-proxy-qsc5z\" (UID: \"8f03cf81-090c-45fd-a000-c27f9e074619\") " pod="kube-system/kube-proxy-qsc5z" Oct 9 07:18:02.118215 kubelet[3375]: I1009 07:18:02.118107 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kncw5\" (UniqueName: \"kubernetes.io/projected/8f03cf81-090c-45fd-a000-c27f9e074619-kube-api-access-kncw5\") pod \"kube-proxy-qsc5z\" (UID: \"8f03cf81-090c-45fd-a000-c27f9e074619\") " pod="kube-system/kube-proxy-qsc5z" Oct 9 07:18:02.300043 kubelet[3375]: I1009 07:18:02.299891 3375 topology_manager.go:215] "Topology Admit Handler" podUID="0333edb8-9ac5-4522-8219-1dc564d44639" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-ghf89" Oct 9 07:18:02.311894 systemd[1]: Created slice kubepods-besteffort-pod0333edb8_9ac5_4522_8219_1dc564d44639.slice - libcontainer container kubepods-besteffort-pod0333edb8_9ac5_4522_8219_1dc564d44639.slice. Oct 9 07:18:02.429545 kubelet[3375]: I1009 07:18:02.429487 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0333edb8-9ac5-4522-8219-1dc564d44639-var-lib-calico\") pod \"tigera-operator-5d56685c77-ghf89\" (UID: \"0333edb8-9ac5-4522-8219-1dc564d44639\") " pod="tigera-operator/tigera-operator-5d56685c77-ghf89" Oct 9 07:18:02.429545 kubelet[3375]: I1009 07:18:02.429554 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scpb8\" (UniqueName: \"kubernetes.io/projected/0333edb8-9ac5-4522-8219-1dc564d44639-kube-api-access-scpb8\") pod \"tigera-operator-5d56685c77-ghf89\" (UID: \"0333edb8-9ac5-4522-8219-1dc564d44639\") " pod="tigera-operator/tigera-operator-5d56685c77-ghf89" Oct 9 07:18:02.464336 containerd[1982]: time="2024-10-09T07:18:02.464288802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qsc5z,Uid:8f03cf81-090c-45fd-a000-c27f9e074619,Namespace:kube-system,Attempt:0,}" Oct 9 07:18:02.569382 containerd[1982]: time="2024-10-09T07:18:02.567732420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:02.569382 containerd[1982]: time="2024-10-09T07:18:02.567855031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:02.569382 containerd[1982]: time="2024-10-09T07:18:02.567892758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:02.570069 containerd[1982]: time="2024-10-09T07:18:02.567915271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:02.713000 systemd[1]: Started cri-containerd-ae756a66248445a7cb569f11627d552a42950eadfae50d880af9e486139be499.scope - libcontainer container ae756a66248445a7cb569f11627d552a42950eadfae50d880af9e486139be499. Oct 9 07:18:02.804438 containerd[1982]: time="2024-10-09T07:18:02.798480946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qsc5z,Uid:8f03cf81-090c-45fd-a000-c27f9e074619,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae756a66248445a7cb569f11627d552a42950eadfae50d880af9e486139be499\"" Oct 9 07:18:02.815765 containerd[1982]: time="2024-10-09T07:18:02.815719078Z" level=info msg="CreateContainer within sandbox \"ae756a66248445a7cb569f11627d552a42950eadfae50d880af9e486139be499\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:18:02.862116 containerd[1982]: time="2024-10-09T07:18:02.862008958Z" level=info msg="CreateContainer within sandbox \"ae756a66248445a7cb569f11627d552a42950eadfae50d880af9e486139be499\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bbd7545136c7f15725634dfbb4350540f74aec11690bdb74f9e01aa17b4885f3\"" Oct 9 07:18:02.863653 containerd[1982]: time="2024-10-09T07:18:02.863587326Z" level=info msg="StartContainer for \"bbd7545136c7f15725634dfbb4350540f74aec11690bdb74f9e01aa17b4885f3\"" Oct 9 07:18:02.923683 containerd[1982]: time="2024-10-09T07:18:02.921378989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-ghf89,Uid:0333edb8-9ac5-4522-8219-1dc564d44639,Namespace:tigera-operator,Attempt:0,}" Oct 9 07:18:02.943746 systemd[1]: Started cri-containerd-bbd7545136c7f15725634dfbb4350540f74aec11690bdb74f9e01aa17b4885f3.scope - libcontainer container bbd7545136c7f15725634dfbb4350540f74aec11690bdb74f9e01aa17b4885f3. Oct 9 07:18:03.042268 containerd[1982]: time="2024-10-09T07:18:03.036431776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:03.042268 containerd[1982]: time="2024-10-09T07:18:03.036514446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:03.042268 containerd[1982]: time="2024-10-09T07:18:03.036546327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:03.042268 containerd[1982]: time="2024-10-09T07:18:03.036565392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:03.136298 containerd[1982]: time="2024-10-09T07:18:03.136036267Z" level=info msg="StartContainer for \"bbd7545136c7f15725634dfbb4350540f74aec11690bdb74f9e01aa17b4885f3\" returns successfully" Oct 9 07:18:03.174677 systemd[1]: Started cri-containerd-c4d36eecc69f37770f52f5c1ff6e0d7da6ea57ff6b7d3a63cbd83750d9834593.scope - libcontainer container c4d36eecc69f37770f52f5c1ff6e0d7da6ea57ff6b7d3a63cbd83750d9834593. 
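The systemd units created around these sandboxes follow the naming scheme visible in the log: each BestEffort pod gets a kubepods-besteffort-pod<UID>.slice with the dashes in the pod UID replaced by underscores, and each CRI sandbox or container runs in a transient cri-containerd-<container-id>.scope. A small Go sketch of that mapping, using the kube-proxy pod UID and sandbox id from the entries above (illustrative helper names, not kubelet or containerd code):

package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the slice name the kubelet used for the BestEffort
// kube-proxy pod above: dashes in the pod UID become underscores.
func podSliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

// sandboxScopeName builds the transient scope containerd started for the
// kube-proxy sandbox.
func sandboxScopeName(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	fmt.Println(podSliceName("8f03cf81-090c-45fd-a000-c27f9e074619"))
	// kubepods-besteffort-pod8f03cf81_090c_45fd_a000_c27f9e074619.slice
	fmt.Println(sandboxScopeName("ae756a66248445a7cb569f11627d552a42950eadfae50d880af9e486139be499"))
	// cri-containerd-ae756a66248445a7cb569f11627d552a42950eadfae50d880af9e486139be499.scope
}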
Oct 9 07:18:03.301490 containerd[1982]: time="2024-10-09T07:18:03.301270598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-ghf89,Uid:0333edb8-9ac5-4522-8219-1dc564d44639,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c4d36eecc69f37770f52f5c1ff6e0d7da6ea57ff6b7d3a63cbd83750d9834593\"" Oct 9 07:18:03.304581 containerd[1982]: time="2024-10-09T07:18:03.303622421Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 07:18:05.267993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145607766.mount: Deactivated successfully. Oct 9 07:18:06.212775 containerd[1982]: time="2024-10-09T07:18:06.212727611Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:06.214210 containerd[1982]: time="2024-10-09T07:18:06.214062940Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136541" Oct 9 07:18:06.216643 containerd[1982]: time="2024-10-09T07:18:06.215531172Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:06.218761 containerd[1982]: time="2024-10-09T07:18:06.218724778Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:06.219904 containerd[1982]: time="2024-10-09T07:18:06.219862699Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.91619521s" Oct 9 07:18:06.220004 containerd[1982]: time="2024-10-09T07:18:06.219910375Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:18:06.223090 containerd[1982]: time="2024-10-09T07:18:06.223055251Z" level=info msg="CreateContainer within sandbox \"c4d36eecc69f37770f52f5c1ff6e0d7da6ea57ff6b7d3a63cbd83750d9834593\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:18:06.251507 containerd[1982]: time="2024-10-09T07:18:06.251448381Z" level=info msg="CreateContainer within sandbox \"c4d36eecc69f37770f52f5c1ff6e0d7da6ea57ff6b7d3a63cbd83750d9834593\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53\"" Oct 9 07:18:06.252373 containerd[1982]: time="2024-10-09T07:18:06.252240908Z" level=info msg="StartContainer for \"b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53\"" Oct 9 07:18:06.292578 systemd[1]: Started cri-containerd-b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53.scope - libcontainer container b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53. 
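The tigera-operator pull reported just above can be cross-checked from the containerd timestamps alone: the PullImage request was logged at 07:18:03.303622421Z and the Pulled result at 07:18:06.219862699Z, about 2.916 s apart, consistent with the reported "in 2.91619521s"; at 22136541 bytes read that works out to roughly 7.6 MB/s. A short Go sketch of that arithmetic on values copied from the log (illustration only):

package main

import (
	"fmt"
	"time"
)

func main() {
	// containerd logs RFC 3339 timestamps with nanoseconds.
	start, err := time.Parse(time.RFC3339Nano, "2024-10-09T07:18:03.303622421Z") // PullImage request
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2024-10-09T07:18:06.219862699Z") // Pulled result
	if err != nil {
		panic(err)
	}

	elapsed := done.Sub(start) // ~2.916s, matching the reported 2.91619521s pull time
	const bytesRead = 22136541 // "bytes read" while pulling quay.io/tigera/operator:v1.34.3

	fmt.Println(elapsed)
	fmt.Printf("~%.2f MB/s\n", float64(bytesRead)/elapsed.Seconds()/1e6)
}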
Oct 9 07:18:06.336635 containerd[1982]: time="2024-10-09T07:18:06.336583417Z" level=info msg="StartContainer for \"b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53\" returns successfully" Oct 9 07:18:06.449344 kubelet[3375]: I1009 07:18:06.448542 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qsc5z" podStartSLOduration=5.448485358 podStartE2EDuration="5.448485358s" podCreationTimestamp="2024-10-09 07:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:18:03.453742588 +0000 UTC m=+14.400829019" watchObservedRunningTime="2024-10-09 07:18:06.448485358 +0000 UTC m=+17.395571788" Oct 9 07:18:09.323474 kubelet[3375]: I1009 07:18:09.323436 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-ghf89" podStartSLOduration=4.405973472 podStartE2EDuration="7.323382579s" podCreationTimestamp="2024-10-09 07:18:02 +0000 UTC" firstStartedPulling="2024-10-09 07:18:03.302855582 +0000 UTC m=+14.249942003" lastFinishedPulling="2024-10-09 07:18:06.220264697 +0000 UTC m=+17.167351110" observedRunningTime="2024-10-09 07:18:06.444116765 +0000 UTC m=+17.391203197" watchObservedRunningTime="2024-10-09 07:18:09.323382579 +0000 UTC m=+20.270469007" Oct 9 07:18:09.843443 kubelet[3375]: I1009 07:18:09.843202 3375 topology_manager.go:215] "Topology Admit Handler" podUID="4c5e077e-ecd9-4941-a2ee-0359a3087b05" podNamespace="calico-system" podName="calico-typha-b6b797f6f-jm56v" Oct 9 07:18:09.911854 systemd[1]: Created slice kubepods-besteffort-pod4c5e077e_ecd9_4941_a2ee_0359a3087b05.slice - libcontainer container kubepods-besteffort-pod4c5e077e_ecd9_4941_a2ee_0359a3087b05.slice. Oct 9 07:18:09.928674 kubelet[3375]: I1009 07:18:09.928633 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c5e077e-ecd9-4941-a2ee-0359a3087b05-tigera-ca-bundle\") pod \"calico-typha-b6b797f6f-jm56v\" (UID: \"4c5e077e-ecd9-4941-a2ee-0359a3087b05\") " pod="calico-system/calico-typha-b6b797f6f-jm56v" Oct 9 07:18:09.935296 kubelet[3375]: I1009 07:18:09.935240 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4c5e077e-ecd9-4941-a2ee-0359a3087b05-typha-certs\") pod \"calico-typha-b6b797f6f-jm56v\" (UID: \"4c5e077e-ecd9-4941-a2ee-0359a3087b05\") " pod="calico-system/calico-typha-b6b797f6f-jm56v" Oct 9 07:18:09.935496 kubelet[3375]: I1009 07:18:09.935325 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62lph\" (UniqueName: \"kubernetes.io/projected/4c5e077e-ecd9-4941-a2ee-0359a3087b05-kube-api-access-62lph\") pod \"calico-typha-b6b797f6f-jm56v\" (UID: \"4c5e077e-ecd9-4941-a2ee-0359a3087b05\") " pod="calico-system/calico-typha-b6b797f6f-jm56v" Oct 9 07:18:10.007893 kubelet[3375]: I1009 07:18:10.007850 3375 topology_manager.go:215] "Topology Admit Handler" podUID="acafa9bc-8bfa-4cbd-a543-fe1dabc32950" podNamespace="calico-system" podName="calico-node-58n7j" Oct 9 07:18:10.026704 systemd[1]: Created slice kubepods-besteffort-podacafa9bc_8bfa_4cbd_a543_fe1dabc32950.slice - libcontainer container kubepods-besteffort-podacafa9bc_8bfa_4cbd_a543_fe1dabc32950.slice. 
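The calico-node pod admitted just above mounts a flexvol-driver-host host path (listed below), and until the nodeagent~uds FlexVolume driver is actually installed under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, the kubelet's plugin prober keeps failing on it; that is the long run of driver-call.go and plugins.go errors that follows. The prober executes the driver binary with the argument init and expects a JSON status object on stdout, so the empty output from a missing executable decodes to exactly the logged "unexpected end of JSON input". A minimal Go sketch of that decode step, using a simplified stand-in struct rather than the kubelet's own types:

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is a simplified stand-in for the JSON object a FlexVolume
// driver prints in response to "init" (hypothetical struct for illustration).
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// What happened here: the driver binary is missing, so its "output" is empty.
	var got driverStatus
	err := json.Unmarshal([]byte(""), &got)
	fmt.Println(err) // unexpected end of JSON input

	// What a working driver would print for "init", e.g. declaring that it
	// does not implement attach/detach:
	ok := driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	out, _ := json.Marshal(ok)
	fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
}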
Oct 9 07:18:10.144296 kubelet[3375]: I1009 07:18:10.139192 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-policysync\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.144296 kubelet[3375]: I1009 07:18:10.139243 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-var-run-calico\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.144296 kubelet[3375]: I1009 07:18:10.139281 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-tigera-ca-bundle\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.144296 kubelet[3375]: I1009 07:18:10.139311 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-cni-net-dir\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.144296 kubelet[3375]: I1009 07:18:10.139339 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-cni-log-dir\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.144624 kubelet[3375]: I1009 07:18:10.139384 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-node-certs\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.144624 kubelet[3375]: I1009 07:18:10.139414 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-var-lib-calico\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.144624 kubelet[3375]: I1009 07:18:10.139447 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx4pt\" (UniqueName: \"kubernetes.io/projected/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-kube-api-access-bx4pt\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.144624 kubelet[3375]: I1009 07:18:10.139474 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-xtables-lock\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.144624 kubelet[3375]: I1009 07:18:10.144091 3375 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-cni-bin-dir\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.145601 kubelet[3375]: I1009 07:18:10.145562 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-lib-modules\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.145711 kubelet[3375]: I1009 07:18:10.145609 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/acafa9bc-8bfa-4cbd-a543-fe1dabc32950-flexvol-driver-host\") pod \"calico-node-58n7j\" (UID: \"acafa9bc-8bfa-4cbd-a543-fe1dabc32950\") " pod="calico-system/calico-node-58n7j" Oct 9 07:18:10.172828 kubelet[3375]: I1009 07:18:10.172775 3375 topology_manager.go:215] "Topology Admit Handler" podUID="7a7754c0-14d0-4084-9006-948f71afe7d1" podNamespace="calico-system" podName="csi-node-driver-rxzfq" Oct 9 07:18:10.183758 kubelet[3375]: E1009 07:18:10.183024 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxzfq" podUID="7a7754c0-14d0-4084-9006-948f71afe7d1" Oct 9 07:18:10.253988 containerd[1982]: time="2024-10-09T07:18:10.253921304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b6b797f6f-jm56v,Uid:4c5e077e-ecd9-4941-a2ee-0359a3087b05,Namespace:calico-system,Attempt:0,}" Oct 9 07:18:10.260146 kubelet[3375]: E1009 07:18:10.260077 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.260830 kubelet[3375]: W1009 07:18:10.260116 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.265514 kubelet[3375]: E1009 07:18:10.265473 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.268768 kubelet[3375]: E1009 07:18:10.268430 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.268768 kubelet[3375]: W1009 07:18:10.268452 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.268768 kubelet[3375]: E1009 07:18:10.268482 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.271805 kubelet[3375]: E1009 07:18:10.271705 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.271805 kubelet[3375]: W1009 07:18:10.271744 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.272192 kubelet[3375]: E1009 07:18:10.271782 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.273770 kubelet[3375]: E1009 07:18:10.273728 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.273770 kubelet[3375]: W1009 07:18:10.273745 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.275594 kubelet[3375]: E1009 07:18:10.273775 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.275594 kubelet[3375]: E1009 07:18:10.274084 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.275594 kubelet[3375]: W1009 07:18:10.274095 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.275594 kubelet[3375]: E1009 07:18:10.274128 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.275594 kubelet[3375]: E1009 07:18:10.274680 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.275594 kubelet[3375]: W1009 07:18:10.274691 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.275594 kubelet[3375]: E1009 07:18:10.274777 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.275594 kubelet[3375]: E1009 07:18:10.274950 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.275594 kubelet[3375]: W1009 07:18:10.274959 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.275594 kubelet[3375]: E1009 07:18:10.275108 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.277436 kubelet[3375]: E1009 07:18:10.275254 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.277436 kubelet[3375]: W1009 07:18:10.275264 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.277436 kubelet[3375]: E1009 07:18:10.275392 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.277436 kubelet[3375]: E1009 07:18:10.275628 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.277436 kubelet[3375]: W1009 07:18:10.275638 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.277436 kubelet[3375]: E1009 07:18:10.275659 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.277436 kubelet[3375]: E1009 07:18:10.275897 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.277436 kubelet[3375]: W1009 07:18:10.275907 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.277436 kubelet[3375]: E1009 07:18:10.275933 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.277436 kubelet[3375]: E1009 07:18:10.276297 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.279484 kubelet[3375]: W1009 07:18:10.276307 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.279484 kubelet[3375]: E1009 07:18:10.276330 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.279484 kubelet[3375]: E1009 07:18:10.276652 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.279484 kubelet[3375]: W1009 07:18:10.276666 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.279484 kubelet[3375]: E1009 07:18:10.276803 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.279484 kubelet[3375]: E1009 07:18:10.277523 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.279484 kubelet[3375]: W1009 07:18:10.277534 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.279484 kubelet[3375]: E1009 07:18:10.277556 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.279484 kubelet[3375]: E1009 07:18:10.277786 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.279484 kubelet[3375]: W1009 07:18:10.277796 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.280345 kubelet[3375]: E1009 07:18:10.277824 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.280345 kubelet[3375]: E1009 07:18:10.278140 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.280345 kubelet[3375]: W1009 07:18:10.278151 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.280345 kubelet[3375]: E1009 07:18:10.278241 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.280345 kubelet[3375]: E1009 07:18:10.278757 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.280345 kubelet[3375]: W1009 07:18:10.278767 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.280345 kubelet[3375]: E1009 07:18:10.278787 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.280345 kubelet[3375]: E1009 07:18:10.279199 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.280345 kubelet[3375]: W1009 07:18:10.279211 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.280345 kubelet[3375]: E1009 07:18:10.279228 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.281587 kubelet[3375]: E1009 07:18:10.279553 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.281587 kubelet[3375]: W1009 07:18:10.279564 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.281587 kubelet[3375]: E1009 07:18:10.279581 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.281587 kubelet[3375]: E1009 07:18:10.280236 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.281587 kubelet[3375]: W1009 07:18:10.280262 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.281587 kubelet[3375]: E1009 07:18:10.280280 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.281587 kubelet[3375]: E1009 07:18:10.280588 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.281587 kubelet[3375]: W1009 07:18:10.280598 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.281587 kubelet[3375]: E1009 07:18:10.280616 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.281587 kubelet[3375]: E1009 07:18:10.280811 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.291551 kubelet[3375]: W1009 07:18:10.280820 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.291551 kubelet[3375]: E1009 07:18:10.280834 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.291551 kubelet[3375]: E1009 07:18:10.282410 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.291551 kubelet[3375]: W1009 07:18:10.282422 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.291551 kubelet[3375]: E1009 07:18:10.282440 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.291551 kubelet[3375]: E1009 07:18:10.282673 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.291551 kubelet[3375]: W1009 07:18:10.282683 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.291551 kubelet[3375]: E1009 07:18:10.282698 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.291551 kubelet[3375]: E1009 07:18:10.282967 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.291551 kubelet[3375]: W1009 07:18:10.282979 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.292068 kubelet[3375]: E1009 07:18:10.282995 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.292068 kubelet[3375]: E1009 07:18:10.283199 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.292068 kubelet[3375]: W1009 07:18:10.283207 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.292068 kubelet[3375]: E1009 07:18:10.283220 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.292068 kubelet[3375]: E1009 07:18:10.283503 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.292068 kubelet[3375]: W1009 07:18:10.283513 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.292068 kubelet[3375]: E1009 07:18:10.283529 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.292068 kubelet[3375]: E1009 07:18:10.283710 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.292068 kubelet[3375]: W1009 07:18:10.283718 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.292068 kubelet[3375]: E1009 07:18:10.283733 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.292506 kubelet[3375]: E1009 07:18:10.283909 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.292506 kubelet[3375]: W1009 07:18:10.283917 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.292506 kubelet[3375]: E1009 07:18:10.283931 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.292506 kubelet[3375]: E1009 07:18:10.284511 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.292506 kubelet[3375]: W1009 07:18:10.284522 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.292506 kubelet[3375]: E1009 07:18:10.284542 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.327392 containerd[1982]: time="2024-10-09T07:18:10.322000995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:10.327392 containerd[1982]: time="2024-10-09T07:18:10.326092527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:10.327687 kubelet[3375]: E1009 07:18:10.326327 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.327687 kubelet[3375]: W1009 07:18:10.326405 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.327687 kubelet[3375]: E1009 07:18:10.326506 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.328241 containerd[1982]: time="2024-10-09T07:18:10.327659650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:10.328241 containerd[1982]: time="2024-10-09T07:18:10.327731990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:10.337824 containerd[1982]: time="2024-10-09T07:18:10.333107652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-58n7j,Uid:acafa9bc-8bfa-4cbd-a543-fe1dabc32950,Namespace:calico-system,Attempt:0,}" Oct 9 07:18:10.347706 kubelet[3375]: E1009 07:18:10.347672 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.347706 kubelet[3375]: W1009 07:18:10.347697 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.351265 kubelet[3375]: E1009 07:18:10.347727 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.351265 kubelet[3375]: I1009 07:18:10.348268 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7a7754c0-14d0-4084-9006-948f71afe7d1-varrun\") pod \"csi-node-driver-rxzfq\" (UID: \"7a7754c0-14d0-4084-9006-948f71afe7d1\") " pod="calico-system/csi-node-driver-rxzfq" Oct 9 07:18:10.351265 kubelet[3375]: E1009 07:18:10.349191 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.351265 kubelet[3375]: W1009 07:18:10.349204 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.351265 kubelet[3375]: E1009 07:18:10.349265 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.351265 kubelet[3375]: E1009 07:18:10.350152 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.351265 kubelet[3375]: W1009 07:18:10.350163 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.351265 kubelet[3375]: E1009 07:18:10.350179 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.351709 kubelet[3375]: I1009 07:18:10.350455 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdzsf\" (UniqueName: \"kubernetes.io/projected/7a7754c0-14d0-4084-9006-948f71afe7d1-kube-api-access-pdzsf\") pod \"csi-node-driver-rxzfq\" (UID: \"7a7754c0-14d0-4084-9006-948f71afe7d1\") " pod="calico-system/csi-node-driver-rxzfq" Oct 9 07:18:10.351709 kubelet[3375]: E1009 07:18:10.350892 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.351709 kubelet[3375]: W1009 07:18:10.350902 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.351709 kubelet[3375]: E1009 07:18:10.350922 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.352570 kubelet[3375]: E1009 07:18:10.352432 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.352570 kubelet[3375]: W1009 07:18:10.352449 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.352570 kubelet[3375]: E1009 07:18:10.352477 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.353102 kubelet[3375]: E1009 07:18:10.353070 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.353673 kubelet[3375]: W1009 07:18:10.353403 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.353673 kubelet[3375]: E1009 07:18:10.353534 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.356539 kubelet[3375]: E1009 07:18:10.356506 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.356539 kubelet[3375]: W1009 07:18:10.356534 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.356793 kubelet[3375]: E1009 07:18:10.356559 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.356793 kubelet[3375]: I1009 07:18:10.356606 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7a7754c0-14d0-4084-9006-948f71afe7d1-socket-dir\") pod \"csi-node-driver-rxzfq\" (UID: \"7a7754c0-14d0-4084-9006-948f71afe7d1\") " pod="calico-system/csi-node-driver-rxzfq" Oct 9 07:18:10.358789 kubelet[3375]: E1009 07:18:10.358568 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.358789 kubelet[3375]: W1009 07:18:10.358591 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.358789 kubelet[3375]: E1009 07:18:10.358625 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.358789 kubelet[3375]: I1009 07:18:10.358663 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7a7754c0-14d0-4084-9006-948f71afe7d1-registration-dir\") pod \"csi-node-driver-rxzfq\" (UID: \"7a7754c0-14d0-4084-9006-948f71afe7d1\") " pod="calico-system/csi-node-driver-rxzfq" Oct 9 07:18:10.360466 kubelet[3375]: E1009 07:18:10.360434 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.360581 kubelet[3375]: W1009 07:18:10.360466 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.360852 kubelet[3375]: E1009 07:18:10.360828 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.360919 kubelet[3375]: I1009 07:18:10.360887 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a7754c0-14d0-4084-9006-948f71afe7d1-kubelet-dir\") pod \"csi-node-driver-rxzfq\" (UID: \"7a7754c0-14d0-4084-9006-948f71afe7d1\") " pod="calico-system/csi-node-driver-rxzfq" Oct 9 07:18:10.362971 kubelet[3375]: E1009 07:18:10.361826 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.362971 kubelet[3375]: W1009 07:18:10.361844 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.362971 kubelet[3375]: E1009 07:18:10.362577 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.364421 kubelet[3375]: E1009 07:18:10.363422 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.364421 kubelet[3375]: W1009 07:18:10.363438 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.364421 kubelet[3375]: E1009 07:18:10.364088 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.366120 kubelet[3375]: E1009 07:18:10.365807 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.366120 kubelet[3375]: W1009 07:18:10.365823 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.366120 kubelet[3375]: E1009 07:18:10.366112 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.367459 kubelet[3375]: E1009 07:18:10.366408 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.367459 kubelet[3375]: W1009 07:18:10.366420 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.367459 kubelet[3375]: E1009 07:18:10.366439 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.367459 kubelet[3375]: E1009 07:18:10.366922 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.367459 kubelet[3375]: W1009 07:18:10.366933 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.367459 kubelet[3375]: E1009 07:18:10.366953 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.368770 kubelet[3375]: E1009 07:18:10.368501 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.368770 kubelet[3375]: W1009 07:18:10.368520 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.368770 kubelet[3375]: E1009 07:18:10.368537 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.382201 systemd[1]: Started cri-containerd-b3f6b1991ba76fd92bca1fdaadc71d24044e9d8dbe44243e6af6cffc390b0f72.scope - libcontainer container b3f6b1991ba76fd92bca1fdaadc71d24044e9d8dbe44243e6af6cffc390b0f72. Oct 9 07:18:10.427569 containerd[1982]: time="2024-10-09T07:18:10.427316868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:10.427569 containerd[1982]: time="2024-10-09T07:18:10.427438812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:10.438728 containerd[1982]: time="2024-10-09T07:18:10.427480908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:10.438728 containerd[1982]: time="2024-10-09T07:18:10.427502238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:10.470607 systemd[1]: Started cri-containerd-a4ee10ee9523bf6c70ec657c374b4a003bcf8e4fac9ee90ba739b59b787566d8.scope - libcontainer container a4ee10ee9523bf6c70ec657c374b4a003bcf8e4fac9ee90ba739b59b787566d8. Oct 9 07:18:10.473664 kubelet[3375]: E1009 07:18:10.473638 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.473979 kubelet[3375]: W1009 07:18:10.473854 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.474230 kubelet[3375]: E1009 07:18:10.474202 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.475131 kubelet[3375]: E1009 07:18:10.475097 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.475131 kubelet[3375]: W1009 07:18:10.475112 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.475265 kubelet[3375]: E1009 07:18:10.475150 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.476515 kubelet[3375]: E1009 07:18:10.476497 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.476515 kubelet[3375]: W1009 07:18:10.476514 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.476829 kubelet[3375]: E1009 07:18:10.476555 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.476979 kubelet[3375]: E1009 07:18:10.476870 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.476979 kubelet[3375]: W1009 07:18:10.476880 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.476979 kubelet[3375]: E1009 07:18:10.476975 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.477738 kubelet[3375]: E1009 07:18:10.477412 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.477738 kubelet[3375]: W1009 07:18:10.477426 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.477738 kubelet[3375]: E1009 07:18:10.477483 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.477738 kubelet[3375]: E1009 07:18:10.477730 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.477738 kubelet[3375]: W1009 07:18:10.477740 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.478330 kubelet[3375]: E1009 07:18:10.478097 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.478551 kubelet[3375]: E1009 07:18:10.478521 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.478551 kubelet[3375]: W1009 07:18:10.478532 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.478686 kubelet[3375]: E1009 07:18:10.478556 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.479455 kubelet[3375]: E1009 07:18:10.479440 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.479455 kubelet[3375]: W1009 07:18:10.479454 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.479573 kubelet[3375]: E1009 07:18:10.479472 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.479969 kubelet[3375]: E1009 07:18:10.479958 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.479969 kubelet[3375]: W1009 07:18:10.479968 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.480097 kubelet[3375]: E1009 07:18:10.479984 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.482571 kubelet[3375]: E1009 07:18:10.482539 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.482571 kubelet[3375]: W1009 07:18:10.482559 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.482748 kubelet[3375]: E1009 07:18:10.482578 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.485816 kubelet[3375]: E1009 07:18:10.484626 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.485816 kubelet[3375]: W1009 07:18:10.484644 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.485816 kubelet[3375]: E1009 07:18:10.484665 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.487481 kubelet[3375]: E1009 07:18:10.487418 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.487481 kubelet[3375]: W1009 07:18:10.487435 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.487481 kubelet[3375]: E1009 07:18:10.487458 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.489834 kubelet[3375]: E1009 07:18:10.489540 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.489834 kubelet[3375]: W1009 07:18:10.489559 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.489834 kubelet[3375]: E1009 07:18:10.489580 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.492962 kubelet[3375]: E1009 07:18:10.492610 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.492962 kubelet[3375]: W1009 07:18:10.492630 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.492962 kubelet[3375]: E1009 07:18:10.492658 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.493401 kubelet[3375]: E1009 07:18:10.493302 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.493401 kubelet[3375]: W1009 07:18:10.493316 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.493401 kubelet[3375]: E1009 07:18:10.493336 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.494540 kubelet[3375]: E1009 07:18:10.494501 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.494540 kubelet[3375]: W1009 07:18:10.494514 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.494540 kubelet[3375]: E1009 07:18:10.494538 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.495208 kubelet[3375]: E1009 07:18:10.495136 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.495208 kubelet[3375]: W1009 07:18:10.495149 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.496051 kubelet[3375]: E1009 07:18:10.495830 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.496051 kubelet[3375]: W1009 07:18:10.495844 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.496223 kubelet[3375]: E1009 07:18:10.496148 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.496223 kubelet[3375]: W1009 07:18:10.496160 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.499426 kubelet[3375]: E1009 07:18:10.498446 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.499426 kubelet[3375]: W1009 07:18:10.498469 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.499426 kubelet[3375]: E1009 07:18:10.498493 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.499426 kubelet[3375]: E1009 07:18:10.498921 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.499426 kubelet[3375]: W1009 07:18:10.498933 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.499426 kubelet[3375]: E1009 07:18:10.498952 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.499981 kubelet[3375]: E1009 07:18:10.499853 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.499981 kubelet[3375]: W1009 07:18:10.499865 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.499981 kubelet[3375]: E1009 07:18:10.499884 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.499981 kubelet[3375]: E1009 07:18:10.499922 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.499981 kubelet[3375]: E1009 07:18:10.499941 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.500958 kubelet[3375]: E1009 07:18:10.500791 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.501483 kubelet[3375]: E1009 07:18:10.501442 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.501483 kubelet[3375]: W1009 07:18:10.501456 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.502033 kubelet[3375]: E1009 07:18:10.501695 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.503193 kubelet[3375]: E1009 07:18:10.503020 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.503193 kubelet[3375]: W1009 07:18:10.503034 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.503193 kubelet[3375]: E1009 07:18:10.503053 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.506017 kubelet[3375]: E1009 07:18:10.505994 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.506017 kubelet[3375]: W1009 07:18:10.506016 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.506183 kubelet[3375]: E1009 07:18:10.506039 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.553007 kubelet[3375]: E1009 07:18:10.552978 3375 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.553261 kubelet[3375]: W1009 07:18:10.553188 3375 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.553261 kubelet[3375]: E1009 07:18:10.553221 3375 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.559387 containerd[1982]: time="2024-10-09T07:18:10.558878929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-58n7j,Uid:acafa9bc-8bfa-4cbd-a543-fe1dabc32950,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4ee10ee9523bf6c70ec657c374b4a003bcf8e4fac9ee90ba739b59b787566d8\"" Oct 9 07:18:10.563878 containerd[1982]: time="2024-10-09T07:18:10.563822418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:18:10.619132 containerd[1982]: time="2024-10-09T07:18:10.619016599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b6b797f6f-jm56v,Uid:4c5e077e-ecd9-4941-a2ee-0359a3087b05,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3f6b1991ba76fd92bca1fdaadc71d24044e9d8dbe44243e6af6cffc390b0f72\"" Oct 9 07:18:12.275969 kubelet[3375]: E1009 07:18:12.275746 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxzfq" podUID="7a7754c0-14d0-4084-9006-948f71afe7d1" Oct 9 07:18:12.289583 containerd[1982]: time="2024-10-09T07:18:12.289532274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:12.293642 containerd[1982]: time="2024-10-09T07:18:12.293517821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:18:12.296449 containerd[1982]: time="2024-10-09T07:18:12.296391169Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:12.303533 containerd[1982]: time="2024-10-09T07:18:12.303482195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:12.305487 containerd[1982]: time="2024-10-09T07:18:12.305410998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.741526108s" Oct 9 07:18:12.305612 containerd[1982]: time="2024-10-09T07:18:12.305491575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:18:12.308890 containerd[1982]: time="2024-10-09T07:18:12.307775746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:18:12.313379 containerd[1982]: time="2024-10-09T07:18:12.313317955Z" level=info msg="CreateContainer within sandbox \"a4ee10ee9523bf6c70ec657c374b4a003bcf8e4fac9ee90ba739b59b787566d8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:18:12.377872 containerd[1982]: time="2024-10-09T07:18:12.377822759Z" level=info msg="CreateContainer within sandbox \"a4ee10ee9523bf6c70ec657c374b4a003bcf8e4fac9ee90ba739b59b787566d8\" 
for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"64dc983a594241a668fb781ece09630c25ee7a2bffe041a5b5663240367ddcc0\"" Oct 9 07:18:12.379297 containerd[1982]: time="2024-10-09T07:18:12.379262876Z" level=info msg="StartContainer for \"64dc983a594241a668fb781ece09630c25ee7a2bffe041a5b5663240367ddcc0\"" Oct 9 07:18:12.461321 systemd[1]: Started cri-containerd-64dc983a594241a668fb781ece09630c25ee7a2bffe041a5b5663240367ddcc0.scope - libcontainer container 64dc983a594241a668fb781ece09630c25ee7a2bffe041a5b5663240367ddcc0. Oct 9 07:18:12.514149 containerd[1982]: time="2024-10-09T07:18:12.513795232Z" level=info msg="StartContainer for \"64dc983a594241a668fb781ece09630c25ee7a2bffe041a5b5663240367ddcc0\" returns successfully" Oct 9 07:18:12.543920 systemd[1]: cri-containerd-64dc983a594241a668fb781ece09630c25ee7a2bffe041a5b5663240367ddcc0.scope: Deactivated successfully. Oct 9 07:18:12.586848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64dc983a594241a668fb781ece09630c25ee7a2bffe041a5b5663240367ddcc0-rootfs.mount: Deactivated successfully. Oct 9 07:18:12.635020 containerd[1982]: time="2024-10-09T07:18:12.590134996Z" level=info msg="shim disconnected" id=64dc983a594241a668fb781ece09630c25ee7a2bffe041a5b5663240367ddcc0 namespace=k8s.io Oct 9 07:18:12.635020 containerd[1982]: time="2024-10-09T07:18:12.634427647Z" level=warning msg="cleaning up after shim disconnected" id=64dc983a594241a668fb781ece09630c25ee7a2bffe041a5b5663240367ddcc0 namespace=k8s.io Oct 9 07:18:12.635020 containerd[1982]: time="2024-10-09T07:18:12.634446875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:18:12.659302 containerd[1982]: time="2024-10-09T07:18:12.657915221Z" level=warning msg="cleanup warnings time=\"2024-10-09T07:18:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 07:18:14.276367 kubelet[3375]: E1009 07:18:14.275928 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxzfq" podUID="7a7754c0-14d0-4084-9006-948f71afe7d1" Oct 9 07:18:15.362767 containerd[1982]: time="2024-10-09T07:18:15.362503394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:15.365332 containerd[1982]: time="2024-10-09T07:18:15.365243390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:18:15.366559 containerd[1982]: time="2024-10-09T07:18:15.366519093Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:15.376570 containerd[1982]: time="2024-10-09T07:18:15.376526181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:15.377448 containerd[1982]: time="2024-10-09T07:18:15.377407987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.069592067s" Oct 9 07:18:15.377565 containerd[1982]: time="2024-10-09T07:18:15.377452657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:18:15.381430 containerd[1982]: time="2024-10-09T07:18:15.379276086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:18:15.417020 containerd[1982]: time="2024-10-09T07:18:15.416953749Z" level=info msg="CreateContainer within sandbox \"b3f6b1991ba76fd92bca1fdaadc71d24044e9d8dbe44243e6af6cffc390b0f72\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:18:15.445217 containerd[1982]: time="2024-10-09T07:18:15.444549303Z" level=info msg="CreateContainer within sandbox \"b3f6b1991ba76fd92bca1fdaadc71d24044e9d8dbe44243e6af6cffc390b0f72\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e9618c42d394dcc834fb5f32c3f79864f7589ed194d046fcee2a8d5f94e3d791\"" Oct 9 07:18:15.444753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048621493.mount: Deactivated successfully. Oct 9 07:18:15.449322 containerd[1982]: time="2024-10-09T07:18:15.449275165Z" level=info msg="StartContainer for \"e9618c42d394dcc834fb5f32c3f79864f7589ed194d046fcee2a8d5f94e3d791\"" Oct 9 07:18:15.558873 systemd[1]: Started cri-containerd-e9618c42d394dcc834fb5f32c3f79864f7589ed194d046fcee2a8d5f94e3d791.scope - libcontainer container e9618c42d394dcc834fb5f32c3f79864f7589ed194d046fcee2a8d5f94e3d791. Oct 9 07:18:15.683668 containerd[1982]: time="2024-10-09T07:18:15.683026376Z" level=info msg="StartContainer for \"e9618c42d394dcc834fb5f32c3f79864f7589ed194d046fcee2a8d5f94e3d791\" returns successfully" Oct 9 07:18:16.276176 kubelet[3375]: E1009 07:18:16.276135 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxzfq" podUID="7a7754c0-14d0-4084-9006-948f71afe7d1" Oct 9 07:18:16.517926 kubelet[3375]: I1009 07:18:16.517335 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-b6b797f6f-jm56v" podStartSLOduration=2.759144405 podStartE2EDuration="7.516233005s" podCreationTimestamp="2024-10-09 07:18:09 +0000 UTC" firstStartedPulling="2024-10-09 07:18:10.621126179 +0000 UTC m=+21.568212590" lastFinishedPulling="2024-10-09 07:18:15.378214765 +0000 UTC m=+26.325301190" observedRunningTime="2024-10-09 07:18:16.516109511 +0000 UTC m=+27.463195943" watchObservedRunningTime="2024-10-09 07:18:16.516233005 +0000 UTC m=+27.463319436" Oct 9 07:18:17.507094 kubelet[3375]: I1009 07:18:17.507054 3375 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:18:18.275711 kubelet[3375]: E1009 07:18:18.275662 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxzfq" podUID="7a7754c0-14d0-4084-9006-948f71afe7d1" Oct 9 07:18:18.759551 kubelet[3375]: I1009 07:18:18.756908 3375 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Oct 9 07:18:20.278372 kubelet[3375]: E1009 07:18:20.275289 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxzfq" podUID="7a7754c0-14d0-4084-9006-948f71afe7d1" Oct 9 07:18:20.417503 containerd[1982]: time="2024-10-09T07:18:20.417450111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:20.419139 containerd[1982]: time="2024-10-09T07:18:20.419089120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:18:20.420862 containerd[1982]: time="2024-10-09T07:18:20.420602341Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:20.423384 containerd[1982]: time="2024-10-09T07:18:20.423322368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:20.424160 containerd[1982]: time="2024-10-09T07:18:20.424123273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 5.04480631s" Oct 9 07:18:20.424237 containerd[1982]: time="2024-10-09T07:18:20.424167526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:18:20.426428 containerd[1982]: time="2024-10-09T07:18:20.426405218Z" level=info msg="CreateContainer within sandbox \"a4ee10ee9523bf6c70ec657c374b4a003bcf8e4fac9ee90ba739b59b787566d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:18:20.463609 containerd[1982]: time="2024-10-09T07:18:20.463452763Z" level=info msg="CreateContainer within sandbox \"a4ee10ee9523bf6c70ec657c374b4a003bcf8e4fac9ee90ba739b59b787566d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d697a4bd6db390a0ab416436ccf8941a1f5290a68885318429ac7f426d9a6781\"" Oct 9 07:18:20.465602 containerd[1982]: time="2024-10-09T07:18:20.464748331Z" level=info msg="StartContainer for \"d697a4bd6db390a0ab416436ccf8941a1f5290a68885318429ac7f426d9a6781\"" Oct 9 07:18:20.556573 systemd[1]: Started cri-containerd-d697a4bd6db390a0ab416436ccf8941a1f5290a68885318429ac7f426d9a6781.scope - libcontainer container d697a4bd6db390a0ab416436ccf8941a1f5290a68885318429ac7f426d9a6781. Oct 9 07:18:20.601771 containerd[1982]: time="2024-10-09T07:18:20.600981095Z" level=info msg="StartContainer for \"d697a4bd6db390a0ab416436ccf8941a1f5290a68885318429ac7f426d9a6781\" returns successfully" Oct 9 07:18:21.977777 systemd[1]: cri-containerd-d697a4bd6db390a0ab416436ccf8941a1f5290a68885318429ac7f426d9a6781.scope: Deactivated successfully. 
Oct 9 07:18:22.032083 kubelet[3375]: I1009 07:18:22.032055 3375 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 07:18:22.045662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d697a4bd6db390a0ab416436ccf8941a1f5290a68885318429ac7f426d9a6781-rootfs.mount: Deactivated successfully. Oct 9 07:18:22.058783 containerd[1982]: time="2024-10-09T07:18:22.058717777Z" level=info msg="shim disconnected" id=d697a4bd6db390a0ab416436ccf8941a1f5290a68885318429ac7f426d9a6781 namespace=k8s.io Oct 9 07:18:22.059703 containerd[1982]: time="2024-10-09T07:18:22.059450490Z" level=warning msg="cleaning up after shim disconnected" id=d697a4bd6db390a0ab416436ccf8941a1f5290a68885318429ac7f426d9a6781 namespace=k8s.io Oct 9 07:18:22.059703 containerd[1982]: time="2024-10-09T07:18:22.059478375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:18:22.127797 kubelet[3375]: I1009 07:18:22.127760 3375 topology_manager.go:215] "Topology Admit Handler" podUID="24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d" podNamespace="kube-system" podName="coredns-76f75df574-54ntm" Oct 9 07:18:22.130375 kubelet[3375]: I1009 07:18:22.128315 3375 topology_manager.go:215] "Topology Admit Handler" podUID="a0a45f61-4fac-4f17-b5b5-47247f5a0090" podNamespace="kube-system" podName="coredns-76f75df574-h7rpt" Oct 9 07:18:22.137813 kubelet[3375]: I1009 07:18:22.137669 3375 topology_manager.go:215] "Topology Admit Handler" podUID="929a21d5-9f89-46e5-b6f2-e6f1adb14ec5" podNamespace="calico-system" podName="calico-kube-controllers-67d6d587cd-v94sc" Oct 9 07:18:22.141492 kubelet[3375]: I1009 07:18:22.140819 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/929a21d5-9f89-46e5-b6f2-e6f1adb14ec5-tigera-ca-bundle\") pod \"calico-kube-controllers-67d6d587cd-v94sc\" (UID: \"929a21d5-9f89-46e5-b6f2-e6f1adb14ec5\") " pod="calico-system/calico-kube-controllers-67d6d587cd-v94sc" Oct 9 07:18:22.141492 kubelet[3375]: I1009 07:18:22.140876 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vppsm\" (UniqueName: \"kubernetes.io/projected/24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d-kube-api-access-vppsm\") pod \"coredns-76f75df574-54ntm\" (UID: \"24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d\") " pod="kube-system/coredns-76f75df574-54ntm" Oct 9 07:18:22.141492 kubelet[3375]: I1009 07:18:22.140918 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phf8h\" (UniqueName: \"kubernetes.io/projected/929a21d5-9f89-46e5-b6f2-e6f1adb14ec5-kube-api-access-phf8h\") pod \"calico-kube-controllers-67d6d587cd-v94sc\" (UID: \"929a21d5-9f89-46e5-b6f2-e6f1adb14ec5\") " pod="calico-system/calico-kube-controllers-67d6d587cd-v94sc" Oct 9 07:18:22.141492 kubelet[3375]: I1009 07:18:22.140954 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d-config-volume\") pod \"coredns-76f75df574-54ntm\" (UID: \"24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d\") " pod="kube-system/coredns-76f75df574-54ntm" Oct 9 07:18:22.141492 kubelet[3375]: I1009 07:18:22.140993 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0a45f61-4fac-4f17-b5b5-47247f5a0090-config-volume\") pod 
\"coredns-76f75df574-h7rpt\" (UID: \"a0a45f61-4fac-4f17-b5b5-47247f5a0090\") " pod="kube-system/coredns-76f75df574-h7rpt" Oct 9 07:18:22.141942 kubelet[3375]: I1009 07:18:22.141098 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgqxr\" (UniqueName: \"kubernetes.io/projected/a0a45f61-4fac-4f17-b5b5-47247f5a0090-kube-api-access-qgqxr\") pod \"coredns-76f75df574-h7rpt\" (UID: \"a0a45f61-4fac-4f17-b5b5-47247f5a0090\") " pod="kube-system/coredns-76f75df574-h7rpt" Oct 9 07:18:22.158807 systemd[1]: Created slice kubepods-burstable-pod24d677fb_e0a9_4c5a_99f8_f0eb4ad0492d.slice - libcontainer container kubepods-burstable-pod24d677fb_e0a9_4c5a_99f8_f0eb4ad0492d.slice. Oct 9 07:18:22.171797 systemd[1]: Created slice kubepods-burstable-poda0a45f61_4fac_4f17_b5b5_47247f5a0090.slice - libcontainer container kubepods-burstable-poda0a45f61_4fac_4f17_b5b5_47247f5a0090.slice. Oct 9 07:18:22.186445 systemd[1]: Created slice kubepods-besteffort-pod929a21d5_9f89_46e5_b6f2_e6f1adb14ec5.slice - libcontainer container kubepods-besteffort-pod929a21d5_9f89_46e5_b6f2_e6f1adb14ec5.slice. Oct 9 07:18:22.307596 systemd[1]: Created slice kubepods-besteffort-pod7a7754c0_14d0_4084_9006_948f71afe7d1.slice - libcontainer container kubepods-besteffort-pod7a7754c0_14d0_4084_9006_948f71afe7d1.slice. Oct 9 07:18:22.312590 containerd[1982]: time="2024-10-09T07:18:22.312544848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rxzfq,Uid:7a7754c0-14d0-4084-9006-948f71afe7d1,Namespace:calico-system,Attempt:0,}" Oct 9 07:18:22.469644 containerd[1982]: time="2024-10-09T07:18:22.468747004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-54ntm,Uid:24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d,Namespace:kube-system,Attempt:0,}" Oct 9 07:18:22.486487 containerd[1982]: time="2024-10-09T07:18:22.484776130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7rpt,Uid:a0a45f61-4fac-4f17-b5b5-47247f5a0090,Namespace:kube-system,Attempt:0,}" Oct 9 07:18:22.495566 containerd[1982]: time="2024-10-09T07:18:22.495522306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d6d587cd-v94sc,Uid:929a21d5-9f89-46e5-b6f2-e6f1adb14ec5,Namespace:calico-system,Attempt:0,}" Oct 9 07:18:22.654261 containerd[1982]: time="2024-10-09T07:18:22.651871194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:18:22.780461 containerd[1982]: time="2024-10-09T07:18:22.780208278Z" level=error msg="Failed to destroy network for sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.790517 containerd[1982]: time="2024-10-09T07:18:22.790183080Z" level=error msg="encountered an error cleaning up failed sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.790517 containerd[1982]: time="2024-10-09T07:18:22.790294825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rxzfq,Uid:7a7754c0-14d0-4084-9006-948f71afe7d1,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.795271 kubelet[3375]: E1009 07:18:22.793672 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.795271 kubelet[3375]: E1009 07:18:22.793893 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rxzfq" Oct 9 07:18:22.795271 kubelet[3375]: E1009 07:18:22.793929 3375 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rxzfq" Oct 9 07:18:22.795609 kubelet[3375]: E1009 07:18:22.794007 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rxzfq_calico-system(7a7754c0-14d0-4084-9006-948f71afe7d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rxzfq_calico-system(7a7754c0-14d0-4084-9006-948f71afe7d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rxzfq" podUID="7a7754c0-14d0-4084-9006-948f71afe7d1" Oct 9 07:18:22.863191 containerd[1982]: time="2024-10-09T07:18:22.862809135Z" level=error msg="Failed to destroy network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.864022 containerd[1982]: time="2024-10-09T07:18:22.863969215Z" level=error msg="encountered an error cleaning up failed sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.864639 containerd[1982]: time="2024-10-09T07:18:22.864548249Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-54ntm,Uid:24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.866973 kubelet[3375]: E1009 07:18:22.865583 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.866973 kubelet[3375]: E1009 07:18:22.865651 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-54ntm" Oct 9 07:18:22.866973 kubelet[3375]: E1009 07:18:22.865681 3375 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-54ntm" Oct 9 07:18:22.867267 kubelet[3375]: E1009 07:18:22.865749 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-54ntm_kube-system(24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-54ntm_kube-system(24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-54ntm" podUID="24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d" Oct 9 07:18:22.870175 containerd[1982]: time="2024-10-09T07:18:22.870133954Z" level=error msg="Failed to destroy network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.871047 containerd[1982]: time="2024-10-09T07:18:22.870844663Z" level=error msg="encountered an error cleaning up failed sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.871047 containerd[1982]: 
time="2024-10-09T07:18:22.870916926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7rpt,Uid:a0a45f61-4fac-4f17-b5b5-47247f5a0090,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.872339 kubelet[3375]: E1009 07:18:22.872252 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.872339 kubelet[3375]: E1009 07:18:22.872315 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-h7rpt" Oct 9 07:18:22.872616 kubelet[3375]: E1009 07:18:22.872342 3375 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-h7rpt" Oct 9 07:18:22.872616 kubelet[3375]: E1009 07:18:22.872438 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-h7rpt_kube-system(a0a45f61-4fac-4f17-b5b5-47247f5a0090)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-h7rpt_kube-system(a0a45f61-4fac-4f17-b5b5-47247f5a0090)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-h7rpt" podUID="a0a45f61-4fac-4f17-b5b5-47247f5a0090" Oct 9 07:18:22.879217 containerd[1982]: time="2024-10-09T07:18:22.879173784Z" level=error msg="Failed to destroy network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.879630 containerd[1982]: time="2024-10-09T07:18:22.879587217Z" level=error msg="encountered an error cleaning up failed sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 9 07:18:22.879750 containerd[1982]: time="2024-10-09T07:18:22.879654835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d6d587cd-v94sc,Uid:929a21d5-9f89-46e5-b6f2-e6f1adb14ec5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.879952 kubelet[3375]: E1009 07:18:22.879926 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:22.880460 kubelet[3375]: E1009 07:18:22.880001 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d6d587cd-v94sc" Oct 9 07:18:22.880460 kubelet[3375]: E1009 07:18:22.880033 3375 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d6d587cd-v94sc" Oct 9 07:18:22.880460 kubelet[3375]: E1009 07:18:22.880110 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d6d587cd-v94sc_calico-system(929a21d5-9f89-46e5-b6f2-e6f1adb14ec5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d6d587cd-v94sc_calico-system(929a21d5-9f89-46e5-b6f2-e6f1adb14ec5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d6d587cd-v94sc" podUID="929a21d5-9f89-46e5-b6f2-e6f1adb14ec5" Oct 9 07:18:23.045818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b-shm.mount: Deactivated successfully. 
Oct 9 07:18:23.618167 kubelet[3375]: I1009 07:18:23.618137 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:23.620621 kubelet[3375]: I1009 07:18:23.619804 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:23.622078 kubelet[3375]: I1009 07:18:23.621711 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:23.624220 kubelet[3375]: I1009 07:18:23.624174 3375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:23.640108 containerd[1982]: time="2024-10-09T07:18:23.638400271Z" level=info msg="StopPodSandbox for \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\"" Oct 9 07:18:23.640628 containerd[1982]: time="2024-10-09T07:18:23.640299785Z" level=info msg="StopPodSandbox for \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\"" Oct 9 07:18:23.641385 containerd[1982]: time="2024-10-09T07:18:23.640793664Z" level=info msg="Ensure that sandbox 4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4 in task-service has been cleanup successfully" Oct 9 07:18:23.641385 containerd[1982]: time="2024-10-09T07:18:23.640932227Z" level=info msg="StopPodSandbox for \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\"" Oct 9 07:18:23.641385 containerd[1982]: time="2024-10-09T07:18:23.641214724Z" level=info msg="Ensure that sandbox e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b in task-service has been cleanup successfully" Oct 9 07:18:23.642283 containerd[1982]: time="2024-10-09T07:18:23.641763139Z" level=info msg="StopPodSandbox for \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\"" Oct 9 07:18:23.642283 containerd[1982]: time="2024-10-09T07:18:23.642006215Z" level=info msg="Ensure that sandbox 810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283 in task-service has been cleanup successfully" Oct 9 07:18:23.643516 containerd[1982]: time="2024-10-09T07:18:23.640875454Z" level=info msg="Ensure that sandbox 1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5 in task-service has been cleanup successfully" Oct 9 07:18:23.709330 containerd[1982]: time="2024-10-09T07:18:23.709274316Z" level=error msg="StopPodSandbox for \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\" failed" error="failed to destroy network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:23.710684 kubelet[3375]: E1009 07:18:23.710655 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:23.711154 kubelet[3375]: 
E1009 07:18:23.711135 3375 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5"} Oct 9 07:18:23.711760 kubelet[3375]: E1009 07:18:23.711568 3375 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0a45f61-4fac-4f17-b5b5-47247f5a0090\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:18:23.711760 kubelet[3375]: E1009 07:18:23.711730 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0a45f61-4fac-4f17-b5b5-47247f5a0090\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-h7rpt" podUID="a0a45f61-4fac-4f17-b5b5-47247f5a0090" Oct 9 07:18:23.741206 containerd[1982]: time="2024-10-09T07:18:23.740571687Z" level=error msg="StopPodSandbox for \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\" failed" error="failed to destroy network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:23.742490 kubelet[3375]: E1009 07:18:23.741027 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:23.742490 kubelet[3375]: E1009 07:18:23.741305 3375 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4"} Oct 9 07:18:23.742490 kubelet[3375]: E1009 07:18:23.741851 3375 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:18:23.742490 kubelet[3375]: E1009 07:18:23.741989 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-54ntm" podUID="24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d" Oct 9 07:18:23.751121 containerd[1982]: time="2024-10-09T07:18:23.751060396Z" level=error msg="StopPodSandbox for \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\" failed" error="failed to destroy network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:23.752025 kubelet[3375]: E1009 07:18:23.751406 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:23.752025 kubelet[3375]: E1009 07:18:23.751456 3375 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283"} Oct 9 07:18:23.752025 kubelet[3375]: E1009 07:18:23.751549 3375 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"929a21d5-9f89-46e5-b6f2-e6f1adb14ec5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:18:23.752025 kubelet[3375]: E1009 07:18:23.751886 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"929a21d5-9f89-46e5-b6f2-e6f1adb14ec5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d6d587cd-v94sc" podUID="929a21d5-9f89-46e5-b6f2-e6f1adb14ec5" Oct 9 07:18:23.771127 containerd[1982]: time="2024-10-09T07:18:23.770989028Z" level=error msg="StopPodSandbox for \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\" failed" error="failed to destroy network for sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:23.772715 kubelet[3375]: E1009 07:18:23.771416 3375 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:23.772715 kubelet[3375]: E1009 07:18:23.771486 3375 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b"} Oct 9 07:18:23.772715 kubelet[3375]: E1009 07:18:23.771550 3375 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a7754c0-14d0-4084-9006-948f71afe7d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:18:23.772715 kubelet[3375]: E1009 07:18:23.771625 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a7754c0-14d0-4084-9006-948f71afe7d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rxzfq" podUID="7a7754c0-14d0-4084-9006-948f71afe7d1" Oct 9 07:18:29.567821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670745430.mount: Deactivated successfully. 
Oct 9 07:18:29.810610 containerd[1982]: time="2024-10-09T07:18:29.809343889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:29.810610 containerd[1982]: time="2024-10-09T07:18:29.809461803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:18:29.867131 containerd[1982]: time="2024-10-09T07:18:29.866993622Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:29.869857 containerd[1982]: time="2024-10-09T07:18:29.869816877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:29.903044 containerd[1982]: time="2024-10-09T07:18:29.885893887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 7.219449092s" Oct 9 07:18:29.903279 containerd[1982]: time="2024-10-09T07:18:29.903251661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:18:30.073687 containerd[1982]: time="2024-10-09T07:18:30.073633243Z" level=info msg="CreateContainer within sandbox \"a4ee10ee9523bf6c70ec657c374b4a003bcf8e4fac9ee90ba739b59b787566d8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:18:30.147787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342402605.mount: Deactivated successfully. Oct 9 07:18:30.184495 containerd[1982]: time="2024-10-09T07:18:30.184445547Z" level=info msg="CreateContainer within sandbox \"a4ee10ee9523bf6c70ec657c374b4a003bcf8e4fac9ee90ba739b59b787566d8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"51ee42719483ae08bff2061e1f67643c7b19a669693850b5061975ed99e16136\"" Oct 9 07:18:30.185343 containerd[1982]: time="2024-10-09T07:18:30.185298389Z" level=info msg="StartContainer for \"51ee42719483ae08bff2061e1f67643c7b19a669693850b5061975ed99e16136\"" Oct 9 07:18:30.308815 systemd[1]: Started cri-containerd-51ee42719483ae08bff2061e1f67643c7b19a669693850b5061975ed99e16136.scope - libcontainer container 51ee42719483ae08bff2061e1f67643c7b19a669693850b5061975ed99e16136. Oct 9 07:18:30.365424 containerd[1982]: time="2024-10-09T07:18:30.365268405Z" level=info msg="StartContainer for \"51ee42719483ae08bff2061e1f67643c7b19a669693850b5061975ed99e16136\" returns successfully" Oct 9 07:18:30.521395 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:18:30.523506 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 9 07:18:33.177799 kernel: bpftool[4562]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:18:33.523862 systemd-networkd[1823]: vxlan.calico: Link UP Oct 9 07:18:33.523875 systemd-networkd[1823]: vxlan.calico: Gained carrier Oct 9 07:18:33.523933 (udev-worker)[4587]: Network interface NamePolicy= disabled on kernel command line. 
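The pull of ghcr.io/flatcar/calico/node:v3.28.1 above reports 117873564 bytes read in 7.219449092s. A quick throughput calculation from just those two logged numbers, nothing more:

    // Back-of-the-envelope transfer rate for the calico/node image pull recorded above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 117873564                    // "bytes read=117873564"
        dur, err := time.ParseDuration("7.219449092s") // "in 7.219449092s"
        if err != nil {
            panic(err)
        }
        bps := float64(bytesRead) / dur.Seconds()
        // Prints roughly 16.3 MB/s (about 15.6 MiB/s) for this pull.
        fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bps/1e6, bps/(1<<20))
    }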
Oct 9 07:18:33.565589 (udev-worker)[4348]: Network interface NamePolicy= disabled on kernel command line. Oct 9 07:18:33.575928 (udev-worker)[4595]: Network interface NamePolicy= disabled on kernel command line. Oct 9 07:18:34.278016 containerd[1982]: time="2024-10-09T07:18:34.277728824Z" level=info msg="StopPodSandbox for \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\"" Oct 9 07:18:34.419436 kubelet[3375]: I1009 07:18:34.419384 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-58n7j" podStartSLOduration=6.046276667 podStartE2EDuration="25.388954458s" podCreationTimestamp="2024-10-09 07:18:09 +0000 UTC" firstStartedPulling="2024-10-09 07:18:10.560963703 +0000 UTC m=+21.508050127" lastFinishedPulling="2024-10-09 07:18:29.903641496 +0000 UTC m=+40.850727918" observedRunningTime="2024-10-09 07:18:30.917796871 +0000 UTC m=+41.864883303" watchObservedRunningTime="2024-10-09 07:18:34.388954458 +0000 UTC m=+45.336040890" Oct 9 07:18:34.605171 systemd-networkd[1823]: vxlan.calico: Gained IPv6LL Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.388 [INFO][4645] k8s.go 608: Cleaning up netns ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.391 [INFO][4645] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" iface="eth0" netns="/var/run/netns/cni-cead99b2-1b6a-7c54-fbd2-7cf571534788" Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.393 [INFO][4645] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" iface="eth0" netns="/var/run/netns/cni-cead99b2-1b6a-7c54-fbd2-7cf571534788" Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.397 [INFO][4645] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" iface="eth0" netns="/var/run/netns/cni-cead99b2-1b6a-7c54-fbd2-7cf571534788" Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.397 [INFO][4645] k8s.go 615: Releasing IP address(es) ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.397 [INFO][4645] utils.go 188: Calico CNI releasing IP address ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.658 [INFO][4651] ipam_plugin.go 417: Releasing address using handleID ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" HandleID="k8s-pod-network.1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.659 [INFO][4651] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.660 [INFO][4651] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.672 [WARNING][4651] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" HandleID="k8s-pod-network.1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.672 [INFO][4651] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" HandleID="k8s-pod-network.1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.675 [INFO][4651] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:34.680674 containerd[1982]: 2024-10-09 07:18:34.677 [INFO][4645] k8s.go 621: Teardown processing complete. ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:34.685537 containerd[1982]: time="2024-10-09T07:18:34.685488861Z" level=info msg="TearDown network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\" successfully" Oct 9 07:18:34.685649 containerd[1982]: time="2024-10-09T07:18:34.685543438Z" level=info msg="StopPodSandbox for \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\" returns successfully" Oct 9 07:18:34.690254 systemd[1]: run-netns-cni\x2dcead99b2\x2d1b6a\x2d7c54\x2dfbd2\x2d7cf571534788.mount: Deactivated successfully. Oct 9 07:18:34.695925 containerd[1982]: time="2024-10-09T07:18:34.695884686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7rpt,Uid:a0a45f61-4fac-4f17-b5b5-47247f5a0090,Namespace:kube-system,Attempt:1,}" Oct 9 07:18:34.957902 systemd-networkd[1823]: cali907549639f6: Link UP Oct 9 07:18:34.960001 systemd-networkd[1823]: cali907549639f6: Gained carrier Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.828 [INFO][4659] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0 coredns-76f75df574- kube-system a0a45f61-4fac-4f17-b5b5-47247f5a0090 722 0 2024-10-09 07:18:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-194 coredns-76f75df574-h7rpt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali907549639f6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Namespace="kube-system" Pod="coredns-76f75df574-h7rpt" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.828 [INFO][4659] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Namespace="kube-system" Pod="coredns-76f75df574-h7rpt" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.878 [INFO][4671] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" HandleID="k8s-pod-network.2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 
07:18:34.896 [INFO][4671] ipam_plugin.go 270: Auto assigning IP ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" HandleID="k8s-pod-network.2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291a40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-194", "pod":"coredns-76f75df574-h7rpt", "timestamp":"2024-10-09 07:18:34.878443422 +0000 UTC"}, Hostname:"ip-172-31-23-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.896 [INFO][4671] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.896 [INFO][4671] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.897 [INFO][4671] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-194' Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.903 [INFO][4671] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" host="ip-172-31-23-194" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.915 [INFO][4671] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-194" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.922 [INFO][4671] ipam.go 489: Trying affinity for 192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.926 [INFO][4671] ipam.go 155: Attempting to load block cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.929 [INFO][4671] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.929 [INFO][4671] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" host="ip-172-31-23-194" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.931 [INFO][4671] ipam.go 1685: Creating new handle: k8s-pod-network.2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55 Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.936 [INFO][4671] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" host="ip-172-31-23-194" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.945 [INFO][4671] ipam.go 1216: Successfully claimed IPs: [192.168.60.1/26] block=192.168.60.0/26 handle="k8s-pod-network.2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" host="ip-172-31-23-194" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.945 [INFO][4671] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.1/26] handle="k8s-pod-network.2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" host="ip-172-31-23-194" Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.945 [INFO][4671] ipam_plugin.go 379: Released host-wide IPAM lock. 
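Each IPAM request above is bracketed by "About to acquire host-wide IPAM lock", "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock", so address assignment on a node is serialized. The sketch below only illustrates that acquire-assign-release pattern with an in-process mutex; Calico's real lock spans separate CNI invocations on the host, and the assignment helper here is hypothetical:

    // Illustration of the serialize-then-assign pattern visible in the ipam_plugin.go lines.
    package main

    import (
        "fmt"
        "sync"
    )

    var hostIPAMLock sync.Mutex // stand-in for the host-wide lock named in the log

    // assignOne claims the next free address while holding the lock, so concurrent
    // CNI ADDs on the same node cannot hand out the same IP.
    func assignOne(nextFree *int) string {
        hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."
        ip := fmt.Sprintf("192.168.60.%d/26", *nextFree)
        *nextFree++
        return ip
    }

    func main() {
        next := 1
        pods := []string{"coredns-76f75df574-h7rpt", "csi-node-driver-rxzfq", "calico-kube-controllers-67d6d587cd-v94sc"}
        for _, pod := range pods {
            fmt.Println(pod, "->", assignOne(&next))
        }
        // Matches the .1, .2 and .3 assignments seen in this log.
    }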
Oct 9 07:18:34.989660 containerd[1982]: 2024-10-09 07:18:34.945 [INFO][4671] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.1/26] IPv6=[] ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" HandleID="k8s-pod-network.2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:34.995947 containerd[1982]: 2024-10-09 07:18:34.952 [INFO][4659] k8s.go 386: Populated endpoint ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Namespace="kube-system" Pod="coredns-76f75df574-h7rpt" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a0a45f61-4fac-4f17-b5b5-47247f5a0090", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"", Pod:"coredns-76f75df574-h7rpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali907549639f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:34.995947 containerd[1982]: 2024-10-09 07:18:34.952 [INFO][4659] k8s.go 387: Calico CNI using IPs: [192.168.60.1/32] ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Namespace="kube-system" Pod="coredns-76f75df574-h7rpt" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:34.995947 containerd[1982]: 2024-10-09 07:18:34.952 [INFO][4659] dataplane_linux.go 68: Setting the host side veth name to cali907549639f6 ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Namespace="kube-system" Pod="coredns-76f75df574-h7rpt" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:34.995947 containerd[1982]: 2024-10-09 07:18:34.957 [INFO][4659] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Namespace="kube-system" Pod="coredns-76f75df574-h7rpt" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:34.995947 containerd[1982]: 2024-10-09 
07:18:34.959 [INFO][4659] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Namespace="kube-system" Pod="coredns-76f75df574-h7rpt" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a0a45f61-4fac-4f17-b5b5-47247f5a0090", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55", Pod:"coredns-76f75df574-h7rpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali907549639f6", MAC:"16:68:9d:ac:14:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:34.995947 containerd[1982]: 2024-10-09 07:18:34.979 [INFO][4659] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55" Namespace="kube-system" Pod="coredns-76f75df574-h7rpt" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:35.122665 containerd[1982]: time="2024-10-09T07:18:35.111908867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:35.122905 containerd[1982]: time="2024-10-09T07:18:35.122636830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:35.122905 containerd[1982]: time="2024-10-09T07:18:35.122665208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:35.122905 containerd[1982]: time="2024-10-09T07:18:35.122679499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:35.178673 systemd[1]: Started cri-containerd-2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55.scope - libcontainer container 2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55. 
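The WorkloadEndpoint dump above prints the coredns ports in hex (Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics). Decoding them shows the expected values, 53 and 9153:

    // Confirms the hex port values from the endpoint spec above.
    package main

    import "fmt"

    func main() {
        ports := map[string]uint16{
            "dns":     0x35,   // UDP, from the endpoint spec in the log
            "dns-tcp": 0x35,   // TCP
            "metrics": 0x23c1, // TCP
        }
        for name, p := range ports {
            fmt.Printf("%-8s -> %d\n", name, p) // dns -> 53, dns-tcp -> 53, metrics -> 9153
        }
    }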
Oct 9 07:18:35.283134 containerd[1982]: time="2024-10-09T07:18:35.283089976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7rpt,Uid:a0a45f61-4fac-4f17-b5b5-47247f5a0090,Namespace:kube-system,Attempt:1,} returns sandbox id \"2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55\"" Oct 9 07:18:35.289086 containerd[1982]: time="2024-10-09T07:18:35.288951861Z" level=info msg="CreateContainer within sandbox \"2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:18:35.320705 containerd[1982]: time="2024-10-09T07:18:35.320652779Z" level=info msg="CreateContainer within sandbox \"2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0a088bc9468e6e35f9149ce89933363e7b4250c48ca6166ba4b3c9162aeb85d\"" Oct 9 07:18:35.321509 containerd[1982]: time="2024-10-09T07:18:35.321475532Z" level=info msg="StartContainer for \"c0a088bc9468e6e35f9149ce89933363e7b4250c48ca6166ba4b3c9162aeb85d\"" Oct 9 07:18:35.359588 systemd[1]: Started cri-containerd-c0a088bc9468e6e35f9149ce89933363e7b4250c48ca6166ba4b3c9162aeb85d.scope - libcontainer container c0a088bc9468e6e35f9149ce89933363e7b4250c48ca6166ba4b3c9162aeb85d. Oct 9 07:18:35.424870 containerd[1982]: time="2024-10-09T07:18:35.424815739Z" level=info msg="StartContainer for \"c0a088bc9468e6e35f9149ce89933363e7b4250c48ca6166ba4b3c9162aeb85d\" returns successfully" Oct 9 07:18:35.911334 kubelet[3375]: I1009 07:18:35.911215 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-h7rpt" podStartSLOduration=33.910999757 podStartE2EDuration="33.910999757s" podCreationTimestamp="2024-10-09 07:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:18:35.909849389 +0000 UTC m=+46.856935820" watchObservedRunningTime="2024-10-09 07:18:35.910999757 +0000 UTC m=+46.858086186" Oct 9 07:18:36.276775 containerd[1982]: time="2024-10-09T07:18:36.276733673Z" level=info msg="StopPodSandbox for \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\"" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.365 [INFO][4785] k8s.go 608: Cleaning up netns ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.365 [INFO][4785] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" iface="eth0" netns="/var/run/netns/cni-eb6bc90d-c838-12d5-7fba-1901e9766c50" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.366 [INFO][4785] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" iface="eth0" netns="/var/run/netns/cni-eb6bc90d-c838-12d5-7fba-1901e9766c50" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.367 [INFO][4785] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" iface="eth0" netns="/var/run/netns/cni-eb6bc90d-c838-12d5-7fba-1901e9766c50" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.368 [INFO][4785] k8s.go 615: Releasing IP address(es) ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.368 [INFO][4785] utils.go 188: Calico CNI releasing IP address ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.427 [INFO][4791] ipam_plugin.go 417: Releasing address using handleID ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" HandleID="k8s-pod-network.e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.427 [INFO][4791] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.427 [INFO][4791] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.436 [WARNING][4791] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" HandleID="k8s-pod-network.e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.436 [INFO][4791] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" HandleID="k8s-pod-network.e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.440 [INFO][4791] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:36.444057 containerd[1982]: 2024-10-09 07:18:36.441 [INFO][4785] k8s.go 621: Teardown processing complete. ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:36.449547 containerd[1982]: time="2024-10-09T07:18:36.444215410Z" level=info msg="TearDown network for sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\" successfully" Oct 9 07:18:36.449547 containerd[1982]: time="2024-10-09T07:18:36.444266973Z" level=info msg="StopPodSandbox for \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\" returns successfully" Oct 9 07:18:36.449547 containerd[1982]: time="2024-10-09T07:18:36.445560264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rxzfq,Uid:7a7754c0-14d0-4084-9006-948f71afe7d1,Namespace:calico-system,Attempt:1,}" Oct 9 07:18:36.450887 systemd[1]: run-netns-cni\x2deb6bc90d\x2dc838\x2d12d5\x2d7fba\x2d1901e9766c50.mount: Deactivated successfully. 
Oct 9 07:18:36.459663 systemd-networkd[1823]: cali907549639f6: Gained IPv6LL Oct 9 07:18:36.624402 systemd-networkd[1823]: cali7a51ef431c5: Link UP Oct 9 07:18:36.624893 systemd-networkd[1823]: cali7a51ef431c5: Gained carrier Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.533 [INFO][4797] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0 csi-node-driver- calico-system 7a7754c0-14d0-4084-9006-948f71afe7d1 737 0 2024-10-09 07:18:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-23-194 csi-node-driver-rxzfq eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali7a51ef431c5 [] []}} ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Namespace="calico-system" Pod="csi-node-driver-rxzfq" WorkloadEndpoint="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.533 [INFO][4797] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Namespace="calico-system" Pod="csi-node-driver-rxzfq" WorkloadEndpoint="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.571 [INFO][4809] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" HandleID="k8s-pod-network.1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.582 [INFO][4809] ipam_plugin.go 270: Auto assigning IP ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" HandleID="k8s-pod-network.1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000505f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-194", "pod":"csi-node-driver-rxzfq", "timestamp":"2024-10-09 07:18:36.571021705 +0000 UTC"}, Hostname:"ip-172-31-23-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.582 [INFO][4809] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.582 [INFO][4809] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.582 [INFO][4809] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-194' Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.585 [INFO][4809] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" host="ip-172-31-23-194" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.589 [INFO][4809] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-194" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.594 [INFO][4809] ipam.go 489: Trying affinity for 192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.596 [INFO][4809] ipam.go 155: Attempting to load block cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.599 [INFO][4809] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.599 [INFO][4809] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" host="ip-172-31-23-194" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.602 [INFO][4809] ipam.go 1685: Creating new handle: k8s-pod-network.1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203 Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.607 [INFO][4809] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" host="ip-172-31-23-194" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.616 [INFO][4809] ipam.go 1216: Successfully claimed IPs: [192.168.60.2/26] block=192.168.60.0/26 handle="k8s-pod-network.1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" host="ip-172-31-23-194" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.617 [INFO][4809] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.2/26] handle="k8s-pod-network.1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" host="ip-172-31-23-194" Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.617 [INFO][4809] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:18:36.657071 containerd[1982]: 2024-10-09 07:18:36.617 [INFO][4809] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.2/26] IPv6=[] ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" HandleID="k8s-pod-network.1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.660563 containerd[1982]: 2024-10-09 07:18:36.620 [INFO][4797] k8s.go 386: Populated endpoint ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Namespace="calico-system" Pod="csi-node-driver-rxzfq" WorkloadEndpoint="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a7754c0-14d0-4084-9006-948f71afe7d1", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"", Pod:"csi-node-driver-rxzfq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7a51ef431c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:36.660563 containerd[1982]: 2024-10-09 07:18:36.620 [INFO][4797] k8s.go 387: Calico CNI using IPs: [192.168.60.2/32] ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Namespace="calico-system" Pod="csi-node-driver-rxzfq" WorkloadEndpoint="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.660563 containerd[1982]: 2024-10-09 07:18:36.620 [INFO][4797] dataplane_linux.go 68: Setting the host side veth name to cali7a51ef431c5 ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Namespace="calico-system" Pod="csi-node-driver-rxzfq" WorkloadEndpoint="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.660563 containerd[1982]: 2024-10-09 07:18:36.627 [INFO][4797] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Namespace="calico-system" Pod="csi-node-driver-rxzfq" WorkloadEndpoint="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.660563 containerd[1982]: 2024-10-09 07:18:36.627 [INFO][4797] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Namespace="calico-system" Pod="csi-node-driver-rxzfq" WorkloadEndpoint="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a7754c0-14d0-4084-9006-948f71afe7d1", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203", Pod:"csi-node-driver-rxzfq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7a51ef431c5", MAC:"da:8e:b2:29:f1:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:36.660563 containerd[1982]: 2024-10-09 07:18:36.649 [INFO][4797] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203" Namespace="calico-system" Pod="csi-node-driver-rxzfq" WorkloadEndpoint="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:36.702595 containerd[1982]: time="2024-10-09T07:18:36.702273516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:36.702595 containerd[1982]: time="2024-10-09T07:18:36.702345649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:36.702595 containerd[1982]: time="2024-10-09T07:18:36.702432614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:36.702595 containerd[1982]: time="2024-10-09T07:18:36.702449203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:36.745907 systemd[1]: Started cri-containerd-1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203.scope - libcontainer container 1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203. 
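With csi-node-driver-rxzfq the node has now been handed 192.168.60.1 and 192.168.60.2 out of its affine block 192.168.60.0/26, which holds 64 addresses. A small net/netip check of those numbers, using only the prefix and addresses that appear in the log:

    // Sanity-checks the IPAM block arithmetic from the lines above.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.60.0/26")
        size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses in the block
        fmt.Printf("block %s holds %d addresses\n", block, size)

        for _, s := range []string{"192.168.60.1", "192.168.60.2"} { // coredns, csi-node-driver
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s in block: %v\n", addr, block.Contains(addr))
        }
    }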
Oct 9 07:18:36.783783 containerd[1982]: time="2024-10-09T07:18:36.783743860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rxzfq,Uid:7a7754c0-14d0-4084-9006-948f71afe7d1,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203\"" Oct 9 07:18:36.786730 containerd[1982]: time="2024-10-09T07:18:36.786688961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:18:37.278120 containerd[1982]: time="2024-10-09T07:18:37.278072974Z" level=info msg="StopPodSandbox for \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\"" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.336 [INFO][4883] k8s.go 608: Cleaning up netns ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.336 [INFO][4883] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" iface="eth0" netns="/var/run/netns/cni-8a1d969f-43d1-533f-6377-66e22171e4b9" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.337 [INFO][4883] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" iface="eth0" netns="/var/run/netns/cni-8a1d969f-43d1-533f-6377-66e22171e4b9" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.338 [INFO][4883] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" iface="eth0" netns="/var/run/netns/cni-8a1d969f-43d1-533f-6377-66e22171e4b9" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.338 [INFO][4883] k8s.go 615: Releasing IP address(es) ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.338 [INFO][4883] utils.go 188: Calico CNI releasing IP address ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.363 [INFO][4889] ipam_plugin.go 417: Releasing address using handleID ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" HandleID="k8s-pod-network.810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.363 [INFO][4889] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.363 [INFO][4889] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.369 [WARNING][4889] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" HandleID="k8s-pod-network.810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.369 [INFO][4889] ipam_plugin.go 445: Releasing address using workloadID ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" HandleID="k8s-pod-network.810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.371 [INFO][4889] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:37.383644 containerd[1982]: 2024-10-09 07:18:37.381 [INFO][4883] k8s.go 621: Teardown processing complete. ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:37.384425 containerd[1982]: time="2024-10-09T07:18:37.383828585Z" level=info msg="TearDown network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\" successfully" Oct 9 07:18:37.387659 containerd[1982]: time="2024-10-09T07:18:37.386399752Z" level=info msg="StopPodSandbox for \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\" returns successfully" Oct 9 07:18:37.387876 systemd[1]: run-netns-cni\x2d8a1d969f\x2d43d1\x2d533f\x2d6377\x2d66e22171e4b9.mount: Deactivated successfully. Oct 9 07:18:37.390066 containerd[1982]: time="2024-10-09T07:18:37.389708495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d6d587cd-v94sc,Uid:929a21d5-9f89-46e5-b6f2-e6f1adb14ec5,Namespace:calico-system,Attempt:1,}" Oct 9 07:18:37.573345 systemd-networkd[1823]: cali2a7d158388f: Link UP Oct 9 07:18:37.577988 systemd-networkd[1823]: cali2a7d158388f: Gained carrier Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.455 [INFO][4896] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0 calico-kube-controllers-67d6d587cd- calico-system 929a21d5-9f89-46e5-b6f2-e6f1adb14ec5 751 0 2024-10-09 07:18:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67d6d587cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-194 calico-kube-controllers-67d6d587cd-v94sc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2a7d158388f [] []}} ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Namespace="calico-system" Pod="calico-kube-controllers-67d6d587cd-v94sc" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.455 [INFO][4896] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Namespace="calico-system" Pod="calico-kube-controllers-67d6d587cd-v94sc" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.500 [INFO][4906] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" HandleID="k8s-pod-network.2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.526 [INFO][4906] ipam_plugin.go 270: Auto assigning IP ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" HandleID="k8s-pod-network.2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265ef0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-194", "pod":"calico-kube-controllers-67d6d587cd-v94sc", "timestamp":"2024-10-09 07:18:37.500691967 +0000 UTC"}, Hostname:"ip-172-31-23-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.527 [INFO][4906] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.527 [INFO][4906] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.527 [INFO][4906] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-194' Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.529 [INFO][4906] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" host="ip-172-31-23-194" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.535 [INFO][4906] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-194" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.541 [INFO][4906] ipam.go 489: Trying affinity for 192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.543 [INFO][4906] ipam.go 155: Attempting to load block cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.546 [INFO][4906] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.546 [INFO][4906] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" host="ip-172-31-23-194" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.548 [INFO][4906] ipam.go 1685: Creating new handle: k8s-pod-network.2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409 Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.558 [INFO][4906] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" host="ip-172-31-23-194" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.566 [INFO][4906] ipam.go 1216: Successfully claimed IPs: [192.168.60.3/26] block=192.168.60.0/26 handle="k8s-pod-network.2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" host="ip-172-31-23-194" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.567 [INFO][4906] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.60.3/26] handle="k8s-pod-network.2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" host="ip-172-31-23-194" Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.567 [INFO][4906] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:37.607503 containerd[1982]: 2024-10-09 07:18:37.567 [INFO][4906] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.3/26] IPv6=[] ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" HandleID="k8s-pod-network.2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.611998 containerd[1982]: 2024-10-09 07:18:37.569 [INFO][4896] k8s.go 386: Populated endpoint ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Namespace="calico-system" Pod="calico-kube-controllers-67d6d587cd-v94sc" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0", GenerateName:"calico-kube-controllers-67d6d587cd-", Namespace:"calico-system", SelfLink:"", UID:"929a21d5-9f89-46e5-b6f2-e6f1adb14ec5", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d6d587cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"", Pod:"calico-kube-controllers-67d6d587cd-v94sc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2a7d158388f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:37.611998 containerd[1982]: 2024-10-09 07:18:37.569 [INFO][4896] k8s.go 387: Calico CNI using IPs: [192.168.60.3/32] ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Namespace="calico-system" Pod="calico-kube-controllers-67d6d587cd-v94sc" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.611998 containerd[1982]: 2024-10-09 07:18:37.569 [INFO][4896] dataplane_linux.go 68: Setting the host side veth name to cali2a7d158388f ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Namespace="calico-system" Pod="calico-kube-controllers-67d6d587cd-v94sc" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.611998 containerd[1982]: 2024-10-09 07:18:37.580 [INFO][4896] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Namespace="calico-system" 
Pod="calico-kube-controllers-67d6d587cd-v94sc" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.611998 containerd[1982]: 2024-10-09 07:18:37.580 [INFO][4896] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Namespace="calico-system" Pod="calico-kube-controllers-67d6d587cd-v94sc" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0", GenerateName:"calico-kube-controllers-67d6d587cd-", Namespace:"calico-system", SelfLink:"", UID:"929a21d5-9f89-46e5-b6f2-e6f1adb14ec5", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d6d587cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409", Pod:"calico-kube-controllers-67d6d587cd-v94sc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2a7d158388f", MAC:"42:36:55:40:f1:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:37.611998 containerd[1982]: 2024-10-09 07:18:37.597 [INFO][4896] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409" Namespace="calico-system" Pod="calico-kube-controllers-67d6d587cd-v94sc" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:37.658479 containerd[1982]: time="2024-10-09T07:18:37.657800686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:37.658479 containerd[1982]: time="2024-10-09T07:18:37.657861600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:37.658479 containerd[1982]: time="2024-10-09T07:18:37.657893082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:37.658479 containerd[1982]: time="2024-10-09T07:18:37.657909172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:37.688758 systemd[1]: Started cri-containerd-2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409.scope - libcontainer container 2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409. 
Oct 9 07:18:37.759689 containerd[1982]: time="2024-10-09T07:18:37.759646109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d6d587cd-v94sc,Uid:929a21d5-9f89-46e5-b6f2-e6f1adb14ec5,Namespace:calico-system,Attempt:1,} returns sandbox id \"2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409\"" Oct 9 07:18:38.277958 containerd[1982]: time="2024-10-09T07:18:38.276915778Z" level=info msg="StopPodSandbox for \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\"" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.456 [INFO][4984] k8s.go 608: Cleaning up netns ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.456 [INFO][4984] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" iface="eth0" netns="/var/run/netns/cni-ed31cb9e-8349-2664-ceb3-0bb80b5a4745" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.465 [INFO][4984] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" iface="eth0" netns="/var/run/netns/cni-ed31cb9e-8349-2664-ceb3-0bb80b5a4745" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.466 [INFO][4984] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" iface="eth0" netns="/var/run/netns/cni-ed31cb9e-8349-2664-ceb3-0bb80b5a4745" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.466 [INFO][4984] k8s.go 615: Releasing IP address(es) ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.466 [INFO][4984] utils.go 188: Calico CNI releasing IP address ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.519 [INFO][4992] ipam_plugin.go 417: Releasing address using handleID ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" HandleID="k8s-pod-network.4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.519 [INFO][4992] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.519 [INFO][4992] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.529 [WARNING][4992] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" HandleID="k8s-pod-network.4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.529 [INFO][4992] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" HandleID="k8s-pod-network.4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.536 [INFO][4992] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:18:38.546445 containerd[1982]: 2024-10-09 07:18:38.541 [INFO][4984] k8s.go 621: Teardown processing complete. ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:38.549035 systemd[1]: run-netns-cni\x2ded31cb9e\x2d8349\x2d2664\x2dceb3\x2d0bb80b5a4745.mount: Deactivated successfully. Oct 9 07:18:38.555100 containerd[1982]: time="2024-10-09T07:18:38.554055420Z" level=info msg="TearDown network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\" successfully" Oct 9 07:18:38.555302 containerd[1982]: time="2024-10-09T07:18:38.555260546Z" level=info msg="StopPodSandbox for \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\" returns successfully" Oct 9 07:18:38.556707 containerd[1982]: time="2024-10-09T07:18:38.556672260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-54ntm,Uid:24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d,Namespace:kube-system,Attempt:1,}" Oct 9 07:18:38.699779 systemd-networkd[1823]: cali7a51ef431c5: Gained IPv6LL Oct 9 07:18:38.769770 containerd[1982]: time="2024-10-09T07:18:38.769677821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:38.773715 containerd[1982]: time="2024-10-09T07:18:38.773595026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:18:38.774768 containerd[1982]: time="2024-10-09T07:18:38.774731429Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:38.781186 containerd[1982]: time="2024-10-09T07:18:38.781071009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:38.782312 containerd[1982]: time="2024-10-09T07:18:38.782259785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.993829833s" Oct 9 07:18:38.783468 containerd[1982]: time="2024-10-09T07:18:38.783130923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:18:38.784956 containerd[1982]: time="2024-10-09T07:18:38.784440836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:18:38.786937 containerd[1982]: time="2024-10-09T07:18:38.786604435Z" level=info msg="CreateContainer within sandbox \"1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:18:38.827731 systemd-networkd[1823]: cali2a7d158388f: Gained IPv6LL Oct 9 07:18:38.877674 containerd[1982]: time="2024-10-09T07:18:38.877478379Z" level=info msg="CreateContainer within sandbox \"1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d1467edf4ee9f7d409f929d899914cdc674231556e21748f0288a1f9c6f97b20\"" Oct 9 07:18:38.887377 containerd[1982]: 
time="2024-10-09T07:18:38.885970944Z" level=info msg="StartContainer for \"d1467edf4ee9f7d409f929d899914cdc674231556e21748f0288a1f9c6f97b20\"" Oct 9 07:18:39.041017 systemd[1]: run-containerd-runc-k8s.io-d1467edf4ee9f7d409f929d899914cdc674231556e21748f0288a1f9c6f97b20-runc.PpSHsi.mount: Deactivated successfully. Oct 9 07:18:39.056677 systemd[1]: Started cri-containerd-d1467edf4ee9f7d409f929d899914cdc674231556e21748f0288a1f9c6f97b20.scope - libcontainer container d1467edf4ee9f7d409f929d899914cdc674231556e21748f0288a1f9c6f97b20. Oct 9 07:18:39.115655 systemd-networkd[1823]: cali004570c2160: Link UP Oct 9 07:18:39.119322 systemd-networkd[1823]: cali004570c2160: Gained carrier Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.778 [INFO][5004] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0 coredns-76f75df574- kube-system 24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d 759 0 2024-10-09 07:18:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-194 coredns-76f75df574-54ntm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali004570c2160 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Namespace="kube-system" Pod="coredns-76f75df574-54ntm" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.779 [INFO][5004] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Namespace="kube-system" Pod="coredns-76f75df574-54ntm" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.876 [INFO][5010] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" HandleID="k8s-pod-network.ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.942 [INFO][5010] ipam_plugin.go 270: Auto assigning IP ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" HandleID="k8s-pod-network.ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000468d30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-194", "pod":"coredns-76f75df574-54ntm", "timestamp":"2024-10-09 07:18:38.87691699 +0000 UTC"}, Hostname:"ip-172-31-23-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.943 [INFO][5010] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.943 [INFO][5010] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.943 [INFO][5010] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-194' Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.948 [INFO][5010] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" host="ip-172-31-23-194" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.960 [INFO][5010] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-194" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.991 [INFO][5010] ipam.go 489: Trying affinity for 192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:38.996 [INFO][5010] ipam.go 155: Attempting to load block cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:39.022 [INFO][5010] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:39.022 [INFO][5010] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" host="ip-172-31-23-194" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:39.053 [INFO][5010] ipam.go 1685: Creating new handle: k8s-pod-network.ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19 Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:39.079 [INFO][5010] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" host="ip-172-31-23-194" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:39.097 [INFO][5010] ipam.go 1216: Successfully claimed IPs: [192.168.60.4/26] block=192.168.60.0/26 handle="k8s-pod-network.ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" host="ip-172-31-23-194" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:39.097 [INFO][5010] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.4/26] handle="k8s-pod-network.ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" host="ip-172-31-23-194" Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:39.097 [INFO][5010] ipam_plugin.go 379: Released host-wide IPAM lock. 
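
Aside, not from the log: the IPAM sequence above (look up affinities for ip-172-31-23-194, try affinity for 192.168.60.0/26, load the block, assign one address, write the block to claim it) boils down to walking the node's affine /26 for the first unallocated address, which here lands on 192.168.60.4. Below is a minimal Go model of that walk, not Calico's implementation; it assumes 192.168.60.1 is also in use, while .2 and .3 are the endpoints seen earlier in this log.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block for the first address that is inside the prefix
// and not yet allocated, skipping the network address itself.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.60.0/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.60.1"): true, // assumed in use
		netip.MustParseAddr("192.168.60.2"): true, // csi-node-driver-rxzfq
		netip.MustParseAddr("192.168.60.3"): true, // calico-kube-controllers-67d6d587cd-v94sc
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("would assign:", a) // 192.168.60.4, as claimed for coredns above
	}
}
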
Oct 9 07:18:39.182663 containerd[1982]: 2024-10-09 07:18:39.097 [INFO][5010] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.4/26] IPv6=[] ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" HandleID="k8s-pod-network.ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:39.185555 containerd[1982]: 2024-10-09 07:18:39.106 [INFO][5004] k8s.go 386: Populated endpoint ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Namespace="kube-system" Pod="coredns-76f75df574-54ntm" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"", Pod:"coredns-76f75df574-54ntm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali004570c2160", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:39.185555 containerd[1982]: 2024-10-09 07:18:39.107 [INFO][5004] k8s.go 387: Calico CNI using IPs: [192.168.60.4/32] ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Namespace="kube-system" Pod="coredns-76f75df574-54ntm" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:39.185555 containerd[1982]: 2024-10-09 07:18:39.107 [INFO][5004] dataplane_linux.go 68: Setting the host side veth name to cali004570c2160 ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Namespace="kube-system" Pod="coredns-76f75df574-54ntm" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:39.185555 containerd[1982]: 2024-10-09 07:18:39.120 [INFO][5004] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Namespace="kube-system" Pod="coredns-76f75df574-54ntm" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:39.185555 containerd[1982]: 2024-10-09 
07:18:39.121 [INFO][5004] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Namespace="kube-system" Pod="coredns-76f75df574-54ntm" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19", Pod:"coredns-76f75df574-54ntm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali004570c2160", MAC:"5a:7c:1f:d7:bc:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:39.185555 containerd[1982]: 2024-10-09 07:18:39.167 [INFO][5004] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19" Namespace="kube-system" Pod="coredns-76f75df574-54ntm" WorkloadEndpoint="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:39.209950 containerd[1982]: time="2024-10-09T07:18:39.209805916Z" level=info msg="StartContainer for \"d1467edf4ee9f7d409f929d899914cdc674231556e21748f0288a1f9c6f97b20\" returns successfully" Oct 9 07:18:39.303200 containerd[1982]: time="2024-10-09T07:18:39.297408037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:39.303200 containerd[1982]: time="2024-10-09T07:18:39.297484640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:39.303200 containerd[1982]: time="2024-10-09T07:18:39.297523773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:39.303200 containerd[1982]: time="2024-10-09T07:18:39.297545776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:39.345964 systemd[1]: Started cri-containerd-ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19.scope - libcontainer container ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19. Oct 9 07:18:39.405419 containerd[1982]: time="2024-10-09T07:18:39.404423639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-54ntm,Uid:24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d,Namespace:kube-system,Attempt:1,} returns sandbox id \"ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19\"" Oct 9 07:18:39.410848 containerd[1982]: time="2024-10-09T07:18:39.410768887Z" level=info msg="CreateContainer within sandbox \"ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:18:39.438135 containerd[1982]: time="2024-10-09T07:18:39.438082396Z" level=info msg="CreateContainer within sandbox \"ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e57e6b1de02e073c0d5d4d5fcf5c289d7f5dc0764e01992fc61633a2de48f6d0\"" Oct 9 07:18:39.439274 containerd[1982]: time="2024-10-09T07:18:39.439054324Z" level=info msg="StartContainer for \"e57e6b1de02e073c0d5d4d5fcf5c289d7f5dc0764e01992fc61633a2de48f6d0\"" Oct 9 07:18:39.482341 systemd[1]: Started cri-containerd-e57e6b1de02e073c0d5d4d5fcf5c289d7f5dc0764e01992fc61633a2de48f6d0.scope - libcontainer container e57e6b1de02e073c0d5d4d5fcf5c289d7f5dc0764e01992fc61633a2de48f6d0. Oct 9 07:18:39.521065 containerd[1982]: time="2024-10-09T07:18:39.520977001Z" level=info msg="StartContainer for \"e57e6b1de02e073c0d5d4d5fcf5c289d7f5dc0764e01992fc61633a2de48f6d0\" returns successfully" Oct 9 07:18:39.997880 kubelet[3375]: I1009 07:18:39.997761 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-54ntm" podStartSLOduration=37.997417567 podStartE2EDuration="37.997417567s" podCreationTimestamp="2024-10-09 07:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:18:39.99382237 +0000 UTC m=+50.940908802" watchObservedRunningTime="2024-10-09 07:18:39.997417567 +0000 UTC m=+50.944503998" Oct 9 07:18:40.685706 systemd-networkd[1823]: cali004570c2160: Gained IPv6LL Oct 9 07:18:41.406774 systemd[1]: Started sshd@7-172.31.23.194:22-139.178.89.65:45546.service - OpenSSH per-connection server daemon (139.178.89.65:45546). Oct 9 07:18:41.679449 sshd[5166]: Accepted publickey for core from 139.178.89.65 port 45546 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:18:41.685022 sshd[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:41.725213 systemd-logind[1961]: New session 8 of user core. Oct 9 07:18:41.728681 systemd[1]: Started session-8.scope - Session 8 of User core. 
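
Aside, not from the log: for the coredns pod in the kubelet entry above, firstStartedPulling and lastFinishedPulling are the zero time, so the reported podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. A short Go check using the two timestamps from that entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Timestamps copied from the kubelet pod_startup_latency_tracker entry; errors ignored for brevity.
	created, _ := time.Parse(layout, "2024-10-09 07:18:02 +0000 UTC")
	running, _ := time.Parse(layout, "2024-10-09 07:18:39.997417567 +0000 UTC")
	fmt.Println(running.Sub(created)) // 37.997417567s, the podStartSLOduration kubelet reports
}
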
Oct 9 07:18:42.372527 containerd[1982]: time="2024-10-09T07:18:42.372474601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:42.378879 containerd[1982]: time="2024-10-09T07:18:42.377934259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:18:42.385784 containerd[1982]: time="2024-10-09T07:18:42.384573019Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:42.393962 containerd[1982]: time="2024-10-09T07:18:42.393850691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:42.396514 containerd[1982]: time="2024-10-09T07:18:42.396329304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.610202782s" Oct 9 07:18:42.396514 containerd[1982]: time="2024-10-09T07:18:42.396396801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:18:42.403664 containerd[1982]: time="2024-10-09T07:18:42.403055332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:18:42.478096 containerd[1982]: time="2024-10-09T07:18:42.476897764Z" level=info msg="CreateContainer within sandbox \"2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:18:42.527208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4077940544.mount: Deactivated successfully. Oct 9 07:18:42.530223 containerd[1982]: time="2024-10-09T07:18:42.526138273Z" level=info msg="CreateContainer within sandbox \"2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b0500e55b98a1e94ab281401d1b6655bb3dbd7de5193b75343e10034fdfe2efa\"" Oct 9 07:18:42.531554 containerd[1982]: time="2024-10-09T07:18:42.531517349Z" level=info msg="StartContainer for \"b0500e55b98a1e94ab281401d1b6655bb3dbd7de5193b75343e10034fdfe2efa\"" Oct 9 07:18:42.644595 systemd[1]: Started cri-containerd-b0500e55b98a1e94ab281401d1b6655bb3dbd7de5193b75343e10034fdfe2efa.scope - libcontainer container b0500e55b98a1e94ab281401d1b6655bb3dbd7de5193b75343e10034fdfe2efa. Oct 9 07:18:42.809722 containerd[1982]: time="2024-10-09T07:18:42.808115956Z" level=info msg="StartContainer for \"b0500e55b98a1e94ab281401d1b6655bb3dbd7de5193b75343e10034fdfe2efa\" returns successfully" Oct 9 07:18:43.000186 sshd[5166]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:43.014325 systemd[1]: sshd@7-172.31.23.194:22-139.178.89.65:45546.service: Deactivated successfully. Oct 9 07:18:43.019109 systemd[1]: session-8.scope: Deactivated successfully. 
Oct 9 07:18:43.022568 systemd-logind[1961]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:18:43.026925 systemd-logind[1961]: Removed session 8. Oct 9 07:18:43.596396 ntpd[1953]: Listen normally on 7 vxlan.calico 192.168.60.0:123 Oct 9 07:18:43.598720 ntpd[1953]: 9 Oct 07:18:43 ntpd[1953]: Listen normally on 7 vxlan.calico 192.168.60.0:123 Oct 9 07:18:43.598720 ntpd[1953]: 9 Oct 07:18:43 ntpd[1953]: Listen normally on 8 vxlan.calico [fe80::64bb:2ff:fe5e:8835%4]:123 Oct 9 07:18:43.598720 ntpd[1953]: 9 Oct 07:18:43 ntpd[1953]: Listen normally on 9 cali907549639f6 [fe80::ecee:eeff:feee:eeee%7]:123 Oct 9 07:18:43.598720 ntpd[1953]: 9 Oct 07:18:43 ntpd[1953]: Listen normally on 10 cali7a51ef431c5 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 9 07:18:43.598720 ntpd[1953]: 9 Oct 07:18:43 ntpd[1953]: Listen normally on 11 cali2a7d158388f [fe80::ecee:eeff:feee:eeee%9]:123 Oct 9 07:18:43.598720 ntpd[1953]: 9 Oct 07:18:43 ntpd[1953]: Listen normally on 12 cali004570c2160 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 9 07:18:43.596488 ntpd[1953]: Listen normally on 8 vxlan.calico [fe80::64bb:2ff:fe5e:8835%4]:123 Oct 9 07:18:43.596542 ntpd[1953]: Listen normally on 9 cali907549639f6 [fe80::ecee:eeff:feee:eeee%7]:123 Oct 9 07:18:43.596576 ntpd[1953]: Listen normally on 10 cali7a51ef431c5 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 9 07:18:43.596617 ntpd[1953]: Listen normally on 11 cali2a7d158388f [fe80::ecee:eeff:feee:eeee%9]:123 Oct 9 07:18:43.597879 ntpd[1953]: Listen normally on 12 cali004570c2160 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 9 07:18:44.205810 kubelet[3375]: I1009 07:18:44.205052 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67d6d587cd-v94sc" podStartSLOduration=29.569676578 podStartE2EDuration="34.204995282s" podCreationTimestamp="2024-10-09 07:18:10 +0000 UTC" firstStartedPulling="2024-10-09 07:18:37.761421927 +0000 UTC m=+48.708508341" lastFinishedPulling="2024-10-09 07:18:42.396740625 +0000 UTC m=+53.343827045" observedRunningTime="2024-10-09 07:18:43.084775405 +0000 UTC m=+54.031861828" watchObservedRunningTime="2024-10-09 07:18:44.204995282 +0000 UTC m=+55.152081724" Oct 9 07:18:44.324606 containerd[1982]: time="2024-10-09T07:18:44.324556196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:44.326748 containerd[1982]: time="2024-10-09T07:18:44.326657512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 07:18:44.329607 containerd[1982]: time="2024-10-09T07:18:44.329345383Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:44.333716 containerd[1982]: time="2024-10-09T07:18:44.333652334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:44.335101 containerd[1982]: time="2024-10-09T07:18:44.334472717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.931366726s" Oct 9 07:18:44.335101 containerd[1982]: time="2024-10-09T07:18:44.334526416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 07:18:44.342659 containerd[1982]: time="2024-10-09T07:18:44.342613914Z" level=info msg="CreateContainer within sandbox \"1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 07:18:44.379936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712723359.mount: Deactivated successfully. Oct 9 07:18:44.382871 containerd[1982]: time="2024-10-09T07:18:44.382766158Z" level=info msg="CreateContainer within sandbox \"1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0b37fc4bb57ed4a9b66ee71a6ff3e769caabfbdcac112dc4964bd2db408977fc\"" Oct 9 07:18:44.384478 containerd[1982]: time="2024-10-09T07:18:44.383839874Z" level=info msg="StartContainer for \"0b37fc4bb57ed4a9b66ee71a6ff3e769caabfbdcac112dc4964bd2db408977fc\"" Oct 9 07:18:44.459992 systemd[1]: Started cri-containerd-0b37fc4bb57ed4a9b66ee71a6ff3e769caabfbdcac112dc4964bd2db408977fc.scope - libcontainer container 0b37fc4bb57ed4a9b66ee71a6ff3e769caabfbdcac112dc4964bd2db408977fc. Oct 9 07:18:44.527470 containerd[1982]: time="2024-10-09T07:18:44.526625963Z" level=info msg="StartContainer for \"0b37fc4bb57ed4a9b66ee71a6ff3e769caabfbdcac112dc4964bd2db408977fc\" returns successfully" Oct 9 07:18:45.078528 kubelet[3375]: I1009 07:18:45.077620 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-rxzfq" podStartSLOduration=27.52696573 podStartE2EDuration="35.077561974s" podCreationTimestamp="2024-10-09 07:18:10 +0000 UTC" firstStartedPulling="2024-10-09 07:18:36.78511195 +0000 UTC m=+47.732198365" lastFinishedPulling="2024-10-09 07:18:44.335708186 +0000 UTC m=+55.282794609" observedRunningTime="2024-10-09 07:18:45.07547432 +0000 UTC m=+56.022560753" watchObservedRunningTime="2024-10-09 07:18:45.077561974 +0000 UTC m=+56.024648405" Oct 9 07:18:45.655976 kubelet[3375]: I1009 07:18:45.655898 3375 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 07:18:45.660687 kubelet[3375]: I1009 07:18:45.660654 3375 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 07:18:48.036142 systemd[1]: Started sshd@8-172.31.23.194:22-139.178.89.65:56196.service - OpenSSH per-connection server daemon (139.178.89.65:56196). Oct 9 07:18:48.278409 sshd[5335]: Accepted publickey for core from 139.178.89.65 port 56196 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:18:48.280218 sshd[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:48.290847 systemd-logind[1961]: New session 9 of user core. Oct 9 07:18:48.297887 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 9 07:18:48.599418 sshd[5335]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:48.626061 systemd[1]: sshd@8-172.31.23.194:22-139.178.89.65:56196.service: Deactivated successfully. Oct 9 07:18:48.644136 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:18:48.648293 systemd-logind[1961]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:18:48.650143 systemd-logind[1961]: Removed session 9. Oct 9 07:18:49.307141 containerd[1982]: time="2024-10-09T07:18:49.307007308Z" level=info msg="StopPodSandbox for \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\"" Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.450 [WARNING][5362] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0", GenerateName:"calico-kube-controllers-67d6d587cd-", Namespace:"calico-system", SelfLink:"", UID:"929a21d5-9f89-46e5-b6f2-e6f1adb14ec5", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d6d587cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409", Pod:"calico-kube-controllers-67d6d587cd-v94sc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2a7d158388f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.451 [INFO][5362] k8s.go 608: Cleaning up netns ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.452 [INFO][5362] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" iface="eth0" netns="" Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.452 [INFO][5362] k8s.go 615: Releasing IP address(es) ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.452 [INFO][5362] utils.go 188: Calico CNI releasing IP address ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.507 [INFO][5370] ipam_plugin.go 417: Releasing address using handleID ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" HandleID="k8s-pod-network.810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.507 [INFO][5370] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.507 [INFO][5370] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.515 [WARNING][5370] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" HandleID="k8s-pod-network.810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.515 [INFO][5370] ipam_plugin.go 445: Releasing address using workloadID ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" HandleID="k8s-pod-network.810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.517 [INFO][5370] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:49.525168 containerd[1982]: 2024-10-09 07:18:49.523 [INFO][5362] k8s.go 621: Teardown processing complete. ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:49.528641 containerd[1982]: time="2024-10-09T07:18:49.525213662Z" level=info msg="TearDown network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\" successfully" Oct 9 07:18:49.528641 containerd[1982]: time="2024-10-09T07:18:49.525244934Z" level=info msg="StopPodSandbox for \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\" returns successfully" Oct 9 07:18:49.528641 containerd[1982]: time="2024-10-09T07:18:49.526832402Z" level=info msg="RemovePodSandbox for \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\"" Oct 9 07:18:49.528641 containerd[1982]: time="2024-10-09T07:18:49.526873279Z" level=info msg="Forcibly stopping sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\"" Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.609 [WARNING][5388] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0", GenerateName:"calico-kube-controllers-67d6d587cd-", Namespace:"calico-system", SelfLink:"", UID:"929a21d5-9f89-46e5-b6f2-e6f1adb14ec5", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d6d587cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"2969bc45c508aace6fed64366517a0a8bf3c963326d6d07cb5dc94eb3efdf409", Pod:"calico-kube-controllers-67d6d587cd-v94sc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2a7d158388f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.609 [INFO][5388] k8s.go 608: Cleaning up netns ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.609 [INFO][5388] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" iface="eth0" netns="" Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.609 [INFO][5388] k8s.go 615: Releasing IP address(es) ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.609 [INFO][5388] utils.go 188: Calico CNI releasing IP address ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.636 [INFO][5395] ipam_plugin.go 417: Releasing address using handleID ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" HandleID="k8s-pod-network.810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.636 [INFO][5395] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.637 [INFO][5395] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.644 [WARNING][5395] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" HandleID="k8s-pod-network.810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.644 [INFO][5395] ipam_plugin.go 445: Releasing address using workloadID ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" HandleID="k8s-pod-network.810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Workload="ip--172--31--23--194-k8s-calico--kube--controllers--67d6d587cd--v94sc-eth0" Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.645 [INFO][5395] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:49.650245 containerd[1982]: 2024-10-09 07:18:49.648 [INFO][5388] k8s.go 621: Teardown processing complete. ContainerID="810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283" Oct 9 07:18:49.650245 containerd[1982]: time="2024-10-09T07:18:49.650084798Z" level=info msg="TearDown network for sandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\" successfully" Oct 9 07:18:49.656876 containerd[1982]: time="2024-10-09T07:18:49.656820249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:18:49.657038 containerd[1982]: time="2024-10-09T07:18:49.656909551Z" level=info msg="RemovePodSandbox \"810465f96e8561f2914441aa5079d015bd7e6a64e321eb5765b8b008f26b2283\" returns successfully" Oct 9 07:18:49.657678 containerd[1982]: time="2024-10-09T07:18:49.657623395Z" level=info msg="StopPodSandbox for \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\"" Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.706 [WARNING][5413] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a7754c0-14d0-4084-9006-948f71afe7d1", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203", Pod:"csi-node-driver-rxzfq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7a51ef431c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.706 [INFO][5413] k8s.go 608: Cleaning up netns ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.706 [INFO][5413] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" iface="eth0" netns="" Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.707 [INFO][5413] k8s.go 615: Releasing IP address(es) ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.707 [INFO][5413] utils.go 188: Calico CNI releasing IP address ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.734 [INFO][5419] ipam_plugin.go 417: Releasing address using handleID ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" HandleID="k8s-pod-network.e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.734 [INFO][5419] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.734 [INFO][5419] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.741 [WARNING][5419] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" HandleID="k8s-pod-network.e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.741 [INFO][5419] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" HandleID="k8s-pod-network.e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.744 [INFO][5419] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:49.748367 containerd[1982]: 2024-10-09 07:18:49.746 [INFO][5413] k8s.go 621: Teardown processing complete. ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:49.750330 containerd[1982]: time="2024-10-09T07:18:49.748532896Z" level=info msg="TearDown network for sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\" successfully" Oct 9 07:18:49.750330 containerd[1982]: time="2024-10-09T07:18:49.748580012Z" level=info msg="StopPodSandbox for \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\" returns successfully" Oct 9 07:18:49.750330 containerd[1982]: time="2024-10-09T07:18:49.749437578Z" level=info msg="RemovePodSandbox for \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\"" Oct 9 07:18:49.750330 containerd[1982]: time="2024-10-09T07:18:49.749497201Z" level=info msg="Forcibly stopping sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\"" Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.795 [WARNING][5438] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a7754c0-14d0-4084-9006-948f71afe7d1", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"1a568dd52bc98a08ad89596e24d97cff83c9d6e172c531af64f704f67955b203", Pod:"csi-node-driver-rxzfq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7a51ef431c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.796 [INFO][5438] k8s.go 608: Cleaning up netns ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.796 [INFO][5438] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" iface="eth0" netns="" Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.796 [INFO][5438] k8s.go 615: Releasing IP address(es) ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.796 [INFO][5438] utils.go 188: Calico CNI releasing IP address ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.840 [INFO][5444] ipam_plugin.go 417: Releasing address using handleID ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" HandleID="k8s-pod-network.e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.840 [INFO][5444] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.840 [INFO][5444] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.856 [WARNING][5444] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" HandleID="k8s-pod-network.e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.857 [INFO][5444] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" HandleID="k8s-pod-network.e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Workload="ip--172--31--23--194-k8s-csi--node--driver--rxzfq-eth0" Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.862 [INFO][5444] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:49.868376 containerd[1982]: 2024-10-09 07:18:49.866 [INFO][5438] k8s.go 621: Teardown processing complete. ContainerID="e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b" Oct 9 07:18:49.870956 containerd[1982]: time="2024-10-09T07:18:49.868479811Z" level=info msg="TearDown network for sandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\" successfully" Oct 9 07:18:49.876828 containerd[1982]: time="2024-10-09T07:18:49.876676104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:18:49.877440 containerd[1982]: time="2024-10-09T07:18:49.876860116Z" level=info msg="RemovePodSandbox \"e9e87c033c0ef08591e323f23b6df1bcb88fe50ff950b43cfec09a6acc727b8b\" returns successfully" Oct 9 07:18:49.879001 containerd[1982]: time="2024-10-09T07:18:49.878956560Z" level=info msg="StopPodSandbox for \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\"" Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.952 [WARNING][5463] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19", Pod:"coredns-76f75df574-54ntm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali004570c2160", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.952 [INFO][5463] k8s.go 608: Cleaning up netns ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.952 [INFO][5463] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" iface="eth0" netns="" Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.953 [INFO][5463] k8s.go 615: Releasing IP address(es) ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.953 [INFO][5463] utils.go 188: Calico CNI releasing IP address ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.991 [INFO][5470] ipam_plugin.go 417: Releasing address using handleID ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" HandleID="k8s-pod-network.4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.991 [INFO][5470] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.991 [INFO][5470] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.997 [WARNING][5470] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" HandleID="k8s-pod-network.4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.998 [INFO][5470] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" HandleID="k8s-pod-network.4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:49.999 [INFO][5470] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:50.003245 containerd[1982]: 2024-10-09 07:18:50.001 [INFO][5463] k8s.go 621: Teardown processing complete. ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:50.004319 containerd[1982]: time="2024-10-09T07:18:50.003293890Z" level=info msg="TearDown network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\" successfully" Oct 9 07:18:50.004319 containerd[1982]: time="2024-10-09T07:18:50.003323643Z" level=info msg="StopPodSandbox for \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\" returns successfully" Oct 9 07:18:50.004319 containerd[1982]: time="2024-10-09T07:18:50.003875112Z" level=info msg="RemovePodSandbox for \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\"" Oct 9 07:18:50.004319 containerd[1982]: time="2024-10-09T07:18:50.003911482Z" level=info msg="Forcibly stopping sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\"" Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.065 [WARNING][5488] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"24d677fb-e0a9-4c5a-99f8-f0eb4ad0492d", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"ed9d175e17f2fb11548d9d57e05de4bcb9c92451d7012d8db1388b05658f5f19", Pod:"coredns-76f75df574-54ntm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali004570c2160", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.065 [INFO][5488] k8s.go 608: Cleaning up netns ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.065 [INFO][5488] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" iface="eth0" netns="" Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.066 [INFO][5488] k8s.go 615: Releasing IP address(es) ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.066 [INFO][5488] utils.go 188: Calico CNI releasing IP address ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.103 [INFO][5494] ipam_plugin.go 417: Releasing address using handleID ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" HandleID="k8s-pod-network.4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.103 [INFO][5494] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.103 [INFO][5494] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.112 [WARNING][5494] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" HandleID="k8s-pod-network.4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.112 [INFO][5494] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" HandleID="k8s-pod-network.4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--54ntm-eth0" Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.114 [INFO][5494] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:50.119592 containerd[1982]: 2024-10-09 07:18:50.116 [INFO][5488] k8s.go 621: Teardown processing complete. ContainerID="4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4" Oct 9 07:18:50.119592 containerd[1982]: time="2024-10-09T07:18:50.118052821Z" level=info msg="TearDown network for sandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\" successfully" Oct 9 07:18:50.123677 containerd[1982]: time="2024-10-09T07:18:50.123439463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:18:50.124165 containerd[1982]: time="2024-10-09T07:18:50.124131841Z" level=info msg="RemovePodSandbox \"4dabef55286f1bce925812105b502dcc1f8015496d769b4209d22148699413d4\" returns successfully" Oct 9 07:18:50.125087 containerd[1982]: time="2024-10-09T07:18:50.124754077Z" level=info msg="StopPodSandbox for \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\"" Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.174 [WARNING][5514] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a0a45f61-4fac-4f17-b5b5-47247f5a0090", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55", Pod:"coredns-76f75df574-h7rpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali907549639f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.175 [INFO][5514] k8s.go 608: Cleaning up netns ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.175 [INFO][5514] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" iface="eth0" netns="" Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.175 [INFO][5514] k8s.go 615: Releasing IP address(es) ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.175 [INFO][5514] utils.go 188: Calico CNI releasing IP address ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.205 [INFO][5520] ipam_plugin.go 417: Releasing address using handleID ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" HandleID="k8s-pod-network.1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.206 [INFO][5520] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.206 [INFO][5520] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.215 [WARNING][5520] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" HandleID="k8s-pod-network.1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.215 [INFO][5520] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" HandleID="k8s-pod-network.1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.217 [INFO][5520] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:50.220889 containerd[1982]: 2024-10-09 07:18:50.218 [INFO][5514] k8s.go 621: Teardown processing complete. ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:50.221794 containerd[1982]: time="2024-10-09T07:18:50.220939470Z" level=info msg="TearDown network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\" successfully" Oct 9 07:18:50.221794 containerd[1982]: time="2024-10-09T07:18:50.220969724Z" level=info msg="StopPodSandbox for \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\" returns successfully" Oct 9 07:18:50.221794 containerd[1982]: time="2024-10-09T07:18:50.221532366Z" level=info msg="RemovePodSandbox for \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\"" Oct 9 07:18:50.221794 containerd[1982]: time="2024-10-09T07:18:50.221567366Z" level=info msg="Forcibly stopping sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\"" Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.335 [WARNING][5539] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a0a45f61-4fac-4f17-b5b5-47247f5a0090", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"2dde1a5b13f836b0f384715608f31153b16a1c82adc23a6ba1fa7cc10c719a55", Pod:"coredns-76f75df574-h7rpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali907549639f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.335 [INFO][5539] k8s.go 608: Cleaning up netns ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.335 [INFO][5539] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" iface="eth0" netns="" Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.335 [INFO][5539] k8s.go 615: Releasing IP address(es) ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.335 [INFO][5539] utils.go 188: Calico CNI releasing IP address ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.365 [INFO][5545] ipam_plugin.go 417: Releasing address using handleID ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" HandleID="k8s-pod-network.1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.365 [INFO][5545] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.365 [INFO][5545] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.372 [WARNING][5545] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" HandleID="k8s-pod-network.1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.372 [INFO][5545] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" HandleID="k8s-pod-network.1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Workload="ip--172--31--23--194-k8s-coredns--76f75df574--h7rpt-eth0" Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.375 [INFO][5545] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:50.379055 containerd[1982]: 2024-10-09 07:18:50.377 [INFO][5539] k8s.go 621: Teardown processing complete. ContainerID="1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5" Oct 9 07:18:50.379055 containerd[1982]: time="2024-10-09T07:18:50.379026833Z" level=info msg="TearDown network for sandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\" successfully" Oct 9 07:18:50.384443 containerd[1982]: time="2024-10-09T07:18:50.384257599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:18:50.384443 containerd[1982]: time="2024-10-09T07:18:50.384333687Z" level=info msg="RemovePodSandbox \"1adf283c1f2dfda55d2141f759efdd61a1e0134f8fd9980c96dca1779b4dbdc5\" returns successfully" Oct 9 07:18:53.667886 systemd[1]: Started sshd@9-172.31.23.194:22-139.178.89.65:56198.service - OpenSSH per-connection server daemon (139.178.89.65:56198). Oct 9 07:18:53.894400 sshd[5570]: Accepted publickey for core from 139.178.89.65 port 56198 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:18:53.895867 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:53.901911 systemd-logind[1961]: New session 10 of user core. Oct 9 07:18:53.906138 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 07:18:54.198569 sshd[5570]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:54.209975 systemd[1]: sshd@9-172.31.23.194:22-139.178.89.65:56198.service: Deactivated successfully. Oct 9 07:18:54.214089 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:18:54.217469 systemd-logind[1961]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:18:54.229810 systemd[1]: Started sshd@10-172.31.23.194:22-139.178.89.65:56204.service - OpenSSH per-connection server daemon (139.178.89.65:56204). Oct 9 07:18:54.232427 systemd-logind[1961]: Removed session 10. Oct 9 07:18:54.396793 sshd[5590]: Accepted publickey for core from 139.178.89.65 port 56204 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:18:54.397786 sshd[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:54.403548 systemd-logind[1961]: New session 11 of user core. Oct 9 07:18:54.409584 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 07:18:54.745411 sshd[5590]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:54.763636 systemd-logind[1961]: Session 11 logged out. Waiting for processes to exit. 
Oct 9 07:18:54.765378 systemd[1]: sshd@10-172.31.23.194:22-139.178.89.65:56204.service: Deactivated successfully. Oct 9 07:18:54.774578 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:18:54.792839 systemd[1]: Started sshd@11-172.31.23.194:22-139.178.89.65:56208.service - OpenSSH per-connection server daemon (139.178.89.65:56208). Oct 9 07:18:54.797673 systemd-logind[1961]: Removed session 11. Oct 9 07:18:54.970275 sshd[5606]: Accepted publickey for core from 139.178.89.65 port 56208 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:18:54.972365 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:54.981824 systemd-logind[1961]: New session 12 of user core. Oct 9 07:18:54.988583 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:18:55.239441 sshd[5606]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:55.256445 systemd-logind[1961]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:18:55.256975 systemd[1]: sshd@11-172.31.23.194:22-139.178.89.65:56208.service: Deactivated successfully. Oct 9 07:18:55.271458 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:18:55.278998 systemd-logind[1961]: Removed session 12. Oct 9 07:19:00.285806 systemd[1]: Started sshd@12-172.31.23.194:22-139.178.89.65:36608.service - OpenSSH per-connection server daemon (139.178.89.65:36608). Oct 9 07:19:00.552627 sshd[5621]: Accepted publickey for core from 139.178.89.65 port 36608 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:00.553387 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:00.577753 systemd-logind[1961]: New session 13 of user core. Oct 9 07:19:00.607870 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:19:00.927501 sshd[5621]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:00.941629 systemd[1]: sshd@12-172.31.23.194:22-139.178.89.65:36608.service: Deactivated successfully. Oct 9 07:19:00.947801 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:19:00.953203 systemd-logind[1961]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:19:00.955467 systemd-logind[1961]: Removed session 13. Oct 9 07:19:05.963742 systemd[1]: Started sshd@13-172.31.23.194:22-139.178.89.65:53636.service - OpenSSH per-connection server daemon (139.178.89.65:53636). Oct 9 07:19:06.204030 sshd[5646]: Accepted publickey for core from 139.178.89.65 port 53636 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:06.211037 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:06.230239 systemd-logind[1961]: New session 14 of user core. Oct 9 07:19:06.238175 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:19:06.534634 sshd[5646]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:06.544011 systemd[1]: sshd@13-172.31.23.194:22-139.178.89.65:53636.service: Deactivated successfully. Oct 9 07:19:06.546630 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:19:06.547963 systemd-logind[1961]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:19:06.549315 systemd-logind[1961]: Removed session 14. Oct 9 07:19:11.575134 systemd[1]: Started sshd@14-172.31.23.194:22-139.178.89.65:53644.service - OpenSSH per-connection server daemon (139.178.89.65:53644). 
Oct 9 07:19:11.766536 sshd[5659]: Accepted publickey for core from 139.178.89.65 port 53644 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:11.770220 sshd[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:11.785426 systemd-logind[1961]: New session 15 of user core. Oct 9 07:19:11.789627 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:19:12.159034 sshd[5659]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:12.169293 systemd-logind[1961]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:19:12.171046 systemd[1]: sshd@14-172.31.23.194:22-139.178.89.65:53644.service: Deactivated successfully. Oct 9 07:19:12.175137 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 07:19:12.177719 systemd-logind[1961]: Removed session 15. Oct 9 07:19:17.197031 systemd[1]: Started sshd@15-172.31.23.194:22-139.178.89.65:45116.service - OpenSSH per-connection server daemon (139.178.89.65:45116). Oct 9 07:19:17.394726 sshd[5704]: Accepted publickey for core from 139.178.89.65 port 45116 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:17.396189 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:17.407049 systemd-logind[1961]: New session 16 of user core. Oct 9 07:19:17.413579 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 07:19:17.854775 sshd[5704]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:17.872831 systemd[1]: sshd@15-172.31.23.194:22-139.178.89.65:45116.service: Deactivated successfully. Oct 9 07:19:17.881812 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:19:17.883411 systemd-logind[1961]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:19:17.909438 systemd[1]: Started sshd@16-172.31.23.194:22-139.178.89.65:45132.service - OpenSSH per-connection server daemon (139.178.89.65:45132). Oct 9 07:19:17.914743 systemd-logind[1961]: Removed session 16. Oct 9 07:19:18.102409 sshd[5717]: Accepted publickey for core from 139.178.89.65 port 45132 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:18.105813 sshd[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:18.114716 systemd-logind[1961]: New session 17 of user core. Oct 9 07:19:18.122694 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:19:18.796235 sshd[5717]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:18.807020 systemd[1]: sshd@16-172.31.23.194:22-139.178.89.65:45132.service: Deactivated successfully. Oct 9 07:19:18.811207 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:19:18.816146 systemd-logind[1961]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:19:18.834040 systemd[1]: Started sshd@17-172.31.23.194:22-139.178.89.65:45144.service - OpenSSH per-connection server daemon (139.178.89.65:45144). Oct 9 07:19:18.835586 systemd-logind[1961]: Removed session 17. Oct 9 07:19:19.012756 sshd[5747]: Accepted publickey for core from 139.178.89.65 port 45144 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:19.014808 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:19.021495 systemd-logind[1961]: New session 18 of user core. Oct 9 07:19:19.027567 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 9 07:19:21.826527 sshd[5747]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:21.850777 systemd[1]: sshd@17-172.31.23.194:22-139.178.89.65:45144.service: Deactivated successfully. Oct 9 07:19:21.855341 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:19:21.860082 systemd-logind[1961]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:19:21.870409 systemd[1]: Started sshd@18-172.31.23.194:22-139.178.89.65:45158.service - OpenSSH per-connection server daemon (139.178.89.65:45158). Oct 9 07:19:21.873264 systemd-logind[1961]: Removed session 18. Oct 9 07:19:22.082129 sshd[5770]: Accepted publickey for core from 139.178.89.65 port 45158 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:22.084960 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:22.092191 systemd-logind[1961]: New session 19 of user core. Oct 9 07:19:22.098601 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:19:22.967840 sshd[5770]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:22.972923 systemd[1]: sshd@18-172.31.23.194:22-139.178.89.65:45158.service: Deactivated successfully. Oct 9 07:19:22.976272 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:19:22.978125 systemd-logind[1961]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:19:22.981145 systemd-logind[1961]: Removed session 19. Oct 9 07:19:23.004804 systemd[1]: Started sshd@19-172.31.23.194:22-139.178.89.65:45170.service - OpenSSH per-connection server daemon (139.178.89.65:45170). Oct 9 07:19:23.205382 sshd[5799]: Accepted publickey for core from 139.178.89.65 port 45170 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:23.207206 sshd[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:23.231703 systemd-logind[1961]: New session 20 of user core. Oct 9 07:19:23.241604 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 07:19:23.517765 sshd[5799]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:23.523196 systemd-logind[1961]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:19:23.523743 systemd[1]: sshd@19-172.31.23.194:22-139.178.89.65:45170.service: Deactivated successfully. Oct 9 07:19:23.529343 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 07:19:23.531084 systemd-logind[1961]: Removed session 20. 
Oct 9 07:19:24.662036 kubelet[3375]: I1009 07:19:24.661746 3375 topology_manager.go:215] "Topology Admit Handler" podUID="5349dc51-392f-475c-8960-dfe2170530b1" podNamespace="calico-apiserver" podName="calico-apiserver-7fcb8757df-d55ks" Oct 9 07:19:24.748642 kubelet[3375]: I1009 07:19:24.748417 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5349dc51-392f-475c-8960-dfe2170530b1-calico-apiserver-certs\") pod \"calico-apiserver-7fcb8757df-d55ks\" (UID: \"5349dc51-392f-475c-8960-dfe2170530b1\") " pod="calico-apiserver/calico-apiserver-7fcb8757df-d55ks" Oct 9 07:19:24.752698 kubelet[3375]: I1009 07:19:24.752260 3375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcc8n\" (UniqueName: \"kubernetes.io/projected/5349dc51-392f-475c-8960-dfe2170530b1-kube-api-access-zcc8n\") pod \"calico-apiserver-7fcb8757df-d55ks\" (UID: \"5349dc51-392f-475c-8960-dfe2170530b1\") " pod="calico-apiserver/calico-apiserver-7fcb8757df-d55ks" Oct 9 07:19:24.756091 systemd[1]: Created slice kubepods-besteffort-pod5349dc51_392f_475c_8960_dfe2170530b1.slice - libcontainer container kubepods-besteffort-pod5349dc51_392f_475c_8960_dfe2170530b1.slice. Oct 9 07:19:24.861268 kubelet[3375]: E1009 07:19:24.861221 3375 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:19:24.951288 kubelet[3375]: E1009 07:19:24.951101 3375 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5349dc51-392f-475c-8960-dfe2170530b1-calico-apiserver-certs podName:5349dc51-392f-475c-8960-dfe2170530b1 nodeName:}" failed. No retries permitted until 2024-10-09 07:19:25.383236023 +0000 UTC m=+96.330322434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/5349dc51-392f-475c-8960-dfe2170530b1-calico-apiserver-certs") pod "calico-apiserver-7fcb8757df-d55ks" (UID: "5349dc51-392f-475c-8960-dfe2170530b1") : secret "calico-apiserver-certs" not found Oct 9 07:19:25.668631 containerd[1982]: time="2024-10-09T07:19:25.668411420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcb8757df-d55ks,Uid:5349dc51-392f-475c-8960-dfe2170530b1,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:19:26.307816 systemd-networkd[1823]: cali64a207d100e: Link UP Oct 9 07:19:26.310998 systemd-networkd[1823]: cali64a207d100e: Gained carrier Oct 9 07:19:26.319615 (udev-worker)[5841]: Network interface NamePolicy= disabled on kernel command line. 
Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.076 [INFO][5823] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0 calico-apiserver-7fcb8757df- calico-apiserver 5349dc51-392f-475c-8960-dfe2170530b1 1101 0 2024-10-09 07:19:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcb8757df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-194 calico-apiserver-7fcb8757df-d55ks eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali64a207d100e [] []}} ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcb8757df-d55ks" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.076 [INFO][5823] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcb8757df-d55ks" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.148 [INFO][5833] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" HandleID="k8s-pod-network.5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Workload="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.201 [INFO][5833] ipam_plugin.go 270: Auto assigning IP ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" HandleID="k8s-pod-network.5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Workload="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ec330), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-194", "pod":"calico-apiserver-7fcb8757df-d55ks", "timestamp":"2024-10-09 07:19:26.148272633 +0000 UTC"}, Hostname:"ip-172-31-23-194", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.202 [INFO][5833] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.202 [INFO][5833] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.202 [INFO][5833] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-194' Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.219 [INFO][5833] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" host="ip-172-31-23-194" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.232 [INFO][5833] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-194" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.247 [INFO][5833] ipam.go 489: Trying affinity for 192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.257 [INFO][5833] ipam.go 155: Attempting to load block cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.261 [INFO][5833] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ip-172-31-23-194" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.261 [INFO][5833] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" host="ip-172-31-23-194" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.265 [INFO][5833] ipam.go 1685: Creating new handle: k8s-pod-network.5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1 Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.280 [INFO][5833] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" host="ip-172-31-23-194" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.295 [INFO][5833] ipam.go 1216: Successfully claimed IPs: [192.168.60.5/26] block=192.168.60.0/26 handle="k8s-pod-network.5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" host="ip-172-31-23-194" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.295 [INFO][5833] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.5/26] handle="k8s-pod-network.5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" host="ip-172-31-23-194" Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.295 [INFO][5833] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:19:26.361428 containerd[1982]: 2024-10-09 07:19:26.295 [INFO][5833] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.5/26] IPv6=[] ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" HandleID="k8s-pod-network.5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Workload="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" Oct 9 07:19:26.364108 containerd[1982]: 2024-10-09 07:19:26.301 [INFO][5823] k8s.go 386: Populated endpoint ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcb8757df-d55ks" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0", GenerateName:"calico-apiserver-7fcb8757df-", Namespace:"calico-apiserver", SelfLink:"", UID:"5349dc51-392f-475c-8960-dfe2170530b1", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 19, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcb8757df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"", Pod:"calico-apiserver-7fcb8757df-d55ks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64a207d100e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:19:26.364108 containerd[1982]: 2024-10-09 07:19:26.301 [INFO][5823] k8s.go 387: Calico CNI using IPs: [192.168.60.5/32] ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcb8757df-d55ks" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" Oct 9 07:19:26.364108 containerd[1982]: 2024-10-09 07:19:26.301 [INFO][5823] dataplane_linux.go 68: Setting the host side veth name to cali64a207d100e ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcb8757df-d55ks" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" Oct 9 07:19:26.364108 containerd[1982]: 2024-10-09 07:19:26.313 [INFO][5823] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcb8757df-d55ks" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" Oct 9 07:19:26.364108 containerd[1982]: 2024-10-09 07:19:26.315 [INFO][5823] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcb8757df-d55ks" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0", GenerateName:"calico-apiserver-7fcb8757df-", Namespace:"calico-apiserver", SelfLink:"", UID:"5349dc51-392f-475c-8960-dfe2170530b1", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 19, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcb8757df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-194", ContainerID:"5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1", Pod:"calico-apiserver-7fcb8757df-d55ks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64a207d100e", MAC:"9a:ed:98:4d:16:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:19:26.364108 containerd[1982]: 2024-10-09 07:19:26.337 [INFO][5823] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcb8757df-d55ks" WorkloadEndpoint="ip--172--31--23--194-k8s-calico--apiserver--7fcb8757df--d55ks-eth0" Oct 9 07:19:26.470794 containerd[1982]: time="2024-10-09T07:19:26.469654040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:19:26.471619 containerd[1982]: time="2024-10-09T07:19:26.471236597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:19:26.471619 containerd[1982]: time="2024-10-09T07:19:26.471335325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:19:26.471619 containerd[1982]: time="2024-10-09T07:19:26.471374590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:19:26.639858 systemd[1]: Started cri-containerd-5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1.scope - libcontainer container 5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1. 
Oct 9 07:19:26.746279 containerd[1982]: time="2024-10-09T07:19:26.746222864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcb8757df-d55ks,Uid:5349dc51-392f-475c-8960-dfe2170530b1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1\"" Oct 9 07:19:26.767872 containerd[1982]: time="2024-10-09T07:19:26.767811377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:19:27.852502 systemd-networkd[1823]: cali64a207d100e: Gained IPv6LL Oct 9 07:19:28.561289 systemd[1]: Started sshd@20-172.31.23.194:22-139.178.89.65:50630.service - OpenSSH per-connection server daemon (139.178.89.65:50630). Oct 9 07:19:28.759880 sshd[5898]: Accepted publickey for core from 139.178.89.65 port 50630 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:28.763282 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:28.774499 systemd-logind[1961]: New session 21 of user core. Oct 9 07:19:28.779709 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 07:19:29.511829 sshd[5898]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:29.531146 systemd[1]: sshd@20-172.31.23.194:22-139.178.89.65:50630.service: Deactivated successfully. Oct 9 07:19:29.531495 systemd-logind[1961]: Session 21 logged out. Waiting for processes to exit. Oct 9 07:19:29.537372 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 07:19:29.543398 systemd-logind[1961]: Removed session 21. Oct 9 07:19:30.596805 ntpd[1953]: Listen normally on 13 cali64a207d100e [fe80::ecee:eeff:feee:eeee%11]:123 Oct 9 07:19:30.597203 ntpd[1953]: 9 Oct 07:19:30 ntpd[1953]: Listen normally on 13 cali64a207d100e [fe80::ecee:eeff:feee:eeee%11]:123 Oct 9 07:19:31.759534 containerd[1982]: time="2024-10-09T07:19:31.758406226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 07:19:31.764814 containerd[1982]: time="2024-10-09T07:19:31.764762075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 4.99690014s" Oct 9 07:19:31.764814 containerd[1982]: time="2024-10-09T07:19:31.764808333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:19:31.793172 containerd[1982]: time="2024-10-09T07:19:31.793115761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:31.809915 containerd[1982]: time="2024-10-09T07:19:31.809862729Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:31.811203 containerd[1982]: time="2024-10-09T07:19:31.811165263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:19:31.818460 containerd[1982]: 
time="2024-10-09T07:19:31.818195795Z" level=info msg="CreateContainer within sandbox \"5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:19:31.858000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712024822.mount: Deactivated successfully. Oct 9 07:19:31.860323 containerd[1982]: time="2024-10-09T07:19:31.860287515Z" level=info msg="CreateContainer within sandbox \"5ef98279137c96b386ba2b576296b71acb93ee1e82727dcc4efa447d647fe5c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"54a1d84285826ef8b993888914b9712d844d7f3edb825b4d6855e81cf612a0df\"" Oct 9 07:19:31.861142 containerd[1982]: time="2024-10-09T07:19:31.861114489Z" level=info msg="StartContainer for \"54a1d84285826ef8b993888914b9712d844d7f3edb825b4d6855e81cf612a0df\"" Oct 9 07:19:32.059761 systemd[1]: run-containerd-runc-k8s.io-54a1d84285826ef8b993888914b9712d844d7f3edb825b4d6855e81cf612a0df-runc.xzT7RH.mount: Deactivated successfully. Oct 9 07:19:32.073851 systemd[1]: Started cri-containerd-54a1d84285826ef8b993888914b9712d844d7f3edb825b4d6855e81cf612a0df.scope - libcontainer container 54a1d84285826ef8b993888914b9712d844d7f3edb825b4d6855e81cf612a0df. Oct 9 07:19:32.163749 containerd[1982]: time="2024-10-09T07:19:32.163662089Z" level=info msg="StartContainer for \"54a1d84285826ef8b993888914b9712d844d7f3edb825b4d6855e81cf612a0df\" returns successfully" Oct 9 07:19:32.800053 kubelet[3375]: I1009 07:19:32.800003 3375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fcb8757df-d55ks" podStartSLOduration=3.770216101 podStartE2EDuration="8.78513666s" podCreationTimestamp="2024-10-09 07:19:24 +0000 UTC" firstStartedPulling="2024-10-09 07:19:26.750239894 +0000 UTC m=+97.697326303" lastFinishedPulling="2024-10-09 07:19:31.765160442 +0000 UTC m=+102.712246862" observedRunningTime="2024-10-09 07:19:32.306972216 +0000 UTC m=+103.254058648" watchObservedRunningTime="2024-10-09 07:19:32.78513666 +0000 UTC m=+103.732223091" Oct 9 07:19:34.552812 systemd[1]: Started sshd@21-172.31.23.194:22-139.178.89.65:50638.service - OpenSSH per-connection server daemon (139.178.89.65:50638). Oct 9 07:19:34.769747 sshd[5974]: Accepted publickey for core from 139.178.89.65 port 50638 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:34.772600 sshd[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:34.779302 systemd-logind[1961]: New session 22 of user core. Oct 9 07:19:34.784590 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 07:19:35.828906 sshd[5974]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:35.834141 systemd-logind[1961]: Session 22 logged out. Waiting for processes to exit. Oct 9 07:19:35.835138 systemd[1]: sshd@21-172.31.23.194:22-139.178.89.65:50638.service: Deactivated successfully. Oct 9 07:19:35.840815 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:19:35.846054 systemd-logind[1961]: Removed session 22. Oct 9 07:19:40.869273 systemd[1]: Started sshd@22-172.31.23.194:22-139.178.89.65:53424.service - OpenSSH per-connection server daemon (139.178.89.65:53424). 
Oct 9 07:19:41.040957 sshd[5996]: Accepted publickey for core from 139.178.89.65 port 53424 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:41.043565 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:41.055067 systemd-logind[1961]: New session 23 of user core. Oct 9 07:19:41.063605 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 07:19:41.481158 sshd[5996]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:41.493090 systemd[1]: sshd@22-172.31.23.194:22-139.178.89.65:53424.service: Deactivated successfully. Oct 9 07:19:41.500773 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 07:19:41.507091 systemd-logind[1961]: Session 23 logged out. Waiting for processes to exit. Oct 9 07:19:41.513153 systemd-logind[1961]: Removed session 23. Oct 9 07:19:46.533072 systemd[1]: Started sshd@23-172.31.23.194:22-139.178.89.65:53940.service - OpenSSH per-connection server daemon (139.178.89.65:53940). Oct 9 07:19:46.768952 sshd[6016]: Accepted publickey for core from 139.178.89.65 port 53940 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:46.785488 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:46.828715 systemd-logind[1961]: New session 24 of user core. Oct 9 07:19:46.833695 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 07:19:47.407321 sshd[6016]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:47.418143 systemd[1]: sshd@23-172.31.23.194:22-139.178.89.65:53940.service: Deactivated successfully. Oct 9 07:19:47.422256 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 07:19:47.426609 systemd-logind[1961]: Session 24 logged out. Waiting for processes to exit. Oct 9 07:19:47.431421 systemd-logind[1961]: Removed session 24. Oct 9 07:19:52.460877 systemd[1]: Started sshd@24-172.31.23.194:22-139.178.89.65:53946.service - OpenSSH per-connection server daemon (139.178.89.65:53946). Oct 9 07:19:52.788678 sshd[6052]: Accepted publickey for core from 139.178.89.65 port 53946 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:52.793342 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:52.800592 systemd-logind[1961]: New session 25 of user core. Oct 9 07:19:52.805661 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 9 07:19:53.365625 sshd[6052]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:53.378430 systemd[1]: sshd@24-172.31.23.194:22-139.178.89.65:53946.service: Deactivated successfully. Oct 9 07:19:53.383537 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 07:19:53.386335 systemd-logind[1961]: Session 25 logged out. Waiting for processes to exit. Oct 9 07:19:53.387726 systemd-logind[1961]: Removed session 25. Oct 9 07:19:58.405025 systemd[1]: Started sshd@25-172.31.23.194:22-139.178.89.65:50804.service - OpenSSH per-connection server daemon (139.178.89.65:50804). Oct 9 07:19:58.592119 sshd[6095]: Accepted publickey for core from 139.178.89.65 port 50804 ssh2: RSA SHA256:BjsJ/lx981z8fjQkklWlKi6NfD3vBaXt/xIj5M1daHs Oct 9 07:19:58.594019 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:58.601187 systemd-logind[1961]: New session 26 of user core. Oct 9 07:19:58.605554 systemd[1]: Started session-26.scope - Session 26 of User core. 
Oct 9 07:19:58.879280 sshd[6095]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:58.885816 systemd-logind[1961]: Session 26 logged out. Waiting for processes to exit. Oct 9 07:19:58.889019 systemd[1]: sshd@25-172.31.23.194:22-139.178.89.65:50804.service: Deactivated successfully. Oct 9 07:19:58.895803 systemd[1]: session-26.scope: Deactivated successfully. Oct 9 07:19:58.899770 systemd-logind[1961]: Removed session 26. Oct 9 07:20:13.972158 systemd[1]: cri-containerd-b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53.scope: Deactivated successfully. Oct 9 07:20:13.972817 systemd[1]: cri-containerd-b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53.scope: Consumed 6.431s CPU time. Oct 9 07:20:14.100921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53-rootfs.mount: Deactivated successfully. Oct 9 07:20:14.166963 containerd[1982]: time="2024-10-09T07:20:14.086282125Z" level=info msg="shim disconnected" id=b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53 namespace=k8s.io Oct 9 07:20:14.166963 containerd[1982]: time="2024-10-09T07:20:14.166961969Z" level=warning msg="cleaning up after shim disconnected" id=b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53 namespace=k8s.io Oct 9 07:20:14.167606 containerd[1982]: time="2024-10-09T07:20:14.166985655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:20:14.618585 systemd[1]: cri-containerd-fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e.scope: Deactivated successfully. Oct 9 07:20:14.618915 systemd[1]: cri-containerd-fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e.scope: Consumed 3.464s CPU time, 28.1M memory peak, 0B memory swap peak. Oct 9 07:20:14.661869 containerd[1982]: time="2024-10-09T07:20:14.661329350Z" level=info msg="shim disconnected" id=fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e namespace=k8s.io Oct 9 07:20:14.661869 containerd[1982]: time="2024-10-09T07:20:14.661458524Z" level=warning msg="cleaning up after shim disconnected" id=fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e namespace=k8s.io Oct 9 07:20:14.661869 containerd[1982]: time="2024-10-09T07:20:14.661473714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:20:14.665899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e-rootfs.mount: Deactivated successfully. 
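The two "shim disconnected" cleanups above mark the tigera-operator and kube-controller-manager containers exiting after consuming 6.431s and 3.464s of CPU respectively; the kubelet entries that follow show both being recreated inside their existing sandboxes with the attempt counter bumped to 1 (RemoveContainer, then CreateContainer … Attempt:1, then StartContainer). A compressed Go sketch of that remove/create/start ordering, with a hypothetical CRI-like interface standing in for kubelet's real runtime client:

package main

import (
	"context"
	"fmt"
)

// criRuntime is a hypothetical, much-reduced CRI-style interface; the real
// kubelet talks to containerd over the CRI gRPC API instead.
type criRuntime interface {
	RemoveContainer(ctx context.Context, id string) error
	CreateContainer(ctx context.Context, sandboxID, name string, attempt uint32) (string, error)
	StartContainer(ctx context.Context, id string) error
}

// restartInPlace mirrors the sequence in the log: drop the dead container,
// create a replacement in the same sandbox with attempt+1, then start it.
func restartInPlace(ctx context.Context, r criRuntime, sandboxID, name, deadID string, lastAttempt uint32) (string, error) {
	if err := r.RemoveContainer(ctx, deadID); err != nil {
		return "", fmt.Errorf("remove %s: %w", deadID, err)
	}
	newID, err := r.CreateContainer(ctx, sandboxID, name, lastAttempt+1)
	if err != nil {
		return "", fmt.Errorf("create %s attempt %d: %w", name, lastAttempt+1, err)
	}
	if err := r.StartContainer(ctx, newID); err != nil {
		return "", fmt.Errorf("start %s: %w", newID, err)
	}
	return newID, nil
}

// fakeRuntime just prints the calls so the sketch runs standalone.
type fakeRuntime struct{ counter int }

func (f *fakeRuntime) RemoveContainer(_ context.Context, id string) error {
	fmt.Println("remove", id)
	return nil
}

func (f *fakeRuntime) CreateContainer(_ context.Context, sandboxID, name string, attempt uint32) (string, error) {
	f.counter++
	fmt.Printf("create %s (attempt %d) in sandbox %s\n", name, attempt, sandboxID)
	return fmt.Sprintf("%s-%d", name, f.counter), nil
}

func (f *fakeRuntime) StartContainer(_ context.Context, id string) error {
	fmt.Println("start", id)
	return nil
}

func main() {
	id, err := restartInPlace(context.Background(), &fakeRuntime{},
		"example-sandbox-id", "tigera-operator", "dead-container-id", 0)
	fmt.Println(id, err)
}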
Oct 9 07:20:14.696481 kubelet[3375]: I1009 07:20:14.696450 3375 scope.go:117] "RemoveContainer" containerID="b5e692b1df11dde88a1e8cc68a1d276e7f73778cfe85f56e05cd01af3580db53" Oct 9 07:20:14.723625 containerd[1982]: time="2024-10-09T07:20:14.723579791Z" level=info msg="CreateContainer within sandbox \"c4d36eecc69f37770f52f5c1ff6e0d7da6ea57ff6b7d3a63cbd83750d9834593\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 9 07:20:14.751437 containerd[1982]: time="2024-10-09T07:20:14.751390997Z" level=info msg="CreateContainer within sandbox \"c4d36eecc69f37770f52f5c1ff6e0d7da6ea57ff6b7d3a63cbd83750d9834593\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3e0dad392b0d5362b88aa1c14478b9803795ba872c2e054f5883195657460d46\"" Oct 9 07:20:14.752020 containerd[1982]: time="2024-10-09T07:20:14.751988315Z" level=info msg="StartContainer for \"3e0dad392b0d5362b88aa1c14478b9803795ba872c2e054f5883195657460d46\"" Oct 9 07:20:14.817046 systemd[1]: Started cri-containerd-3e0dad392b0d5362b88aa1c14478b9803795ba872c2e054f5883195657460d46.scope - libcontainer container 3e0dad392b0d5362b88aa1c14478b9803795ba872c2e054f5883195657460d46. Oct 9 07:20:14.874459 containerd[1982]: time="2024-10-09T07:20:14.873881867Z" level=info msg="StartContainer for \"3e0dad392b0d5362b88aa1c14478b9803795ba872c2e054f5883195657460d46\" returns successfully" Oct 9 07:20:15.733919 kubelet[3375]: I1009 07:20:15.733874 3375 scope.go:117] "RemoveContainer" containerID="fcb3d516fbd5c2d3e176cb4dca2a63d1e8b6a0521fb1004c035b0bb6f376923e" Oct 9 07:20:15.738024 containerd[1982]: time="2024-10-09T07:20:15.737929406Z" level=info msg="CreateContainer within sandbox \"b0145828e7566af3598fc244b9106d69fff78266b34c0842659ef2db008273b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 9 07:20:15.779543 containerd[1982]: time="2024-10-09T07:20:15.775689472Z" level=info msg="CreateContainer within sandbox \"b0145828e7566af3598fc244b9106d69fff78266b34c0842659ef2db008273b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7b45225d13d29e546fd22e269062fd315e8011259c0acaf3a276d8fe6dcec22a\"" Oct 9 07:20:15.792729 containerd[1982]: time="2024-10-09T07:20:15.792679944Z" level=info msg="StartContainer for \"7b45225d13d29e546fd22e269062fd315e8011259c0acaf3a276d8fe6dcec22a\"" Oct 9 07:20:15.793805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1610744495.mount: Deactivated successfully. Oct 9 07:20:15.880434 systemd[1]: run-containerd-runc-k8s.io-7b45225d13d29e546fd22e269062fd315e8011259c0acaf3a276d8fe6dcec22a-runc.fJYXeB.mount: Deactivated successfully. Oct 9 07:20:15.886602 systemd[1]: Started cri-containerd-7b45225d13d29e546fd22e269062fd315e8011259c0acaf3a276d8fe6dcec22a.scope - libcontainer container 7b45225d13d29e546fd22e269062fd315e8011259c0acaf3a276d8fe6dcec22a. Oct 9 07:20:15.949669 containerd[1982]: time="2024-10-09T07:20:15.949599909Z" level=info msg="StartContainer for \"7b45225d13d29e546fd22e269062fd315e8011259c0acaf3a276d8fe6dcec22a\" returns successfully" Oct 9 07:20:18.339502 systemd[1]: run-containerd-runc-k8s.io-b0500e55b98a1e94ab281401d1b6655bb3dbd7de5193b75343e10034fdfe2efa-runc.Etanjg.mount: Deactivated successfully. Oct 9 07:20:19.454658 systemd[1]: cri-containerd-e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e.scope: Deactivated successfully. 
Oct 9 07:20:19.454969 systemd[1]: cri-containerd-e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e.scope: Consumed 2.297s CPU time, 20.1M memory peak, 0B memory swap peak. Oct 9 07:20:19.538128 containerd[1982]: time="2024-10-09T07:20:19.533985968Z" level=info msg="shim disconnected" id=e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e namespace=k8s.io Oct 9 07:20:19.538128 containerd[1982]: time="2024-10-09T07:20:19.534463204Z" level=warning msg="cleaning up after shim disconnected" id=e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e namespace=k8s.io Oct 9 07:20:19.538128 containerd[1982]: time="2024-10-09T07:20:19.534480622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:20:19.536142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e-rootfs.mount: Deactivated successfully. Oct 9 07:20:19.787609 kubelet[3375]: I1009 07:20:19.785980 3375 scope.go:117] "RemoveContainer" containerID="e0e602064a15af70b4558b046f3d7424e3d5f189dce59ceb70d7e9d0bc93ee1e" Oct 9 07:20:19.796396 containerd[1982]: time="2024-10-09T07:20:19.796228070Z" level=info msg="CreateContainer within sandbox \"6538f6717180c7fd54bed7635f94e3f213f8cef482efcb4139e8e1da104cfef9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Oct 9 07:20:19.854514 containerd[1982]: time="2024-10-09T07:20:19.854459340Z" level=info msg="CreateContainer within sandbox \"6538f6717180c7fd54bed7635f94e3f213f8cef482efcb4139e8e1da104cfef9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6193176c2e00e67abd7754e6be3281c997bfbe21e9b95fb36bf83619564fa1da\"" Oct 9 07:20:19.857139 containerd[1982]: time="2024-10-09T07:20:19.855141730Z" level=info msg="StartContainer for \"6193176c2e00e67abd7754e6be3281c997bfbe21e9b95fb36bf83619564fa1da\"" Oct 9 07:20:19.924704 systemd[1]: Started cri-containerd-6193176c2e00e67abd7754e6be3281c997bfbe21e9b95fb36bf83619564fa1da.scope - libcontainer container 6193176c2e00e67abd7754e6be3281c997bfbe21e9b95fb36bf83619564fa1da. Oct 9 07:20:20.053767 containerd[1982]: time="2024-10-09T07:20:20.053648671Z" level=info msg="StartContainer for \"6193176c2e00e67abd7754e6be3281c997bfbe21e9b95fb36bf83619564fa1da\" returns successfully" Oct 9 07:20:22.044282 kubelet[3375]: E1009 07:20:22.044034 3375 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-194?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 9 07:20:32.045802 kubelet[3375]: E1009 07:20:32.045571 3375 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-194?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
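The closing "Failed to update lease" errors mean kubelet's renewals of the ip-172-31-23-194 Lease in the kube-node-lease namespace are timing out against the API server at 172.31.23.194:6443, which may be related to the control-plane containers restarting just above; if neither the lease nor the node status heartbeat gets through for long enough, the node controller will mark the node NotReady. A small client-go sketch (run from a machine with kubeconfig access; not something taken from these logs) for checking how stale that lease is:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials the usual way; adjust the kubeconfig path for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Node leases live in the kube-node-lease namespace, one per node.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "ip-172-31-23-194", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	if lease.Spec.RenewTime != nil {
		age := time.Since(lease.Spec.RenewTime.Time)
		fmt.Printf("lease last renewed %s ago (holder %s, duration %ds)\n",
			age.Round(time.Second),
			strOr(lease.Spec.HolderIdentity, "<none>"),
			int32Or(lease.Spec.LeaseDurationSeconds, 0))
	}
}

// small helpers for the pointer-typed spec fields
func strOr(s *string, def string) string {
	if s == nil {
		return def
	}
	return *s
}

func int32Or(i *int32, def int32) int32 {
	if i == nil {
		return def
	}
	return *i
}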