Jul 2 00:22:56.022178 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:22:56.022218 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:22:56.022232 kernel: BIOS-provided physical RAM map:
Jul 2 00:22:56.022243 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:22:56.022253 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:22:56.022263 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:22:56.022278 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jul 2 00:22:56.022289 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jul 2 00:22:56.022299 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jul 2 00:22:56.022310 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:22:56.022320 kernel: NX (Execute Disable) protection: active
Jul 2 00:22:56.022331 kernel: APIC: Static calls initialized
Jul 2 00:22:56.022341 kernel: SMBIOS 2.7 present.
Jul 2 00:22:56.022351 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 2 00:22:56.022367 kernel: Hypervisor detected: KVM
Jul 2 00:22:56.022379 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:22:56.022391 kernel: kvm-clock: using sched offset of 6982437984 cycles
Jul 2 00:22:56.022403 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:22:56.022416 kernel: tsc: Detected 2499.996 MHz processor
Jul 2 00:22:56.022427 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:22:56.022440 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:22:56.022454 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jul 2 00:22:56.022466 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:22:56.022478 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:22:56.022489 kernel: Using GB pages for direct mapping
Jul 2 00:22:56.022501 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:22:56.023739 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jul 2 00:22:56.023754 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jul 2 00:22:56.023766 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 00:22:56.023778 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 2 00:22:56.023798 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jul 2 00:22:56.023810 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 2 00:22:56.023822 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 00:22:56.023834 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 2 00:22:56.023846 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 00:22:56.023858 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 2 00:22:56.023869 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 2 00:22:56.023881 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 2 00:22:56.023896 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jul 2 00:22:56.023908 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jul 2 00:22:56.023925 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jul 2 00:22:56.023937 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jul 2 00:22:56.024015 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jul 2 00:22:56.024033 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jul 2 00:22:56.024140 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jul 2 00:22:56.024157 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jul 2 00:22:56.024173 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jul 2 00:22:56.024188 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jul 2 00:22:56.024203 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 00:22:56.024218 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 00:22:56.024778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 2 00:22:56.024797 kernel: NUMA: Initialized distance table, cnt=1
Jul 2 00:22:56.024812 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jul 2 00:22:56.024833 kernel: Zone ranges:
Jul 2 00:22:56.024848 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:22:56.024863 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jul 2 00:22:56.024878 kernel: Normal empty
Jul 2 00:22:56.024893 kernel: Movable zone start for each node
Jul 2 00:22:56.024908 kernel: Early memory node ranges
Jul 2 00:22:56.024923 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:22:56.024937 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jul 2 00:22:56.024952 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jul 2 00:22:56.024971 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:22:56.024985 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:22:56.025000 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jul 2 00:22:56.025015 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 2 00:22:56.025030 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:22:56.025045 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 2 00:22:56.025060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:22:56.025075 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:22:56.025090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:22:56.025109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:22:56.025123 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:22:56.025138 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:22:56.025153 kernel: TSC deadline timer available
Jul 2 00:22:56.025168 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:22:56.025183 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:22:56.025199 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jul 2 00:22:56.025213 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:22:56.025229 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:22:56.025298 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:22:56.025320 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:22:56.025336 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:22:56.025350 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:22:56.025365 kernel: kvm-guest: PV spinlocks enabled
Jul 2 00:22:56.025380 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:22:56.025397 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:22:56.025413 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:22:56.025428 kernel: random: crng init done
Jul 2 00:22:56.025445 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:22:56.025461 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:22:56.025475 kernel: Fallback order for Node 0: 0
Jul 2 00:22:56.025490 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jul 2 00:22:56.025505 kernel: Policy zone: DMA32
Jul 2 00:22:56.025520 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:22:56.025535 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved)
Jul 2 00:22:56.025551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:22:56.025569 kernel: Kernel/User page tables isolation: enabled
Jul 2 00:22:56.025583 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:22:56.025598 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:22:56.025612 kernel: Dynamic Preempt: voluntary
Jul 2 00:22:56.026340 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:22:56.026356 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:22:56.026370 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:22:56.026383 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:22:56.026396 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:22:56.026408 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:22:56.026425 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:22:56.026438 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:22:56.026516 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 00:22:56.026530 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:22:56.026542 kernel: Console: colour VGA+ 80x25
Jul 2 00:22:56.026563 kernel: printk: console [ttyS0] enabled
Jul 2 00:22:56.026576 kernel: ACPI: Core revision 20230628
Jul 2 00:22:56.026590 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 2 00:22:56.026603 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:22:56.026707 kernel: x2apic enabled
Jul 2 00:22:56.026721 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:22:56.026745 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 2 00:22:56.026769 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jul 2 00:22:56.026783 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 00:22:56.026796 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 00:22:56.026810 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:22:56.026823 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:22:56.026836 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:22:56.026849 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:22:56.026862 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 2 00:22:56.026875 kernel: RETBleed: Vulnerable
Jul 2 00:22:56.026891 kernel: Speculative Store Bypass: Vulnerable
Jul 2 00:22:56.026904 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:22:56.026917 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 00:22:56.026929 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 00:22:56.026942 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:22:56.026960 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:22:56.026977 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:22:56.026990 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 2 00:22:56.027003 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 2 00:22:56.027016 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 2 00:22:56.027029 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 2 00:22:56.027042 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 2 00:22:56.027055 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 2 00:22:56.027120 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:22:56.027135 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 2 00:22:56.027149 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 2 00:22:56.027161 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 2 00:22:56.027178 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 2 00:22:56.027190 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 2 00:22:56.027203 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 2 00:22:56.027217 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 2 00:22:56.027236 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:22:56.027249 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:22:56.027262 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:22:56.027275 kernel: SELinux: Initializing.
Jul 2 00:22:56.027288 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:22:56.027302 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:22:56.027315 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 2 00:22:56.027328 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:22:56.027345 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:22:56.027358 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:22:56.027372 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 2 00:22:56.027385 kernel: signal: max sigframe size: 3632
Jul 2 00:22:56.027398 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:22:56.027418 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:22:56.027432 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 00:22:56.027446 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:22:56.027459 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:22:56.027475 kernel: .... node #0, CPUs: #1
Jul 2 00:22:56.027490 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 2 00:22:56.027504 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 00:22:56.027517 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:22:56.027531 kernel: smpboot: Max logical packages: 1
Jul 2 00:22:56.027545 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jul 2 00:22:56.027650 kernel: devtmpfs: initialized
Jul 2 00:22:56.027666 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:22:56.027757 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:22:56.027773 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:22:56.027787 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:22:56.027800 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:22:56.027813 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:22:56.027827 kernel: audit: type=2000 audit(1719879775.266:1): state=initialized audit_enabled=0 res=1
Jul 2 00:22:56.027840 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:22:56.027854 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:22:56.027867 kernel: cpuidle: using governor menu
Jul 2 00:22:56.027884 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:22:56.027897 kernel: dca service started, version 1.12.1
Jul 2 00:22:56.028139 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:22:56.028159 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:22:56.028175 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:22:56.028191 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:22:56.028207 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:22:56.028223 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:22:56.028240 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:22:56.028260 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:22:56.028276 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:22:56.028291 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:22:56.028307 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 2 00:22:56.028323 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:22:56.028339 kernel: ACPI: Interpreter enabled
Jul 2 00:22:56.028355 kernel: ACPI: PM: (supports S0 S5)
Jul 2 00:22:56.028440 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:22:56.028459 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:22:56.028520 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:22:56.028541 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jul 2 00:22:56.028558 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:22:56.029388 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:22:56.029548 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 2 00:22:56.032572 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 2 00:22:56.032602 kernel: acpiphp: Slot [3] registered
Jul 2 00:22:56.033662 kernel: acpiphp: Slot [4] registered
Jul 2 00:22:56.033683 kernel: acpiphp: Slot [5] registered
Jul 2 00:22:56.033699 kernel: acpiphp: Slot [6] registered
Jul 2 00:22:56.033715 kernel: acpiphp: Slot [7] registered
Jul 2 00:22:56.033730 kernel: acpiphp: Slot [8] registered
Jul 2 00:22:56.033745 kernel: acpiphp: Slot [9] registered
Jul 2 00:22:56.033761 kernel: acpiphp: Slot [10] registered
Jul 2 00:22:56.033889 kernel: acpiphp: Slot [11] registered
Jul 2 00:22:56.033907 kernel: acpiphp: Slot [12] registered
Jul 2 00:22:56.033923 kernel: acpiphp: Slot [13] registered
Jul 2 00:22:56.033945 kernel: acpiphp: Slot [14] registered
Jul 2 00:22:56.033961 kernel: acpiphp: Slot [15] registered
Jul 2 00:22:56.033976 kernel: acpiphp: Slot [16] registered
Jul 2 00:22:56.033992 kernel: acpiphp: Slot [17] registered
Jul 2 00:22:56.034007 kernel: acpiphp: Slot [18] registered
Jul 2 00:22:56.034023 kernel: acpiphp: Slot [19] registered
Jul 2 00:22:56.034038 kernel: acpiphp: Slot [20] registered
Jul 2 00:22:56.034054 kernel: acpiphp: Slot [21] registered
Jul 2 00:22:56.034069 kernel: acpiphp: Slot [22] registered
Jul 2 00:22:56.034230 kernel: acpiphp: Slot [23] registered
Jul 2 00:22:56.034247 kernel: acpiphp: Slot [24] registered
Jul 2 00:22:56.034263 kernel: acpiphp: Slot [25] registered
Jul 2 00:22:56.034278 kernel: acpiphp: Slot [26] registered
Jul 2 00:22:56.034294 kernel: acpiphp: Slot [27] registered
Jul 2 00:22:56.034309 kernel: acpiphp: Slot [28] registered
Jul 2 00:22:56.034325 kernel: acpiphp: Slot [29] registered
Jul 2 00:22:56.034340 kernel: acpiphp: Slot [30] registered
Jul 2 00:22:56.034356 kernel: acpiphp: Slot [31] registered
Jul 2 00:22:56.034371 kernel: PCI host bridge to bus 0000:00
Jul 2 00:22:56.034555 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:22:56.036796 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:22:56.036928 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:22:56.037140 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 00:22:56.037366 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:22:56.037520 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:22:56.038755 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:22:56.038916 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jul 2 00:22:56.039041 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 2 00:22:56.039163 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jul 2 00:22:56.039367 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 2 00:22:56.039493 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 2 00:22:56.039614 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 2 00:22:56.046900 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 2 00:22:56.047113 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 2 00:22:56.047320 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 2 00:22:56.047465 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 13671 usecs
Jul 2 00:22:56.047605 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jul 2 00:22:56.049840 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jul 2 00:22:56.049989 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 2 00:22:56.050143 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:22:56.050294 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 00:22:56.050433 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jul 2 00:22:56.050581 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 00:22:56.053093 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jul 2 00:22:56.053125 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:22:56.053142 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:22:56.053165 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:22:56.053181 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:22:56.053197 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:22:56.053213 kernel: iommu: Default domain type: Translated
Jul 2 00:22:56.053229 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:22:56.053244 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:22:56.053260 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:22:56.053276 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:22:56.053291 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jul 2 00:22:56.053437 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 2 00:22:56.053576 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 2 00:22:56.053893 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:22:56.053919 kernel: vgaarb: loaded
Jul 2 00:22:56.053936 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 2 00:22:56.053953 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 2 00:22:56.053969 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:22:56.053985 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:22:56.054001 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:22:56.054022 kernel: pnp: PnP ACPI init
Jul 2 00:22:56.054037 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 00:22:56.054054 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:22:56.054070 kernel: NET: Registered PF_INET protocol family
Jul 2 00:22:56.054095 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:22:56.054111 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 00:22:56.054127 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:22:56.054142 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:22:56.054162 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 00:22:56.054178 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 00:22:56.054194 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:22:56.054210 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:22:56.054226 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:22:56.054242 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:22:56.054492 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:22:56.054637 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:22:56.054775 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:22:56.054902 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 00:22:56.055045 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:22:56.055066 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:22:56.055159 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 00:22:56.055176 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 2 00:22:56.055192 kernel: clocksource: Switched to clocksource tsc
Jul 2 00:22:56.055208 kernel: Initialise system trusted keyrings
Jul 2 00:22:56.055224 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 00:22:56.055244 kernel: Key type asymmetric registered
Jul 2 00:22:56.055260 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:22:56.055276 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:22:56.055292 kernel: io scheduler mq-deadline registered
Jul 2 00:22:56.055308 kernel: io scheduler kyber registered
Jul 2 00:22:56.055324 kernel: io scheduler bfq registered
Jul 2 00:22:56.055340 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:22:56.055356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:22:56.055372 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:22:56.055392 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:22:56.055408 kernel: i8042: Warning: Keylock active
Jul 2 00:22:56.055423 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:22:56.055440 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:22:56.055596 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 2 00:22:56.058796 kernel: rtc_cmos 00:00: registered as rtc0
Jul 2 00:22:56.058938 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T00:22:55 UTC (1719879775)
Jul 2 00:22:56.059068 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 2 00:22:56.059094 kernel: intel_pstate: CPU model not supported
Jul 2 00:22:56.059111 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:22:56.059128 kernel: Segment Routing with IPv6
Jul 2 00:22:56.059144 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:22:56.059160 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:22:56.059177 kernel: Key type dns_resolver registered
Jul 2 00:22:56.059193 kernel: IPI shorthand broadcast: enabled
Jul 2 00:22:56.059209 kernel: sched_clock: Marking stable (653002290, 284409843)->(1108490328, -171078195)
Jul 2 00:22:56.059225 kernel: registered taskstats version 1
Jul 2 00:22:56.059244 kernel: Loading compiled-in X.509 certificates
Jul 2 00:22:56.059260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:22:56.059276 kernel: Key type .fscrypt registered
Jul 2 00:22:56.059292 kernel: Key type fscrypt-provisioning registered
Jul 2 00:22:56.059308 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:22:56.059324 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:22:56.059340 kernel: ima: No architecture policies found
Jul 2 00:22:56.059356 kernel: clk: Disabling unused clocks
Jul 2 00:22:56.059372 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:22:56.059391 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:22:56.059408 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:22:56.059424 kernel: Run /init as init process
Jul 2 00:22:56.059440 kernel: with arguments:
Jul 2 00:22:56.059456 kernel: /init
Jul 2 00:22:56.059471 kernel: with environment:
Jul 2 00:22:56.059486 kernel: HOME=/
Jul 2 00:22:56.059502 kernel: TERM=linux
Jul 2 00:22:56.059517 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:22:56.059543 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:22:56.059577 systemd[1]: Detected virtualization amazon.
Jul 2 00:22:56.059597 systemd[1]: Detected architecture x86-64.
Jul 2 00:22:56.059614 systemd[1]: Running in initrd.
Jul 2 00:22:56.059697 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:22:56.059717 systemd[1]: Hostname set to .
Jul 2 00:22:56.059735 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:22:56.059752 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:22:56.059770 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:22:56.059787 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:22:56.059806 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:22:56.059823 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:22:56.059947 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:22:56.059970 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:22:56.059992 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:22:56.060009 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:22:56.060027 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:22:56.060044 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:22:56.060062 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:22:56.060082 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:22:56.060099 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:22:56.060117 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:22:56.060134 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:22:56.060151 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:22:56.060169 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:22:56.060187 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:22:56.060204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:22:56.060222 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:22:56.060243 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:22:56.060261 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:22:56.060278 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:22:56.060296 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:22:56.060314 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 00:22:56.060331 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 00:22:56.060349 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:22:56.060369 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:22:56.060387 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 00:22:56.060440 systemd-journald[178]: Collecting audit messages is disabled. Jul 2 00:22:56.060484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:22:56.060505 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 00:22:56.060523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:22:56.060541 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 00:22:56.060560 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:22:56.060579 systemd-journald[178]: Journal started Jul 2 00:22:56.064642 systemd-journald[178]: Runtime Journal (/run/log/journal/ec23a20b60c7b1fb838c065c1421c7d9) is 4.8M, max 38.6M, 33.7M free. Jul 2 00:22:56.044683 systemd-modules-load[179]: Inserted module 'overlay' Jul 2 00:22:56.072652 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:22:56.102995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 00:22:56.102960 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Jul 2 00:22:56.285838 kernel: Bridge firewalling registered Jul 2 00:22:56.107649 systemd-modules-load[179]: Inserted module 'br_netfilter' Jul 2 00:22:56.291878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:22:56.294589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:22:56.296155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:22:56.308889 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:22:56.312579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:22:56.319868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:22:56.320570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:22:56.339882 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:22:56.345505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:22:56.355839 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:22:56.358220 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:22:56.362774 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 2 00:22:56.390473 dracut-cmdline[214]: dracut-dracut-053 Jul 2 00:22:56.392230 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:22:56.400850 systemd-resolved[212]: Positive Trust Anchors: Jul 2 00:22:56.400869 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:22:56.400906 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:22:56.416298 systemd-resolved[212]: Defaulting to hostname 'linux'. Jul 2 00:22:56.418974 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:22:56.421802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:22:56.530651 kernel: SCSI subsystem initialized Jul 2 00:22:56.544714 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 00:22:56.561646 kernel: iscsi: registered transport (tcp) Jul 2 00:22:56.593647 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:22:56.593721 kernel: QLogic iSCSI HBA Driver Jul 2 00:22:56.651345 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:22:56.662928 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:22:56.719456 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 00:22:56.719539 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:22:56.719560 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:22:56.772743 kernel: raid6: avx512x4 gen() 14497 MB/s Jul 2 00:22:56.789674 kernel: raid6: avx512x2 gen() 13227 MB/s Jul 2 00:22:56.807713 kernel: raid6: avx512x1 gen() 12587 MB/s Jul 2 00:22:56.824695 kernel: raid6: avx2x4 gen() 12281 MB/s Jul 2 00:22:56.841676 kernel: raid6: avx2x2 gen() 14383 MB/s Jul 2 00:22:56.864301 kernel: raid6: avx2x1 gen() 11195 MB/s Jul 2 00:22:56.864379 kernel: raid6: using algorithm avx512x4 gen() 14497 MB/s Jul 2 00:22:56.882034 kernel: raid6: .... xor() 4343 MB/s, rmw enabled Jul 2 00:22:56.882152 kernel: raid6: using avx512x2 recovery algorithm Jul 2 00:22:56.951659 kernel: xor: automatically using best checksumming function avx Jul 2 00:22:57.223652 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:22:57.238303 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:22:57.246894 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:22:57.280184 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jul 2 00:22:57.289431 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:22:57.298833 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 2 00:22:57.329672 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Jul 2 00:22:57.375246 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:22:57.386920 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:22:57.481214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:22:57.492832 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:22:57.521495 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:22:57.536585 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:22:57.540125 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:22:57.545370 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:22:57.554964 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:22:57.585602 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:22:57.630805 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 00:22:57.647157 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:22:57.649713 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:22:57.655392 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:22:57.662708 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:22:57.663260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:22:57.665850 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:22:57.691406 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 00:22:57.691472 kernel: AES CTR mode by8 optimization enabled Jul 2 00:22:57.691290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 2 00:22:57.705830 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 00:22:57.717800 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 00:22:57.717993 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jul 2 00:22:57.718157 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:a0:47:4b:de:99 Jul 2 00:22:57.724277 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:22:57.761089 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 00:22:57.761559 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 2 00:22:57.776850 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 00:22:57.780647 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 00:22:57.780713 kernel: GPT:9289727 != 16777215 Jul 2 00:22:57.780734 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 00:22:57.780753 kernel: GPT:9289727 != 16777215 Jul 2 00:22:57.780772 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 00:22:57.780792 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:22:57.923087 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (443) Jul 2 00:22:57.955720 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (454) Jul 2 00:22:57.983067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:22:57.995856 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:22:58.042507 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 2 00:22:58.049555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:22:58.093056 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
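The GPT complaints above ("GPT:9289727 != 16777215") mean the backup GPT header was found where the end of a smaller original disk image used to be, rather than at the current last LBA of the grown EBS volume, which is why the later disk-uuid.service rewrites the headers. A minimal sketch of that consistency check; the numbers are taken from the log, the function name is ours:

```python
def backup_header_mismatch(backup_header_lba, total_sectors):
    """GPT expects the backup (alternate) header at the last addressable LBA,
    i.e. total_sectors - 1. Returns (is_mismatched, expected_lba)."""
    expected = total_sectors - 1
    return backup_header_lba != expected, expected

# Values from the log: the volume has 16,777,216 512-byte sectors (8 GiB),
# but the backup header sits at LBA 9,289,727 -- the last LBA of the smaller
# image the volume was created from.
mismatch, expected = backup_header_mismatch(9289727, 16777216)
# mismatch is True and expected is 16777215, matching "GPT:9289727 != 16777215".
```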
Jul 2 00:22:58.100673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 00:22:58.107892 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 2 00:22:58.108033 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 2 00:22:58.118980 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:22:58.142027 disk-uuid[627]: Primary Header is updated.
Jul 2 00:22:58.142027 disk-uuid[627]: Secondary Entries is updated.
Jul 2 00:22:58.142027 disk-uuid[627]: Secondary Header is updated.
Jul 2 00:22:58.152667 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:22:58.162647 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:22:58.170653 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:22:59.175406 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:22:59.178782 disk-uuid[628]: The operation has completed successfully.
Jul 2 00:22:59.416189 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:22:59.416334 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:22:59.463843 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:22:59.481829 sh[971]: Success
Jul 2 00:22:59.506710 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 2 00:22:59.651526 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:22:59.672817 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:22:59.685879 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:22:59.710945 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:22:59.711093 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:22:59.711116 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:22:59.713165 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:22:59.713312 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:22:59.813656 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 2 00:22:59.835501 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:22:59.838664 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:22:59.848379 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:22:59.854967 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:22:59.888018 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:22:59.888096 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:22:59.888121 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:22:59.896819 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:22:59.926566 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:22:59.928102 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:22:59.955910 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:22:59.967084 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:23:00.063417 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:23:00.074845 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:23:00.116045 systemd-networkd[1175]: lo: Link UP
Jul 2 00:23:00.116057 systemd-networkd[1175]: lo: Gained carrier
Jul 2 00:23:00.118399 systemd-networkd[1175]: Enumeration completed
Jul 2 00:23:00.119731 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:23:00.125606 systemd-networkd[1175]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:00.125612 systemd-networkd[1175]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:23:00.129001 systemd[1]: Reached target network.target - Network.
Jul 2 00:23:00.139422 systemd-networkd[1175]: eth0: Link UP
Jul 2 00:23:00.139433 systemd-networkd[1175]: eth0: Gained carrier
Jul 2 00:23:00.139450 systemd-networkd[1175]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:00.161772 systemd-networkd[1175]: eth0: DHCPv4 address 172.31.16.250/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:23:00.440116 ignition[1108]: Ignition 2.18.0
Jul 2 00:23:00.440138 ignition[1108]: Stage: fetch-offline
Jul 2 00:23:00.440487 ignition[1108]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:00.440502 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:23:00.444494 ignition[1108]: Ignition finished successfully
Jul 2 00:23:00.446222 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:23:00.450892 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:23:00.527108 ignition[1186]: Ignition 2.18.0
Jul 2 00:23:00.527124 ignition[1186]: Stage: fetch
Jul 2 00:23:00.529931 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:00.530696 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:23:00.532075 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:23:00.548368 ignition[1186]: PUT result: OK
Jul 2 00:23:00.551377 ignition[1186]: parsed url from cmdline: ""
Jul 2 00:23:00.551390 ignition[1186]: no config URL provided
Jul 2 00:23:00.551400 ignition[1186]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:23:00.551416 ignition[1186]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:23:00.551609 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:23:00.553078 ignition[1186]: PUT result: OK
Jul 2 00:23:00.553135 ignition[1186]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 2 00:23:00.555789 ignition[1186]: GET result: OK
Jul 2 00:23:00.557098 ignition[1186]: parsing config with SHA512: 07ad6e22b8d5c020263cbd726ca710fa1eb246281a10b5cc99ba65c8be7a2c83826ba4ff0e433983c6814540560990af9cf02d05a6e2517ba47051e524c652f2
Jul 2 00:23:00.565837 unknown[1186]: fetched base config from "system"
Jul 2 00:23:00.565851 unknown[1186]: fetched base config from "system"
Jul 2 00:23:00.565861 unknown[1186]: fetched user config from "aws"
Jul 2 00:23:00.569935 ignition[1186]: fetch: fetch complete
Jul 2 00:23:00.569957 ignition[1186]: fetch: fetch passed
Jul 2 00:23:00.570040 ignition[1186]: Ignition finished successfully
Jul 2 00:23:00.574797 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:23:00.579989 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
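The fetch stage above follows the EC2 IMDSv2 pattern: a PUT to mint a session token, then a GET for the user data with the token attached. A minimal sketch of that exchange using Python's urllib; the endpoint paths are the ones in the log, while the helper names and the TTL value are our assumptions:

```python
import urllib.request

IMDS = "http://169.254.169.254"  # link-local metadata endpoint from the log

def token_request(ttl_seconds=21600):
    # IMDSv2: PUT /latest/api/token with a TTL header mints a session token.
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def user_data_request(token):
    # GET the same user-data path the log shows, presenting the token.
    return urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )

def fetch_user_data():
    # Only works on an EC2 instance; elsewhere the link-local address
    # is unreachable and these calls time out.
    with urllib.request.urlopen(token_request(), timeout=2) as resp:
        token = resp.read().decode()
    with urllib.request.urlopen(user_data_request(token), timeout=2) as resp:
        return resp.read().decode()
```

The two "PUT result: OK" lines in the log correspond to separate token requests made for the cmdline probe and the user-data fetch.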
Jul 2 00:23:00.612659 ignition[1193]: Ignition 2.18.0
Jul 2 00:23:00.612675 ignition[1193]: Stage: kargs
Jul 2 00:23:00.614176 ignition[1193]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:00.614194 ignition[1193]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:23:00.614319 ignition[1193]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:23:00.617229 ignition[1193]: PUT result: OK
Jul 2 00:23:00.636781 ignition[1193]: kargs: kargs passed
Jul 2 00:23:00.636900 ignition[1193]: Ignition finished successfully
Jul 2 00:23:00.645560 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:23:00.664113 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:23:00.732108 ignition[1200]: Ignition 2.18.0
Jul 2 00:23:00.732124 ignition[1200]: Stage: disks
Jul 2 00:23:00.733134 ignition[1200]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:00.733150 ignition[1200]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:23:00.733270 ignition[1200]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:23:00.735219 ignition[1200]: PUT result: OK
Jul 2 00:23:00.743375 ignition[1200]: disks: disks passed
Jul 2 00:23:00.743459 ignition[1200]: Ignition finished successfully
Jul 2 00:23:00.744857 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:23:00.747437 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:23:00.750650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:23:00.750826 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:23:00.755935 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:23:00.756113 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:23:00.773944 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:23:00.861222 systemd-fsck[1209]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:23:00.873845 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:23:00.895005 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:23:01.269886 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:23:01.274246 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:23:01.290080 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:23:01.316262 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:23:01.331977 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:23:01.340956 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:23:01.344614 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:23:01.348324 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:23:01.373519 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:23:01.398788 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:23:01.426457 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1228)
Jul 2 00:23:01.432969 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:23:01.433052 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:23:01.433072 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:23:01.443656 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:23:01.455080 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:23:01.907799 systemd-networkd[1175]: eth0: Gained IPv6LL
Jul 2 00:23:01.938795 initrd-setup-root[1252]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:23:01.986675 initrd-setup-root[1259]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:23:02.010135 initrd-setup-root[1266]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:23:02.033743 initrd-setup-root[1273]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:23:02.452514 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:23:02.466821 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:23:02.478839 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:23:02.505800 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:23:02.516837 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:23:02.580712 ignition[1346]: INFO : Ignition 2.18.0
Jul 2 00:23:02.580712 ignition[1346]: INFO : Stage: mount
Jul 2 00:23:02.580712 ignition[1346]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:02.580712 ignition[1346]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:23:02.580712 ignition[1346]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:23:02.591137 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:23:02.595417 ignition[1346]: INFO : PUT result: OK
Jul 2 00:23:02.601195 ignition[1346]: INFO : mount: mount passed
Jul 2 00:23:02.603436 ignition[1346]: INFO : Ignition finished successfully
Jul 2 00:23:02.610218 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:23:02.620273 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:23:02.654110 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:23:02.689430 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1358)
Jul 2 00:23:02.689519 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:23:02.689667 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:23:02.691113 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:23:02.697659 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:23:02.699228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:23:02.734438 ignition[1375]: INFO : Ignition 2.18.0
Jul 2 00:23:02.734438 ignition[1375]: INFO : Stage: files
Jul 2 00:23:02.737343 ignition[1375]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:02.737343 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:23:02.737343 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:23:02.741546 ignition[1375]: INFO : PUT result: OK
Jul 2 00:23:02.746090 ignition[1375]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:23:02.760154 ignition[1375]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:23:02.760154 ignition[1375]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:23:02.793664 ignition[1375]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:23:02.796128 ignition[1375]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:23:02.804844 unknown[1375]: wrote ssh authorized keys file for user: core
Jul 2 00:23:02.807748 ignition[1375]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:23:02.815659 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:23:02.815659 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:23:02.815659 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:23:02.825584 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:23:02.911100 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:23:03.056728 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:23:03.056728 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:23:03.062878 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 00:23:03.507876 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 2 00:23:03.655747 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:23:03.661667 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 00:23:03.920486 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 2 00:23:04.424271 ignition[1375]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:23:04.424271 ignition[1375]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 2 00:23:04.429024 ignition[1375]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:23:04.431512 ignition[1375]: INFO : files: files passed
Jul 2 00:23:04.431512 ignition[1375]: INFO : Ignition finished successfully
Jul 2 00:23:04.450683 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:23:04.465428 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:23:04.469808 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:23:04.472747 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:23:04.472839 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:23:04.496333 initrd-setup-root-after-ignition[1404]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:23:04.496333 initrd-setup-root-after-ignition[1404]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:23:04.500835 initrd-setup-root-after-ignition[1408]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:23:04.504831 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:23:04.505741 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:23:04.520110 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:23:04.588285 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:23:04.588614 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:23:04.594144 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:23:04.596500 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:23:04.599031 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:23:04.614862 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:23:04.645545 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:23:04.660476 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:23:04.678450 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:23:04.678725 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:23:04.684982 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:23:04.687374 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:23:04.689316 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:23:04.698252 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:23:04.704018 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:23:04.713606 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:23:04.713837 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:23:04.719266 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:23:04.719424 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:23:04.723170 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:23:04.725476 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:23:04.728894 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:23:04.732488 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:23:04.734557 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:23:04.735732 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:23:04.740126 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:23:04.752382 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:23:04.760205 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:23:04.760325 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:23:04.764690 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:23:04.765928 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:23:04.769026 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:23:04.770517 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:23:04.776867 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:23:04.777006 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:23:04.793097 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:23:04.794723 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:23:04.795211 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:23:04.806153 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:23:04.807353 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:23:04.807571 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:23:04.810995 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:23:04.811388 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:23:04.833659 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:23:04.835360 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:23:04.842878 ignition[1428]: INFO : Ignition 2.18.0
Jul 2 00:23:04.842878 ignition[1428]: INFO : Stage: umount
Jul 2 00:23:04.847735 ignition[1428]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:04.847735 ignition[1428]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:23:04.847735 ignition[1428]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:23:04.863349 ignition[1428]: INFO : PUT result: OK
Jul 2 00:23:04.863349 ignition[1428]: INFO : umount: umount passed
Jul 2 00:23:04.863349 ignition[1428]: INFO : Ignition finished successfully
Jul 2 00:23:04.857656 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:23:04.857787 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:23:04.859826 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:23:04.859941 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:23:04.861425 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:23:04.861485 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:23:04.862783 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:23:04.862836 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:23:04.868966 systemd[1]: Stopped target network.target - Network.
Jul 2 00:23:04.872979 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:23:04.873053 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:23:04.875597 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:23:04.876822 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:23:04.882688 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:23:04.884849 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:23:04.887315 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:23:04.887410 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:23:04.887452 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:23:04.887580 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:23:04.887613 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:23:04.892967 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:23:04.894156 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:23:04.895184 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:23:04.895257 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:23:04.898351 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:23:04.898929 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:23:04.907554 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:23:04.908417 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:23:04.908536 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:23:04.909834 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:23:04.909933 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:23:04.915802 systemd-networkd[1175]: eth0: DHCPv6 lease lost
Jul 2 00:23:04.920152 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:23:04.920703 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:23:04.926503 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:23:04.926713 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:23:04.935658 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:23:04.935728 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:23:04.959135 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:23:04.962742 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:23:04.964579 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:23:04.970131 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:23:04.970288 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:23:04.973710 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:23:04.973938 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:23:04.977567 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:23:04.977656 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:23:04.980628 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:23:05.017555 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:23:05.017747 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:23:05.033085 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:23:05.033200 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:23:05.036796 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:23:05.036851 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:23:05.038285 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:23:05.038342 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:23:05.044959 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:23:05.045039 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:23:05.048331 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:23:05.048395 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:23:05.062700 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:23:05.066418 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:23:05.066571 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:23:05.071973 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:23:05.072042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:05.072492 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:23:05.072669 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:23:05.094688 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:23:05.096614 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:23:05.100228 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:23:05.108874 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:23:05.159993 systemd[1]: Switching root.
Jul 2 00:23:05.193909 systemd-journald[178]: Journal stopped
Jul 2 00:23:08.460803 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:23:08.461085 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:23:08.461122 kernel: SELinux: policy capability open_perms=1
Jul 2 00:23:08.461148 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:23:08.461167 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:23:08.461186 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:23:08.461208 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:23:08.461226 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:23:08.461243 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:23:08.461269 kernel: audit: type=1403 audit(1719879786.861:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:23:08.461292 systemd[1]: Successfully loaded SELinux policy in 55.372ms.
Jul 2 00:23:08.461323 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.498ms.
Jul 2 00:23:08.461398 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:23:08.461422 systemd[1]: Detected virtualization amazon.
Jul 2 00:23:08.461443 systemd[1]: Detected architecture x86-64.
Jul 2 00:23:08.461462 systemd[1]: Detected first boot.
Jul 2 00:23:08.461481 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:23:08.461501 zram_generator::config[1488]: No configuration found.
Jul 2 00:23:08.461525 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:23:08.461546 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:23:08.461565 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 2 00:23:08.461586 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:23:08.461605 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:23:08.461643 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:23:08.461668 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:23:08.461688 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:23:08.461711 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:23:08.461793 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:23:08.461814 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:23:08.461834 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:23:08.461854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:23:08.461875 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:23:08.461894 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:23:08.461978 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:23:08.462014 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:23:08.462035 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:23:08.462099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:23:08.462124 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:23:08.462144 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:23:08.462164 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:23:08.462184 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:23:08.462204 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:23:08.462228 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:23:08.462248 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:23:08.462267 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:23:08.462288 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:23:08.462352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:23:08.462373 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:23:08.462394 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:23:08.462443 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:23:08.462463 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:23:08.462508 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:23:08.462608 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:23:08.462792 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:08.463211 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:23:08.463239 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:23:08.463256 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:23:08.463273 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:23:08.463292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:23:08.463309 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:23:08.463336 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:23:08.463355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:23:08.463374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:23:08.463393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:23:08.463456 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:23:08.463482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:23:08.463503 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:23:08.463523 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 00:23:08.463542 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 00:23:08.463565 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:23:08.463584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:23:08.463603 kernel: loop: module loaded
Jul 2 00:23:08.463963 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:23:08.463990 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:23:08.464009 kernel: fuse: init (API version 7.39)
Jul 2 00:23:08.464028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:23:08.464048 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:08.464070 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:23:08.464089 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:23:08.464114 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:23:08.464133 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:23:08.464151 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:23:08.464170 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:23:08.464189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:23:08.464207 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:23:08.464226 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:23:08.464246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:23:08.464264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:23:08.464283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:23:08.464302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:23:08.464320 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:23:08.464341 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:23:08.464360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:23:08.464378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:23:08.464440 systemd-journald[1581]: Collecting audit messages is disabled.
Jul 2 00:23:08.464476 systemd-journald[1581]: Journal started
Jul 2 00:23:08.464512 systemd-journald[1581]: Runtime Journal (/run/log/journal/ec23a20b60c7b1fb838c065c1421c7d9) is 4.8M, max 38.6M, 33.7M free.
Jul 2 00:23:08.465644 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:23:08.472148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:23:08.473908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:23:08.476016 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:23:08.497885 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:23:08.528548 kernel: ACPI: bus type drm_connector registered
Jul 2 00:23:08.524283 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:23:08.535888 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:23:08.538457 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:23:08.551975 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:23:08.567094 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:23:08.568661 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:23:08.573172 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:23:08.575802 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:23:08.591945 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:23:08.604878 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:23:08.618870 systemd-journald[1581]: Time spent on flushing to /var/log/journal/ec23a20b60c7b1fb838c065c1421c7d9 is 125.272ms for 946 entries.
Jul 2 00:23:08.618870 systemd-journald[1581]: System Journal (/var/log/journal/ec23a20b60c7b1fb838c065c1421c7d9) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:23:08.765839 systemd-journald[1581]: Received client request to flush runtime journal.
Jul 2 00:23:08.624048 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:23:08.625921 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:23:08.626200 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:23:08.628146 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:23:08.629873 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:23:08.656840 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:23:08.672878 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:23:08.737680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:23:08.765378 systemd-tmpfiles[1633]: ACLs are not supported, ignoring.
Jul 2 00:23:08.765400 systemd-tmpfiles[1633]: ACLs are not supported, ignoring.
Jul 2 00:23:08.769253 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:23:08.783490 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:23:08.797885 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:23:08.812277 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:23:08.824026 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:23:08.863988 udevadm[1657]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:23:08.878271 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:23:08.892990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:23:08.921317 systemd-tmpfiles[1660]: ACLs are not supported, ignoring.
Jul 2 00:23:08.921349 systemd-tmpfiles[1660]: ACLs are not supported, ignoring.
Jul 2 00:23:08.932172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:23:09.887759 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:23:09.898024 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:23:09.938796 systemd-udevd[1666]: Using default interface naming scheme 'v255'.
Jul 2 00:23:10.000375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:23:10.013818 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:23:10.077828 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:23:10.163653 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1675)
Jul 2 00:23:10.172546 (udev-worker)[1679]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:23:10.176745 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 2 00:23:10.182709 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:23:10.367672 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jul 2 00:23:10.373783 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 2 00:23:10.379707 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:23:10.381808 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jul 2 00:23:10.385400 kernel: ACPI: button: Sleep Button [SLPF]
Jul 2 00:23:10.416079 systemd-networkd[1671]: lo: Link UP
Jul 2 00:23:10.416089 systemd-networkd[1671]: lo: Gained carrier
Jul 2 00:23:10.418036 systemd-networkd[1671]: Enumeration completed
Jul 2 00:23:10.418597 systemd-networkd[1671]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:10.418608 systemd-networkd[1671]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:23:10.418792 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:23:10.429541 systemd-networkd[1671]: eth0: Link UP
Jul 2 00:23:10.429543 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:23:10.429754 systemd-networkd[1671]: eth0: Gained carrier
Jul 2 00:23:10.429781 systemd-networkd[1671]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:10.448275 systemd-networkd[1671]: eth0: DHCPv4 address 172.31.16.250/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:23:10.505007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:10.518672 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Jul 2 00:23:10.538777 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1670)
Jul 2 00:23:10.562650 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:23:10.708887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 00:23:10.934084 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:23:10.950685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:11.009402 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:23:11.053180 lvm[1790]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:23:11.094464 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:23:11.097077 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:23:11.110196 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:23:11.119713 lvm[1793]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:23:11.158920 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:23:11.161000 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:23:11.163064 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:23:11.163411 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:23:11.165422 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:23:11.168219 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:23:11.179841 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:23:11.197614 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:23:11.203399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:23:11.214520 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:23:11.230967 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:23:11.241876 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:23:11.245322 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:23:11.260917 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:23:11.294605 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:23:11.296869 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:23:11.298308 kernel: loop0: detected capacity change from 0 to 139904
Jul 2 00:23:11.305664 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:23:11.445750 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:23:11.467648 kernel: loop1: detected capacity change from 0 to 80568
Jul 2 00:23:11.571672 kernel: loop2: detected capacity change from 0 to 209816
Jul 2 00:23:11.632655 kernel: loop3: detected capacity change from 0 to 60984
Jul 2 00:23:11.702251 systemd-networkd[1671]: eth0: Gained IPv6LL
Jul 2 00:23:11.709535 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:23:11.752655 kernel: loop4: detected capacity change from 0 to 139904
Jul 2 00:23:11.781653 kernel: loop5: detected capacity change from 0 to 80568
Jul 2 00:23:11.798888 kernel: loop6: detected capacity change from 0 to 209816
Jul 2 00:23:11.819751 kernel: loop7: detected capacity change from 0 to 60984
Jul 2 00:23:11.835409 (sd-merge)[1816]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 2 00:23:11.836421 (sd-merge)[1816]: Merged extensions into '/usr'.
Jul 2 00:23:11.841426 systemd[1]: Reloading requested from client PID 1801 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:23:11.841446 systemd[1]: Reloading...
Jul 2 00:23:11.961654 zram_generator::config[1839]: No configuration found.
Jul 2 00:23:12.196345 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:12.299055 systemd[1]: Reloading finished in 457 ms.
Jul 2 00:23:12.318941 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:23:12.331875 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:23:12.335821 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:23:12.350859 systemd[1]: Reloading requested from client PID 1895 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:23:12.352234 systemd[1]: Reloading...
Jul 2 00:23:12.391669 systemd-tmpfiles[1896]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:23:12.392676 systemd-tmpfiles[1896]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:23:12.394118 systemd-tmpfiles[1896]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:23:12.394702 systemd-tmpfiles[1896]: ACLs are not supported, ignoring.
Jul 2 00:23:12.394975 systemd-tmpfiles[1896]: ACLs are not supported, ignoring.
Jul 2 00:23:12.398950 systemd-tmpfiles[1896]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:23:12.399098 systemd-tmpfiles[1896]: Skipping /boot
Jul 2 00:23:12.439812 systemd-tmpfiles[1896]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:23:12.440916 systemd-tmpfiles[1896]: Skipping /boot
Jul 2 00:23:12.446710 zram_generator::config[1920]: No configuration found.
Jul 2 00:23:12.675530 ldconfig[1797]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:23:12.894753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:13.066302 systemd[1]: Reloading finished in 713 ms.
Jul 2 00:23:13.089227 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:23:13.098844 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:23:13.114916 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:13.128183 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:23:13.138896 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:23:13.156940 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:23:13.206672 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:23:13.240422 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:13.240931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:23:13.250739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:23:13.269408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:23:13.284896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:23:13.286837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:23:13.288808 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:13.296922 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:23:13.302503 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:23:13.303086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:23:13.312021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:23:13.312274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:23:13.315784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:23:13.316044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:23:13.339005 augenrules[2016]: No rules
Jul 2 00:23:13.345577 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:13.353315 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:23:13.373607 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:13.374684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:23:13.383609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:23:13.398284 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:23:13.410938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:23:13.434013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:23:13.435653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:23:13.435965 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:23:13.447540 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:23:13.448706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:13.450506 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:23:13.460003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:23:13.460243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:23:13.464429 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:23:13.465738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:23:13.470055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:23:13.470302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:23:13.487375 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:23:13.489561 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:23:13.493871 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:23:13.507317 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:23:13.507420 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:23:13.507456 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:23:13.518915 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:23:13.523231 systemd-resolved[1991]: Positive Trust Anchors:
Jul 2 00:23:13.523250 systemd-resolved[1991]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:23:13.523304 systemd-resolved[1991]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:23:13.528187 systemd-resolved[1991]: Defaulting to hostname 'linux'.
Jul 2 00:23:13.531072 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:23:13.532508 systemd[1]: Reached target network.target - Network.
Jul 2 00:23:13.533474 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:23:13.534755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:23:13.536561 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:23:13.538039 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:23:13.540010 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:23:13.541873 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:23:13.543265 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:23:13.544825 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:23:13.547225 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:23:13.547980 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:23:13.549119 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:23:13.553109 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:23:13.558389 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:23:13.564073 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:23:13.566478 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:23:13.568137 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:23:13.569404 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:23:13.570810 systemd[1]: System is tainted: cgroupsv1
Jul 2 00:23:13.570875 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:23:13.570910 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:23:13.578857 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:23:13.596944 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 00:23:13.605297 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:23:13.620866 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:23:13.629667 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:23:13.631601 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:23:13.649882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:13.660971 jq[2055]: false
Jul 2 00:23:13.714027 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:23:13.722858 systemd[1]: Started ntpd.service - Network Time Service.
Jul 2 00:23:13.736881 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:23:13.751956 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:23:13.806294 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 2 00:23:13.809169 extend-filesystems[2056]: Found loop4
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found loop5
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found loop6
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found loop7
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found nvme0n1
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found nvme0n1p1
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found nvme0n1p2
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found nvme0n1p3
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found usr
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found nvme0n1p4
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found nvme0n1p6
Jul 2 00:23:13.817094 extend-filesystems[2056]: Found nvme0n1p7
Jul 2 00:23:13.835983 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:23:13.852190 extend-filesystems[2056]: Found nvme0n1p9
Jul 2 00:23:13.852190 extend-filesystems[2056]: Checking size of /dev/nvme0n1p9
Jul 2 00:23:13.862646 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:23:13.915014 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:23:13.917085 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:23:13.934405 dbus-daemon[2053]: [system] SELinux support is enabled
Jul 2 00:23:13.933127 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:23:13.964665 extend-filesystems[2056]: Resized partition /dev/nvme0n1p9
Jul 2 00:23:13.953133 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:23:13.966201 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:23:13.997076 extend-filesystems[2091]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:23:14.016833 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 2 00:23:13.997053 dbus-daemon[2053]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1671 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 2 00:23:14.016995 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting
Jul 2 00:23:14.016995 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 2 00:23:14.016995 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: ----------------------------------------------------
Jul 2 00:23:14.016995 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: ntp-4 is maintained by Network Time Foundation,
Jul 2 00:23:14.016995 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 2 00:23:14.016995 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: corporation. Support and training for ntp-4 are
Jul 2 00:23:14.016995 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: available at https://www.nwtime.org/support
Jul 2 00:23:14.016995 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: ----------------------------------------------------
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.014 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.018 INFO Fetch successful
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.018 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.019 INFO Fetch successful
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.020 INFO Fetch successful
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.020 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.026 INFO Fetch successful
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.026 INFO Fetch failed with 404: resource not found
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.028 INFO Fetch successful
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.028 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.029 INFO Fetch successful
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.029 INFO Fetch successful
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.031 INFO Fetch successful
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.031 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 2 00:23:14.045114 coreos-metadata[2052]: Jul 02 00:23:14.032 INFO Fetch successful
Jul 2 00:23:14.013712 ntpd[2062]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting
Jul 2 00:23:14.021514 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: proto: precision = 0.079 usec (-24)
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: basedate set to 2024-06-19
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: gps base set to 2024-06-23 (week 2320)
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: Listen and drop on 0 v6wildcard [::]:123
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: Listen normally on 2 lo 127.0.0.1:123
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: Listen normally on 3 eth0 172.31.16.250:123
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: Listen normally on 4 lo [::1]:123
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: Listen normally on 5 eth0 [fe80::4a0:47ff:fe4b:de99%2]:123
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: Listening on routing socket on fd #22 for interface updates
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:23:14.102705 ntpd[2062]: 2 Jul 00:23:14 ntpd[2062]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:23:14.013740 ntpd[2062]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 2 00:23:14.108854 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 2 00:23:14.046056 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:23:14.013750 ntpd[2062]: ----------------------------------------------------
Jul 2 00:23:14.067167 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:23:14.161736 jq[2088]: true
Jul 2 00:23:14.013761 ntpd[2062]: ntp-4 is maintained by Network Time Foundation,
Jul 2 00:23:14.070015 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:23:14.013770 ntpd[2062]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 2 00:23:14.079548 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:23:14.013781 ntpd[2062]: corporation. Support and training for ntp-4 are
Jul 2 00:23:14.104222 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:23:14.013792 ntpd[2062]: available at https://www.nwtime.org/support
Jul 2 00:23:14.116717 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:23:14.013801 ntpd[2062]: ----------------------------------------------------
Jul 2 00:23:14.017720 ntpd[2062]: proto: precision = 0.079 usec (-24)
Jul 2 00:23:14.024846 ntpd[2062]: basedate set to 2024-06-19
Jul 2 00:23:14.024868 ntpd[2062]: gps base set to 2024-06-23 (week 2320)
Jul 2 00:23:14.037403 ntpd[2062]: Listen and drop on 0 v6wildcard [::]:123
Jul 2 00:23:14.037475 ntpd[2062]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 2 00:23:14.037719 ntpd[2062]: Listen normally on 2 lo 127.0.0.1:123
Jul 2 00:23:14.037758 ntpd[2062]: Listen normally on 3 eth0 172.31.16.250:123
Jul 2 00:23:14.037798 ntpd[2062]: Listen normally on 4 lo [::1]:123
Jul 2 00:23:14.037960 ntpd[2062]: Listen normally on 5 eth0 [fe80::4a0:47ff:fe4b:de99%2]:123
Jul 2 00:23:14.038010 ntpd[2062]: Listening on routing socket on fd #22 for interface updates
Jul 2 00:23:14.094752 ntpd[2062]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:23:14.094792 ntpd[2062]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 2 00:23:14.171952 extend-filesystems[2091]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 2 00:23:14.171952 extend-filesystems[2091]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:23:14.171952 extend-filesystems[2091]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 2 00:23:14.183494 extend-filesystems[2056]: Resized filesystem in /dev/nvme0n1p9
Jul 2 00:23:14.188654 update_engine[2086]: I0702 00:23:14.184849 2086 main.cc:92] Flatcar Update Engine starting
Jul 2 00:23:14.188516 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:23:14.189126 (ntainerd)[2110]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:23:14.192448 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:23:14.195934 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 00:23:14.233273 update_engine[2086]: I0702 00:23:14.231832 2086 update_check_scheduler.cc:74] Next update check in 6m34s
Jul 2 00:23:14.236503 jq[2108]: true
Jul 2 00:23:14.244488 dbus-daemon[2053]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 2 00:23:14.252701 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 2 00:23:14.269820 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:23:14.281035 tar[2103]: linux-amd64/helm
Jul 2 00:23:14.288938 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jul 2 00:23:14.290476 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:23:14.290594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:23:14.290647 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:23:14.302062 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 2 00:23:14.305584 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:23:14.305633 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:23:14.308424 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:23:14.317839 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:23:14.372343 systemd-logind[2080]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:23:14.372378 systemd-logind[2080]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jul 2 00:23:14.372402 systemd-logind[2080]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:23:14.373794 systemd-logind[2080]: New seat seat0.
Jul 2 00:23:14.391635 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:23:14.453271 bash[2166]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:23:14.459605 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:23:14.479084 systemd[1]: Starting sshkeys.service...
Jul 2 00:23:14.598959 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 2 00:23:14.612121 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 2 00:23:14.645769 dbus-daemon[2053]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 2 00:23:14.646158 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 2 00:23:14.650129 dbus-daemon[2053]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2149 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 2 00:23:14.663048 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 2 00:23:14.702674 amazon-ssm-agent[2145]: Initializing new seelog logger
Jul 2 00:23:14.703188 amazon-ssm-agent[2145]: New Seelog Logger Creation Complete
Jul 2 00:23:14.703188 amazon-ssm-agent[2145]: 2024/07/02 00:23:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:14.703188 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:14.708658 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2172)
Jul 2 00:23:14.709374 amazon-ssm-agent[2145]: 2024/07/02 00:23:14 processing appconfig overrides
Jul 2 00:23:14.714427 amazon-ssm-agent[2145]: 2024/07/02 00:23:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:14.714427 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:14.714589 amazon-ssm-agent[2145]: 2024/07/02 00:23:14 processing appconfig overrides
Jul 2 00:23:14.715006 amazon-ssm-agent[2145]: 2024/07/02 00:23:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:14.715006 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:14.715113 amazon-ssm-agent[2145]: 2024/07/02 00:23:14 processing appconfig overrides
Jul 2 00:23:14.715565 amazon-ssm-agent[2145]: 2024-07-02 00:23:14 INFO Proxy environment variables:
Jul 2 00:23:14.717214 coreos-metadata[2184]: Jul 02 00:23:14.717 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 2 00:23:14.718518 coreos-metadata[2184]: Jul 02 00:23:14.718 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jul 2 00:23:14.722861 coreos-metadata[2184]: Jul 02 00:23:14.720 INFO Fetch successful
Jul 2 00:23:14.722861 coreos-metadata[2184]: Jul 02 00:23:14.721 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 00:23:14.722861 coreos-metadata[2184]: Jul 02 00:23:14.722 INFO Fetch successful
Jul 2 00:23:14.727210 unknown[2184]: wrote ssh authorized keys file for user: core
Jul 2 00:23:14.735713 polkitd[2191]: Started polkitd version 121
Jul 2 00:23:14.746659 amazon-ssm-agent[2145]: 2024/07/02 00:23:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:14.746659 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:23:14.746659 amazon-ssm-agent[2145]: 2024/07/02 00:23:14 processing appconfig overrides
Jul 2 00:23:14.789637 locksmithd[2151]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:23:14.793294 polkitd[2191]: Loading rules from directory /etc/polkit-1/rules.d
Jul 2 00:23:14.798548 polkitd[2191]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 2 00:23:14.799201 polkitd[2191]: Finished loading, compiling and executing 2 rules
Jul 2 00:23:14.801338 update-ssh-keys[2202]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:23:14.809960 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 00:23:14.830355 dbus-daemon[2053]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 2 00:23:14.832926 amazon-ssm-agent[2145]: 2024-07-02 00:23:14 INFO https_proxy:
Jul 2 00:23:14.824393 systemd[1]: Finished sshkeys.service.
Jul 2 00:23:14.836369 systemd[1]: Started polkit.service - Authorization Manager.
Jul 2 00:23:14.842786 polkitd[2191]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 2 00:23:14.883543 systemd-resolved[1991]: System hostname changed to 'ip-172-31-16-250'.
Jul 2 00:23:14.884025 systemd-hostnamed[2149]: Hostname set to (transient)
Jul 2 00:23:14.933096 amazon-ssm-agent[2145]: 2024-07-02 00:23:14 INFO http_proxy:
Jul 2 00:23:15.037543 amazon-ssm-agent[2145]: 2024-07-02 00:23:14 INFO no_proxy:
Jul 2 00:23:15.145641 amazon-ssm-agent[2145]: 2024-07-02 00:23:14 INFO Checking if agent identity type OnPrem can be assumed
Jul 2 00:23:15.241116 amazon-ssm-agent[2145]: 2024-07-02 00:23:14 INFO Checking if agent identity type EC2 can be assumed
Jul 2 00:23:15.311652 containerd[2110]: time="2024-07-02T00:23:15.311117484Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:23:15.339779 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO Agent will take identity from EC2
Jul 2 00:23:15.442661 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:23:15.538571 containerd[2110]: time="2024-07-02T00:23:15.538459775Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:23:15.542780 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 00:23:15.552715 containerd[2110]: time="2024-07-02T00:23:15.551558283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:15.567648 containerd[2110]: time="2024-07-02T00:23:15.567421432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:23:15.568065 containerd[2110]: time="2024-07-02T00:23:15.568034299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:15.590213 containerd[2110]: time="2024-07-02T00:23:15.580711273Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:23:15.590213 containerd[2110]: time="2024-07-02T00:23:15.585824466Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:23:15.590213 containerd[2110]: time="2024-07-02T00:23:15.586514778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:15.590213 containerd[2110]: time="2024-07-02T00:23:15.586650976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:23:15.590213 containerd[2110]: time="2024-07-02T00:23:15.586674524Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:15.590213 containerd[2110]: time="2024-07-02T00:23:15.586905038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:15.590213 containerd[2110]: time="2024-07-02T00:23:15.590017629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:15.590213 containerd[2110]: time="2024-07-02T00:23:15.590072346Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:23:15.590213 containerd[2110]: time="2024-07-02T00:23:15.590088891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:23:15.594320 containerd[2110]: time="2024-07-02T00:23:15.593963899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:23:15.594320 containerd[2110]: time="2024-07-02T00:23:15.594022698Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:23:15.594320 containerd[2110]: time="2024-07-02T00:23:15.594243822Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:23:15.594320 containerd[2110]: time="2024-07-02T00:23:15.594272444Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.626474909Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.626530921Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.626553955Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.626735212Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.626758313Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.626774883Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.626807494Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.627008062Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.627046199Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.627068511Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.627090729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.627126261Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.627163555Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:23:15.627505 containerd[2110]: time="2024-07-02T00:23:15.627199268Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:23:15.628165 containerd[2110]: time="2024-07-02T00:23:15.627220055Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:23:15.628165 containerd[2110]: time="2024-07-02T00:23:15.627241280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:23:15.628165 containerd[2110]: time="2024-07-02T00:23:15.627276511Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:23:15.628165 containerd[2110]: time="2024-07-02T00:23:15.627297339Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:23:15.628165 containerd[2110]: time="2024-07-02T00:23:15.627314821Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:23:15.628165 containerd[2110]: time="2024-07-02T00:23:15.627462203Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.642767159Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.642832749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.642854866Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.642890361Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..."
type=io.containerd.internal.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.642984695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.643004774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.643023480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.643042149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.643059873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.643077786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.643175962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.643195655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.643216895Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:23:15.647651 containerd[2110]: time="2024-07-02T00:23:15.643407093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.648320 containerd[2110]: time="2024-07-02T00:23:15.643428823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jul 2 00:23:15.648320 containerd[2110]: time="2024-07-02T00:23:15.643450656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.648320 containerd[2110]: time="2024-07-02T00:23:15.643470278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.648320 containerd[2110]: time="2024-07-02T00:23:15.643490509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.648320 containerd[2110]: time="2024-07-02T00:23:15.643510603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.648320 containerd[2110]: time="2024-07-02T00:23:15.643529516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:23:15.648320 containerd[2110]: time="2024-07-02T00:23:15.643546679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:23:15.667461 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:23:15.667552 containerd[2110]: time="2024-07-02T00:23:15.659226329Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:23:15.667552 containerd[2110]: time="2024-07-02T00:23:15.659355610Z" level=info msg="Connect containerd service" Jul 2 00:23:15.667552 containerd[2110]: time="2024-07-02T00:23:15.659426161Z" level=info msg="using legacy CRI server" Jul 2 00:23:15.667552 containerd[2110]: time="2024-07-02T00:23:15.659437023Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:23:15.667552 containerd[2110]: time="2024-07-02T00:23:15.659728956Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.672984873Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673080668Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673129235Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673145784Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673163345Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673167500Z" level=info msg="Start subscribing containerd event" Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673244765Z" level=info msg="Start recovering state" Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673332974Z" level=info msg="Start event monitor" Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673348754Z" level=info msg="Start snapshots syncer" Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673362870Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:23:15.673813 containerd[2110]: time="2024-07-02T00:23:15.673373125Z" level=info msg="Start streaming server" Jul 2 00:23:15.680206 containerd[2110]: time="2024-07-02T00:23:15.675660973Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:23:15.680206 containerd[2110]: time="2024-07-02T00:23:15.675809055Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:23:15.678444 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 2 00:23:15.686648 containerd[2110]: time="2024-07-02T00:23:15.685909538Z" level=info msg="containerd successfully booted in 0.377426s"
Jul 2 00:23:15.764057 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jul 2 00:23:15.865722 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jul 2 00:23:15.964413 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [amazon-ssm-agent] Starting Core Agent
Jul 2 00:23:16.008875 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jul 2 00:23:16.008875 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [Registrar] Starting registrar module
Jul 2 00:23:16.009165 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jul 2 00:23:16.009165 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [EC2Identity] EC2 registration was successful.
Jul 2 00:23:16.009165 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [CredentialRefresher] credentialRefresher has started
Jul 2 00:23:16.009165 amazon-ssm-agent[2145]: 2024-07-02 00:23:15 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 2 00:23:16.009165 amazon-ssm-agent[2145]: 2024-07-02 00:23:16 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 2 00:23:16.065534 amazon-ssm-agent[2145]: 2024-07-02 00:23:16 INFO [CredentialRefresher] Next credential rotation will be in 30.6749809748 minutes
Jul 2 00:23:16.168393 tar[2103]: linux-amd64/LICENSE
Jul 2 00:23:16.169200 tar[2103]: linux-amd64/README.md
Jul 2 00:23:16.189304 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 00:23:16.535264 sshd_keygen[2097]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:23:16.592278 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:23:16.604008 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:23:16.612984 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:23:16.613330 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:23:16.624002 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:23:16.646390 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:23:16.658748 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:23:16.668365 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:23:16.669777 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:23:16.719860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:16.722255 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:23:16.723818 systemd[1]: Startup finished in 11.939s (kernel) + 9.914s (userspace) = 21.854s.
Jul 2 00:23:16.838292 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:17.043090 amazon-ssm-agent[2145]: 2024-07-02 00:23:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 2 00:23:17.156120 amazon-ssm-agent[2145]: 2024-07-02 00:23:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2358) started
Jul 2 00:23:17.264227 amazon-ssm-agent[2145]: 2024-07-02 00:23:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 2 00:23:17.782947 kubelet[2348]: E0702 00:23:17.782852 2348 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:17.786750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:17.787218 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:21.805919 systemd-resolved[1991]: Clock change detected. Flushing caches.
Jul 2 00:23:22.074867 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:23:22.080927 systemd[1]: Started sshd@0-172.31.16.250:22-147.75.109.163:52716.service - OpenSSH per-connection server daemon (147.75.109.163:52716).
Jul 2 00:23:22.272739 sshd[2372]: Accepted publickey for core from 147.75.109.163 port 52716 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:22.275376 sshd[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:22.288827 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:23:22.295209 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:23:22.302771 systemd-logind[2080]: New session 1 of user core.
Jul 2 00:23:22.322359 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:23:22.335839 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:23:22.355654 (systemd)[2378]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:22.528285 systemd[2378]: Queued start job for default target default.target.
Jul 2 00:23:22.528906 systemd[2378]: Created slice app.slice - User Application Slice.
Jul 2 00:23:22.528936 systemd[2378]: Reached target paths.target - Paths.
Jul 2 00:23:22.528954 systemd[2378]: Reached target timers.target - Timers.
Jul 2 00:23:22.533836 systemd[2378]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:23:22.585321 systemd[2378]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:23:22.585410 systemd[2378]: Reached target sockets.target - Sockets.
Jul 2 00:23:22.585429 systemd[2378]: Reached target basic.target - Basic System.
Jul 2 00:23:22.585489 systemd[2378]: Reached target default.target - Main User Target.
Jul 2 00:23:22.585537 systemd[2378]: Startup finished in 221ms.
Jul 2 00:23:22.586109 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:23:22.602232 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:23:22.758559 systemd[1]: Started sshd@1-172.31.16.250:22-147.75.109.163:42118.service - OpenSSH per-connection server daemon (147.75.109.163:42118).
Jul 2 00:23:22.940945 sshd[2390]: Accepted publickey for core from 147.75.109.163 port 42118 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:22.942647 sshd[2390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:22.950183 systemd-logind[2080]: New session 2 of user core.
Jul 2 00:23:22.961494 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:23:23.100793 sshd[2390]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:23.120409 systemd[1]: sshd@1-172.31.16.250:22-147.75.109.163:42118.service: Deactivated successfully.
Jul 2 00:23:23.128240 systemd-logind[2080]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:23:23.128830 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:23:23.139184 systemd[1]: Started sshd@2-172.31.16.250:22-147.75.109.163:42128.service - OpenSSH per-connection server daemon (147.75.109.163:42128).
Jul 2 00:23:23.141407 systemd-logind[2080]: Removed session 2.
Jul 2 00:23:23.308089 sshd[2398]: Accepted publickey for core from 147.75.109.163 port 42128 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:23.312008 sshd[2398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:23.318563 systemd-logind[2080]: New session 3 of user core.
Jul 2 00:23:23.325176 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:23:23.443621 sshd[2398]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:23.449389 systemd[1]: sshd@2-172.31.16.250:22-147.75.109.163:42128.service: Deactivated successfully.
Jul 2 00:23:23.454258 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 00:23:23.455370 systemd-logind[2080]: Session 3 logged out. Waiting for processes to exit.
Jul 2 00:23:23.456473 systemd-logind[2080]: Removed session 3.
Jul 2 00:23:23.474856 systemd[1]: Started sshd@3-172.31.16.250:22-147.75.109.163:42142.service - OpenSSH per-connection server daemon (147.75.109.163:42142).
Jul 2 00:23:23.665881 sshd[2406]: Accepted publickey for core from 147.75.109.163 port 42142 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:23.667567 sshd[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:23.680367 systemd-logind[2080]: New session 4 of user core.
Jul 2 00:23:23.691089 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:23:23.829503 sshd[2406]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:23.835733 systemd[1]: sshd@3-172.31.16.250:22-147.75.109.163:42142.service: Deactivated successfully.
Jul 2 00:23:23.845541 systemd-logind[2080]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:23:23.846128 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:23:23.864346 systemd[1]: Started sshd@4-172.31.16.250:22-147.75.109.163:42146.service - OpenSSH per-connection server daemon (147.75.109.163:42146).
Jul 2 00:23:23.865720 systemd-logind[2080]: Removed session 4.
Jul 2 00:23:24.048605 sshd[2414]: Accepted publickey for core from 147.75.109.163 port 42146 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:24.050408 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:24.061480 systemd-logind[2080]: New session 5 of user core.
Jul 2 00:23:24.068169 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:23:24.217482 sudo[2418]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:23:24.218086 sudo[2418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:24.233322 sudo[2418]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:24.256413 sshd[2414]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:24.275508 systemd[1]: sshd@4-172.31.16.250:22-147.75.109.163:42146.service: Deactivated successfully.
Jul 2 00:23:24.293996 systemd-logind[2080]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:23:24.294597 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:23:24.312344 systemd[1]: Started sshd@5-172.31.16.250:22-147.75.109.163:42162.service - OpenSSH per-connection server daemon (147.75.109.163:42162).
Jul 2 00:23:24.315849 systemd-logind[2080]: Removed session 5.
Jul 2 00:23:24.483939 sshd[2423]: Accepted publickey for core from 147.75.109.163 port 42162 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:24.485560 sshd[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:24.495092 systemd-logind[2080]: New session 6 of user core.
Jul 2 00:23:24.503325 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:23:24.607789 sudo[2428]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:23:24.608341 sudo[2428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:24.614652 sudo[2428]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:24.622553 sudo[2427]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:23:24.623302 sudo[2427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:24.654198 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:24.671533 auditctl[2431]: No rules
Jul 2 00:23:24.672075 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:23:24.672582 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:24.683956 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:24.743344 augenrules[2450]: No rules
Jul 2 00:23:24.745911 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:24.752593 sudo[2427]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:24.776734 sshd[2423]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:24.786039 systemd[1]: sshd@5-172.31.16.250:22-147.75.109.163:42162.service: Deactivated successfully.
Jul 2 00:23:24.786768 systemd-logind[2080]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:23:24.790867 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:23:24.792299 systemd-logind[2080]: Removed session 6.
Jul 2 00:23:24.806215 systemd[1]: Started sshd@6-172.31.16.250:22-147.75.109.163:42178.service - OpenSSH per-connection server daemon (147.75.109.163:42178).
Jul 2 00:23:24.977946 sshd[2459]: Accepted publickey for core from 147.75.109.163 port 42178 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:23:24.980454 sshd[2459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:24.997350 systemd-logind[2080]: New session 7 of user core.
Jul 2 00:23:25.009031 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:23:25.135997 sudo[2463]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:23:25.136488 sudo[2463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:25.442130 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:23:25.442316 (dockerd)[2473]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:23:26.087639 dockerd[2473]: time="2024-07-02T00:23:26.087576394Z" level=info msg="Starting up"
Jul 2 00:23:27.210965 dockerd[2473]: time="2024-07-02T00:23:27.210792629Z" level=info msg="Loading containers: start."
Jul 2 00:23:27.389342 kernel: Initializing XFRM netlink socket
Jul 2 00:23:27.462680 (udev-worker)[2484]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:23:27.605933 systemd-networkd[1671]: docker0: Link UP
Jul 2 00:23:27.624760 dockerd[2473]: time="2024-07-02T00:23:27.624680986Z" level=info msg="Loading containers: done."
Jul 2 00:23:27.814886 dockerd[2473]: time="2024-07-02T00:23:27.814678186Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:23:27.815194 dockerd[2473]: time="2024-07-02T00:23:27.815127642Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:23:27.815400 dockerd[2473]: time="2024-07-02T00:23:27.815375044Z" level=info msg="Daemon has completed initialization"
Jul 2 00:23:27.871715 dockerd[2473]: time="2024-07-02T00:23:27.870372604Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:23:27.875156 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:23:28.708968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:23:28.720837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:29.150427 containerd[2110]: time="2024-07-02T00:23:29.150065766Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jul 2 00:23:29.450913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:29.463531 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:29.560270 kubelet[2617]: E0702 00:23:29.560171 2617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:29.564640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:29.565011 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:29.905092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724890516.mount: Deactivated successfully.
Jul 2 00:23:32.346888 containerd[2110]: time="2024-07-02T00:23:32.346840262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:32.349744 containerd[2110]: time="2024-07-02T00:23:32.349486545Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178"
Jul 2 00:23:32.352126 containerd[2110]: time="2024-07-02T00:23:32.351632791Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:32.355362 containerd[2110]: time="2024-07-02T00:23:32.355300114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:32.356508 containerd[2110]: time="2024-07-02T00:23:32.356467613Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.206359863s"
Jul 2 00:23:32.356808 containerd[2110]: time="2024-07-02T00:23:32.356633552Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\""
Jul 2 00:23:32.385354 containerd[2110]: time="2024-07-02T00:23:32.385305081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jul 2 00:23:35.543464 containerd[2110]: time="2024-07-02T00:23:35.543410853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:35.545882 containerd[2110]: time="2024-07-02T00:23:35.545824863Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491"
Jul 2 00:23:35.549438 containerd[2110]: time="2024-07-02T00:23:35.547536482Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:35.552510 containerd[2110]: time="2024-07-02T00:23:35.551265403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:35.552510 containerd[2110]: time="2024-07-02T00:23:35.552378312Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 3.167028729s"
Jul 2 00:23:35.552510 containerd[2110]: time="2024-07-02T00:23:35.552419815Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\""
Jul 2 00:23:35.579880 containerd[2110]: time="2024-07-02T00:23:35.579769030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 00:23:37.617001 containerd[2110]: time="2024-07-02T00:23:37.616948145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:37.618978 containerd[2110]: time="2024-07-02T00:23:37.618772451Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505"
Jul 2 00:23:37.622560 containerd[2110]: time="2024-07-02T00:23:37.620512192Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:37.624718 containerd[2110]: time="2024-07-02T00:23:37.624269330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:37.626769 containerd[2110]: time="2024-07-02T00:23:37.625964745Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 2.04614973s"
Jul 2 00:23:37.626769 containerd[2110]: time="2024-07-02T00:23:37.626011524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jul 2 00:23:37.654365 containerd[2110]: time="2024-07-02T00:23:37.654327730Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 00:23:39.405034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330032197.mount: Deactivated successfully.
Jul 2 00:23:39.713422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:23:39.723610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:40.256715 containerd[2110]: time="2024-07-02T00:23:40.255638579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:40.259628 containerd[2110]: time="2024-07-02T00:23:40.259344352Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419"
Jul 2 00:23:40.266548 containerd[2110]: time="2024-07-02T00:23:40.266465000Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:40.285907 containerd[2110]: time="2024-07-02T00:23:40.285823147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:40.291948 containerd[2110]: time="2024-07-02T00:23:40.291317074Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.636925298s"
Jul 2 00:23:40.291948 containerd[2110]: time="2024-07-02T00:23:40.291393494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jul 2 00:23:40.361046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:40.383838 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:40.400971 containerd[2110]: time="2024-07-02T00:23:40.400533501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:23:40.482162 kubelet[2725]: E0702 00:23:40.482081 2725 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:40.485235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:40.485615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:40.977771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380317449.mount: Deactivated successfully.
Jul 2 00:23:40.987640 containerd[2110]: time="2024-07-02T00:23:40.987591457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:40.989043 containerd[2110]: time="2024-07-02T00:23:40.988968373Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jul 2 00:23:40.992634 containerd[2110]: time="2024-07-02T00:23:40.990885319Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:40.995847 containerd[2110]: time="2024-07-02T00:23:40.994201754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:40.998246 containerd[2110]: time="2024-07-02T00:23:40.998156315Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 596.668563ms"
Jul 2 00:23:40.998368 containerd[2110]: time="2024-07-02T00:23:40.998258280Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:23:41.028124 containerd[2110]: time="2024-07-02T00:23:41.028085433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:23:41.700329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663282013.mount: Deactivated successfully.
Jul 2 00:23:45.238082 containerd[2110]: time="2024-07-02T00:23:45.238024340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:45.239592 containerd[2110]: time="2024-07-02T00:23:45.239379735Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jul 2 00:23:45.241718 containerd[2110]: time="2024-07-02T00:23:45.241547375Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:45.249137 containerd[2110]: time="2024-07-02T00:23:45.249086786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:45.250604 containerd[2110]: time="2024-07-02T00:23:45.250412577Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.222278847s"
Jul 2 00:23:45.250604 containerd[2110]: time="2024-07-02T00:23:45.250475693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 00:23:45.282313 containerd[2110]: time="2024-07-02T00:23:45.282274539Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 00:23:45.710408 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 2 00:23:45.858423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068957301.mount: Deactivated successfully.
Jul 2 00:23:46.860608 containerd[2110]: time="2024-07-02T00:23:46.860551311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:46.890451 containerd[2110]: time="2024-07-02T00:23:46.890198506Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Jul 2 00:23:46.904526 containerd[2110]: time="2024-07-02T00:23:46.904289305Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:46.941744 containerd[2110]: time="2024-07-02T00:23:46.941648681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:46.943338 containerd[2110]: time="2024-07-02T00:23:46.943086485Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.660770945s"
Jul 2 00:23:46.943338 containerd[2110]: time="2024-07-02T00:23:46.943134467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 00:23:50.708478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:23:50.727498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:51.380679 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:23:51.380951 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:23:51.382178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:51.406639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:51.433199 systemd[1]: Reloading requested from client PID 2876 ('systemctl') (unit session-7.scope)...
Jul 2 00:23:51.433519 systemd[1]: Reloading...
Jul 2 00:23:51.552823 zram_generator::config[2911]: No configuration found.
Jul 2 00:23:51.830870 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:51.959542 systemd[1]: Reloading finished in 525 ms.
Jul 2 00:23:52.010651 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:23:52.010890 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:23:52.012124 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:52.027827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:53.050955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:53.064307 (kubelet)[2981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:23:53.141494 kubelet[2981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:23:53.142776 kubelet[2981]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:23:53.142776 kubelet[2981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:23:53.142776 kubelet[2981]: I0702 00:23:53.142028 2981 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:23:53.602816 kubelet[2981]: I0702 00:23:53.602776 2981 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 00:23:53.602816 kubelet[2981]: I0702 00:23:53.602816 2981 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:23:53.603106 kubelet[2981]: I0702 00:23:53.603085 2981 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 00:23:53.636386 kubelet[2981]: E0702 00:23:53.636349 2981 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:53.636616 kubelet[2981]: I0702 00:23:53.636433 2981 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:23:53.660187 kubelet[2981]: I0702 00:23:53.660155 2981 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:23:53.662111 kubelet[2981]: I0702 00:23:53.662077 2981 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:23:53.662323 kubelet[2981]: I0702 00:23:53.662301 2981 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:23:53.663056 kubelet[2981]: I0702 00:23:53.663027 2981 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:23:53.663056 kubelet[2981]: I0702 00:23:53.663056 2981 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:23:53.664214 kubelet[2981]: I0702 00:23:53.664186 2981 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:23:53.666462 kubelet[2981]: I0702 00:23:53.666436 2981 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 00:23:53.666462 kubelet[2981]: I0702 00:23:53.666466 2981 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:23:53.666588 kubelet[2981]: I0702 00:23:53.666501 2981 kubelet.go:309] "Adding apiserver pod source"
Jul 2 00:23:53.666588 kubelet[2981]: I0702 00:23:53.666522 2981 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:23:53.668912 kubelet[2981]: W0702 00:23:53.668467 2981 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.16.250:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:53.668912 kubelet[2981]: E0702 00:23:53.668532 2981 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.250:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:53.668912 kubelet[2981]: W0702 00:23:53.668792 2981 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.16.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-250&limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:53.668912 kubelet[2981]: E0702 00:23:53.668841 2981 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-250&limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:53.669286 kubelet[2981]: I0702 00:23:53.669124 2981 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:23:53.675428 kubelet[2981]: W0702 00:23:53.675406 2981 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:23:53.676717 kubelet[2981]: I0702 00:23:53.676471 2981 server.go:1232] "Started kubelet"
Jul 2 00:23:53.676717 kubelet[2981]: I0702 00:23:53.676611 2981 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:23:53.680476 kubelet[2981]: I0702 00:23:53.680453 2981 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 00:23:53.681087 kubelet[2981]: I0702 00:23:53.681061 2981 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 00:23:53.683086 kubelet[2981]: I0702 00:23:53.682019 2981 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:23:53.683086 kubelet[2981]: E0702 00:23:53.682561 2981 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-16-250.17de3d98aea47572", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-16-250", UID:"ip-172-31-16-250", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-16-250"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 53, 676445042, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 53, 676445042, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-16-250"}': 'Post "https://172.31.16.250:6443/api/v1/namespaces/default/events": dial tcp 172.31.16.250:6443: connect: connection refused'(may retry after sleeping)
Jul 2 00:23:53.683941 kubelet[2981]: E0702 00:23:53.683922 2981 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 00:23:53.684017 kubelet[2981]: E0702 00:23:53.683955 2981 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:23:53.688505 kubelet[2981]: I0702 00:23:53.688412 2981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:23:53.689559 kubelet[2981]: I0702 00:23:53.689528 2981 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:23:53.695793 kubelet[2981]: I0702 00:23:53.695672 2981 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:23:53.695793 kubelet[2981]: I0702 00:23:53.695781 2981 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:23:53.696633 kubelet[2981]: W0702 00:23:53.696376 2981 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.16.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:53.696724 kubelet[2981]: E0702 00:23:53.696649 2981 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:53.697364 kubelet[2981]: E0702 00:23:53.697343 2981 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-16-250\" not found"
Jul 2 00:23:53.700667 kubelet[2981]: E0702 00:23:53.700616 2981 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-250?timeout=10s\": dial tcp 172.31.16.250:6443: connect: connection refused" interval="200ms"
Jul 2 00:23:53.738606 kubelet[2981]: I0702 00:23:53.738561 2981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:23:53.755603 kubelet[2981]: I0702 00:23:53.755575 2981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:23:53.757304 kubelet[2981]: I0702 00:23:53.756800 2981 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:23:53.757304 kubelet[2981]: I0702 00:23:53.756851 2981 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 00:23:53.757304 kubelet[2981]: E0702 00:23:53.756919 2981 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:23:53.767183 kubelet[2981]: W0702 00:23:53.767118 2981 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.16.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:53.767183 kubelet[2981]: E0702 00:23:53.767190 2981 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:53.797332 kubelet[2981]: I0702 00:23:53.797302 2981 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:23:53.797332 kubelet[2981]: I0702 00:23:53.797325 2981 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:23:53.797574 kubelet[2981]: I0702 00:23:53.797344 2981 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:23:53.799863 kubelet[2981]: I0702 00:23:53.799836 2981 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-250"
Jul 2 00:23:53.800218 kubelet[2981]: E0702 00:23:53.800182 2981 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.250:6443/api/v1/nodes\": dial tcp 172.31.16.250:6443: connect: connection refused" node="ip-172-31-16-250"
Jul 2 00:23:53.811461 kubelet[2981]: I0702 00:23:53.811413 2981 policy_none.go:49] "None policy: Start"
Jul 2 00:23:53.812217 kubelet[2981]: I0702 00:23:53.812188 2981 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 00:23:53.812217 kubelet[2981]: I0702 00:23:53.812219 2981 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:23:53.819721 kubelet[2981]: I0702 00:23:53.819375 2981 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:23:53.819721 kubelet[2981]: I0702 00:23:53.819663 2981 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:23:53.824857 kubelet[2981]: E0702 00:23:53.824826 2981 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-250\" not found"
Jul 2 00:23:53.857745 kubelet[2981]: I0702 00:23:53.857349 2981 topology_manager.go:215] "Topology Admit Handler" podUID="210562c0d9ad2d5346918bef391e6220" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-250"
Jul 2 00:23:53.860281 kubelet[2981]: I0702 00:23:53.859775 2981 topology_manager.go:215] "Topology Admit Handler" podUID="d50a722c17bbdb3ed7516b1b47922096" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-250"
Jul 2 00:23:53.862100 kubelet[2981]: I0702 00:23:53.862081 2981 topology_manager.go:215] "Topology Admit Handler" podUID="daac697277f74ff533aaa8bd2d56f134" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-250"
Jul 2 00:23:53.901701 kubelet[2981]: E0702 00:23:53.901638 2981 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-250?timeout=10s\": dial tcp 172.31.16.250:6443: connect: connection refused" interval="400ms"
Jul 2 00:23:53.997410 kubelet[2981]: I0702 00:23:53.997053 2981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/daac697277f74ff533aaa8bd2d56f134-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-250\" (UID: \"daac697277f74ff533aaa8bd2d56f134\") " pod="kube-system/kube-scheduler-ip-172-31-16-250"
Jul 2 00:23:53.997410 kubelet[2981]: I0702 00:23:53.997123 2981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/210562c0d9ad2d5346918bef391e6220-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-250\" (UID: \"210562c0d9ad2d5346918bef391e6220\") " pod="kube-system/kube-apiserver-ip-172-31-16-250"
Jul 2 00:23:53.997410 kubelet[2981]: I0702 00:23:53.997265 2981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250"
Jul 2 00:23:53.997410 kubelet[2981]: I0702 00:23:53.997313 2981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250"
Jul 2 00:23:53.997410 kubelet[2981]: I0702 00:23:53.997344 2981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250"
Jul 2 00:23:53.997760 kubelet[2981]: I0702 00:23:53.997405 2981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250"
Jul 2 00:23:53.997760 kubelet[2981]: I0702 00:23:53.997460 2981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250"
Jul 2 00:23:53.997760 kubelet[2981]: I0702 00:23:53.997486 2981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/210562c0d9ad2d5346918bef391e6220-ca-certs\") pod \"kube-apiserver-ip-172-31-16-250\" (UID: \"210562c0d9ad2d5346918bef391e6220\") " pod="kube-system/kube-apiserver-ip-172-31-16-250"
Jul 2 00:23:53.997760 kubelet[2981]: I0702 00:23:53.997511 2981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/210562c0d9ad2d5346918bef391e6220-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-250\" (UID: \"210562c0d9ad2d5346918bef391e6220\") " pod="kube-system/kube-apiserver-ip-172-31-16-250"
Jul 2 00:23:54.002878 kubelet[2981]: I0702 00:23:54.002834 2981 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-250"
Jul 2 00:23:54.003408 kubelet[2981]: E0702 00:23:54.003382 2981 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.250:6443/api/v1/nodes\": dial tcp 172.31.16.250:6443: connect: connection refused" node="ip-172-31-16-250"
Jul 2 00:23:54.168191 containerd[2110]: time="2024-07-02T00:23:54.168014487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-250,Uid:210562c0d9ad2d5346918bef391e6220,Namespace:kube-system,Attempt:0,}"
Jul 2 00:23:54.173877 containerd[2110]: time="2024-07-02T00:23:54.173648113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-250,Uid:d50a722c17bbdb3ed7516b1b47922096,Namespace:kube-system,Attempt:0,}"
Jul 2 00:23:54.181613 containerd[2110]: time="2024-07-02T00:23:54.181573361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-250,Uid:daac697277f74ff533aaa8bd2d56f134,Namespace:kube-system,Attempt:0,}"
Jul 2 00:23:54.303194 kubelet[2981]: E0702 00:23:54.303126 2981 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-250?timeout=10s\": dial tcp 172.31.16.250:6443: connect: connection refused" interval="800ms"
Jul 2 00:23:54.406117 kubelet[2981]: I0702 00:23:54.406078 2981 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-250"
Jul 2 00:23:54.406506 kubelet[2981]: E0702 00:23:54.406482 2981 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.250:6443/api/v1/nodes\": dial tcp 172.31.16.250:6443: connect: connection refused" node="ip-172-31-16-250"
Jul 2 00:23:54.504525 kubelet[2981]: W0702 00:23:54.504465 2981 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.16.250:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:54.504525 kubelet[2981]: E0702 00:23:54.504530 2981 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.250:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:54.737754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330701803.mount: Deactivated successfully.
Jul 2 00:23:54.759491 containerd[2110]: time="2024-07-02T00:23:54.759368718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:23:54.761293 containerd[2110]: time="2024-07-02T00:23:54.761241760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 2 00:23:54.762840 containerd[2110]: time="2024-07-02T00:23:54.762800783Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:23:54.764561 containerd[2110]: time="2024-07-02T00:23:54.764522322Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:23:54.766356 containerd[2110]: time="2024-07-02T00:23:54.766222239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:23:54.768872 containerd[2110]: time="2024-07-02T00:23:54.768834521Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:23:54.770357 containerd[2110]: time="2024-07-02T00:23:54.770055998Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 00:23:54.773788 containerd[2110]: time="2024-07-02T00:23:54.773750303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 00:23:54.776715 containerd[2110]: time="2024-07-02T00:23:54.774658962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.981144ms"
Jul 2 00:23:54.778469 containerd[2110]: time="2024-07-02T00:23:54.778422370Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 604.475685ms"
Jul 2 00:23:54.781628 containerd[2110]: time="2024-07-02T00:23:54.781580989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.38109ms"
Jul 2 00:23:54.782336 kubelet[2981]: W0702 00:23:54.782274 2981 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.16.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused
Jul 2 00:23:54.782336 kubelet[2981]: E0702 00:23:54.782318 2981 reflector.go:147]
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.250:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused Jul 2 00:23:54.881465 kubelet[2981]: W0702 00:23:54.881390 2981 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.16.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused Jul 2 00:23:54.881465 kubelet[2981]: E0702 00:23:54.881453 2981 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.250:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused Jul 2 00:23:54.995882 kubelet[2981]: W0702 00:23:54.995770 2981 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.16.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-250&limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused Jul 2 00:23:54.995882 kubelet[2981]: E0702 00:23:54.995854 2981 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.250:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-250&limit=500&resourceVersion=0": dial tcp 172.31.16.250:6443: connect: connection refused Jul 2 00:23:55.105955 kubelet[2981]: E0702 00:23:55.104864 2981 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-250?timeout=10s\": dial tcp 172.31.16.250:6443: connect: connection refused" interval="1.6s" Jul 2 00:23:55.133516 
containerd[2110]: time="2024-07-02T00:23:55.133386191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:55.133732 containerd[2110]: time="2024-07-02T00:23:55.133566589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:55.133732 containerd[2110]: time="2024-07-02T00:23:55.133599279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:55.133732 containerd[2110]: time="2024-07-02T00:23:55.133619345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:55.137370 containerd[2110]: time="2024-07-02T00:23:55.137221857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:55.137494 containerd[2110]: time="2024-07-02T00:23:55.137290159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:55.137494 containerd[2110]: time="2024-07-02T00:23:55.137429435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:55.140773 containerd[2110]: time="2024-07-02T00:23:55.138045481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:55.152178 containerd[2110]: time="2024-07-02T00:23:55.152052716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:55.152337 containerd[2110]: time="2024-07-02T00:23:55.152194685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:55.152337 containerd[2110]: time="2024-07-02T00:23:55.152251781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:55.152337 containerd[2110]: time="2024-07-02T00:23:55.152290884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:55.212791 kubelet[2981]: I0702 00:23:55.212601 2981 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-250" Jul 2 00:23:55.223811 kubelet[2981]: E0702 00:23:55.219999 2981 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.250:6443/api/v1/nodes\": dial tcp 172.31.16.250:6443: connect: connection refused" node="ip-172-31-16-250" Jul 2 00:23:55.344108 containerd[2110]: time="2024-07-02T00:23:55.344064509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-250,Uid:daac697277f74ff533aaa8bd2d56f134,Namespace:kube-system,Attempt:0,} returns sandbox id \"be3e56182c53da2234d1230c3d42d55a381f8c2a408aaaf61cced289149b3820\"" Jul 2 00:23:55.364650 containerd[2110]: time="2024-07-02T00:23:55.363701890Z" level=info msg="CreateContainer within sandbox \"be3e56182c53da2234d1230c3d42d55a381f8c2a408aaaf61cced289149b3820\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:23:55.366881 containerd[2110]: time="2024-07-02T00:23:55.366758837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-250,Uid:210562c0d9ad2d5346918bef391e6220,Namespace:kube-system,Attempt:0,} returns sandbox id \"93f597f25c33f138ab2c8d3da4204bfc1a684e16cb4918a37f619ae25bcfc4ba\"" Jul 2 00:23:55.371500 containerd[2110]: time="2024-07-02T00:23:55.371463559Z" level=info msg="CreateContainer within sandbox \"93f597f25c33f138ab2c8d3da4204bfc1a684e16cb4918a37f619ae25bcfc4ba\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:23:55.375414 containerd[2110]: time="2024-07-02T00:23:55.375376144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-250,Uid:d50a722c17bbdb3ed7516b1b47922096,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf8d959600c95a2a3a2e6789a6ccbcaf6f6fdce77bd33149ee7d72f6c83a2b0\"" Jul 2 00:23:55.379599 containerd[2110]: time="2024-07-02T00:23:55.379460672Z" level=info msg="CreateContainer within sandbox \"0cf8d959600c95a2a3a2e6789a6ccbcaf6f6fdce77bd33149ee7d72f6c83a2b0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:23:55.435082 containerd[2110]: time="2024-07-02T00:23:55.434543806Z" level=info msg="CreateContainer within sandbox \"be3e56182c53da2234d1230c3d42d55a381f8c2a408aaaf61cced289149b3820\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d0a7a4b78e8178ee5b5523a8cbd192c93032d72edf08cbcd053076530c5eb7f9\"" Jul 2 00:23:55.435877 containerd[2110]: time="2024-07-02T00:23:55.435443096Z" level=info msg="StartContainer for \"d0a7a4b78e8178ee5b5523a8cbd192c93032d72edf08cbcd053076530c5eb7f9\"" Jul 2 00:23:55.457153 containerd[2110]: time="2024-07-02T00:23:55.457105874Z" level=info msg="CreateContainer within sandbox \"0cf8d959600c95a2a3a2e6789a6ccbcaf6f6fdce77bd33149ee7d72f6c83a2b0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f95794739f4cead65c74ad4da5d0e68d52e8f375e583c60ff25f84a5237b81a\"" Jul 2 00:23:55.458314 containerd[2110]: time="2024-07-02T00:23:55.458249008Z" level=info msg="CreateContainer within sandbox \"93f597f25c33f138ab2c8d3da4204bfc1a684e16cb4918a37f619ae25bcfc4ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"61b084df882d417d8ebdf2bb5412e22d48631d8162712a4137b1e1d1de259575\"" Jul 2 00:23:55.458848 containerd[2110]: time="2024-07-02T00:23:55.458781759Z" level=info msg="StartContainer for 
\"8f95794739f4cead65c74ad4da5d0e68d52e8f375e583c60ff25f84a5237b81a\"" Jul 2 00:23:55.459781 containerd[2110]: time="2024-07-02T00:23:55.459758113Z" level=info msg="StartContainer for \"61b084df882d417d8ebdf2bb5412e22d48631d8162712a4137b1e1d1de259575\"" Jul 2 00:23:55.679234 containerd[2110]: time="2024-07-02T00:23:55.677128229Z" level=info msg="StartContainer for \"d0a7a4b78e8178ee5b5523a8cbd192c93032d72edf08cbcd053076530c5eb7f9\" returns successfully" Jul 2 00:23:55.701660 containerd[2110]: time="2024-07-02T00:23:55.701606833Z" level=info msg="StartContainer for \"8f95794739f4cead65c74ad4da5d0e68d52e8f375e583c60ff25f84a5237b81a\" returns successfully" Jul 2 00:23:55.795279 containerd[2110]: time="2024-07-02T00:23:55.794513936Z" level=info msg="StartContainer for \"61b084df882d417d8ebdf2bb5412e22d48631d8162712a4137b1e1d1de259575\" returns successfully" Jul 2 00:23:55.804638 kubelet[2981]: E0702 00:23:55.804586 2981 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.250:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.250:6443: connect: connection refused Jul 2 00:23:56.826592 kubelet[2981]: I0702 00:23:56.825983 2981 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-250" Jul 2 00:23:58.959798 kubelet[2981]: E0702 00:23:58.959753 2981 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-250\" not found" node="ip-172-31-16-250" Jul 2 00:23:58.987836 kubelet[2981]: I0702 00:23:58.987797 2981 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-16-250" Jul 2 00:23:59.671342 kubelet[2981]: I0702 00:23:59.671295 2981 apiserver.go:52] "Watching apiserver" Jul 2 00:23:59.697525 kubelet[2981]: I0702 00:23:59.696310 2981 desired_state_of_world_populator.go:159] "Finished populating 
initial desired state of world" Jul 2 00:24:00.676246 update_engine[2086]: I0702 00:24:00.676187 2086 update_attempter.cc:509] Updating boot flags... Jul 2 00:24:00.752723 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3270) Jul 2 00:24:01.017734 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3270) Jul 2 00:24:02.458840 systemd[1]: Reloading requested from client PID 3439 ('systemctl') (unit session-7.scope)... Jul 2 00:24:02.459015 systemd[1]: Reloading... Jul 2 00:24:02.642730 zram_generator::config[3480]: No configuration found. Jul 2 00:24:02.868869 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:24:03.021278 systemd[1]: Reloading finished in 561 ms. Jul 2 00:24:03.061738 kubelet[2981]: I0702 00:24:03.061702 2981 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:24:03.062560 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:03.077025 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:24:03.077594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:03.092059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:03.767957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:03.785276 (kubelet)[3544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:24:03.904160 kubelet[3544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:24:03.904160 kubelet[3544]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:24:03.904160 kubelet[3544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:24:03.904160 kubelet[3544]: I0702 00:24:03.903670 3544 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:24:03.910764 sudo[3556]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:24:03.911442 sudo[3556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:24:03.912134 kubelet[3544]: I0702 00:24:03.911515 3544 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:24:03.912134 kubelet[3544]: I0702 00:24:03.911539 3544 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:24:03.912225 kubelet[3544]: I0702 00:24:03.912154 3544 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:24:03.915140 kubelet[3544]: I0702 00:24:03.915107 3544 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:24:03.917773 kubelet[3544]: I0702 00:24:03.917514 3544 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:24:03.931443 kubelet[3544]: I0702 00:24:03.930629 3544 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:24:03.931598 kubelet[3544]: I0702 00:24:03.931535 3544 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:24:03.940259 kubelet[3544]: I0702 00:24:03.940223 3544 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:24:03.946744 kubelet[3544]: I0702 00:24:03.943080 3544 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:24:03.946744 kubelet[3544]: I0702 00:24:03.943127 3544 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:24:03.946744 kubelet[3544]: I0702 
00:24:03.944400 3544 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:03.946744 kubelet[3544]: I0702 00:24:03.944550 3544 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:24:03.946744 kubelet[3544]: I0702 00:24:03.944569 3544 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:24:03.946744 kubelet[3544]: I0702 00:24:03.944762 3544 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:24:03.946744 kubelet[3544]: I0702 00:24:03.944853 3544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:24:03.950327 kubelet[3544]: I0702 00:24:03.947911 3544 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:24:03.950327 kubelet[3544]: I0702 00:24:03.948626 3544 server.go:1232] "Started kubelet" Jul 2 00:24:03.963923 kubelet[3544]: I0702 00:24:03.959055 3544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:24:03.974726 kubelet[3544]: I0702 00:24:03.974227 3544 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:24:03.976422 kubelet[3544]: I0702 00:24:03.976401 3544 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:24:04.016885 kubelet[3544]: I0702 00:24:03.982793 3544 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:24:04.018227 kubelet[3544]: I0702 00:24:04.017931 3544 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:24:04.018227 kubelet[3544]: I0702 00:24:03.995974 3544 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:24:04.025795 kubelet[3544]: I0702 00:24:03.995999 3544 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:24:04.025795 kubelet[3544]: I0702 00:24:04.025550 3544 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:24:04.041354 kubelet[3544]: 
E0702 00:24:04.005562 3544 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:24:04.041668 kubelet[3544]: E0702 00:24:04.041649 3544 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:24:04.043826 kubelet[3544]: I0702 00:24:04.042970 3544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:24:04.050354 kubelet[3544]: I0702 00:24:04.050330 3544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:24:04.050809 kubelet[3544]: I0702 00:24:04.050508 3544 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:24:04.050809 kubelet[3544]: I0702 00:24:04.050536 3544 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:24:04.050809 kubelet[3544]: E0702 00:24:04.050595 3544 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:24:04.117310 kubelet[3544]: I0702 00:24:04.117284 3544 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-250" Jul 2 00:24:04.150304 kubelet[3544]: I0702 00:24:04.149873 3544 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-16-250" Jul 2 00:24:04.150304 kubelet[3544]: I0702 00:24:04.149972 3544 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-16-250" Jul 2 00:24:04.150710 kubelet[3544]: E0702 00:24:04.150667 3544 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:24:04.266838 kubelet[3544]: I0702 00:24:04.264872 3544 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:24:04.266838 kubelet[3544]: I0702 00:24:04.264896 3544 
cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:24:04.266838 kubelet[3544]: I0702 00:24:04.264916 3544 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:04.266838 kubelet[3544]: I0702 00:24:04.265079 3544 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:24:04.266838 kubelet[3544]: I0702 00:24:04.265104 3544 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:24:04.266838 kubelet[3544]: I0702 00:24:04.265169 3544 policy_none.go:49] "None policy: Start" Jul 2 00:24:04.266838 kubelet[3544]: I0702 00:24:04.266246 3544 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:24:04.266838 kubelet[3544]: I0702 00:24:04.266278 3544 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:24:04.266838 kubelet[3544]: I0702 00:24:04.266652 3544 state_mem.go:75] "Updated machine memory state" Jul 2 00:24:04.271165 kubelet[3544]: I0702 00:24:04.269720 3544 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:24:04.273783 kubelet[3544]: I0702 00:24:04.273044 3544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:24:04.352899 kubelet[3544]: I0702 00:24:04.352862 3544 topology_manager.go:215] "Topology Admit Handler" podUID="210562c0d9ad2d5346918bef391e6220" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-250" Jul 2 00:24:04.353201 kubelet[3544]: I0702 00:24:04.352990 3544 topology_manager.go:215] "Topology Admit Handler" podUID="d50a722c17bbdb3ed7516b1b47922096" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-250" Jul 2 00:24:04.353201 kubelet[3544]: I0702 00:24:04.353039 3544 topology_manager.go:215] "Topology Admit Handler" podUID="daac697277f74ff533aaa8bd2d56f134" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-250" Jul 2 00:24:04.369076 kubelet[3544]: E0702 00:24:04.368885 3544 kubelet.go:1890] "Failed creating a mirror pod for" 
err="pods \"kube-controller-manager-ip-172-31-16-250\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-250" Jul 2 00:24:04.429584 kubelet[3544]: I0702 00:24:04.429355 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/210562c0d9ad2d5346918bef391e6220-ca-certs\") pod \"kube-apiserver-ip-172-31-16-250\" (UID: \"210562c0d9ad2d5346918bef391e6220\") " pod="kube-system/kube-apiserver-ip-172-31-16-250" Jul 2 00:24:04.429584 kubelet[3544]: I0702 00:24:04.429501 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250" Jul 2 00:24:04.430012 kubelet[3544]: I0702 00:24:04.429805 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250" Jul 2 00:24:04.430296 kubelet[3544]: I0702 00:24:04.429849 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250" Jul 2 00:24:04.430296 kubelet[3544]: I0702 00:24:04.430130 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/210562c0d9ad2d5346918bef391e6220-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-250\" (UID: \"210562c0d9ad2d5346918bef391e6220\") " pod="kube-system/kube-apiserver-ip-172-31-16-250" Jul 2 00:24:04.430296 kubelet[3544]: I0702 00:24:04.430198 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/210562c0d9ad2d5346918bef391e6220-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-250\" (UID: \"210562c0d9ad2d5346918bef391e6220\") " pod="kube-system/kube-apiserver-ip-172-31-16-250" Jul 2 00:24:04.430711 kubelet[3544]: I0702 00:24:04.430505 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250" Jul 2 00:24:04.430711 kubelet[3544]: I0702 00:24:04.430575 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d50a722c17bbdb3ed7516b1b47922096-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-250\" (UID: \"d50a722c17bbdb3ed7516b1b47922096\") " pod="kube-system/kube-controller-manager-ip-172-31-16-250" Jul 2 00:24:04.430711 kubelet[3544]: I0702 00:24:04.430634 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/daac697277f74ff533aaa8bd2d56f134-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-250\" (UID: \"daac697277f74ff533aaa8bd2d56f134\") " pod="kube-system/kube-scheduler-ip-172-31-16-250" Jul 2 00:24:04.947645 kubelet[3544]: I0702 00:24:04.947530 3544 apiserver.go:52] "Watching apiserver" Jul 2 00:24:04.991807 sudo[3556]: 
pam_unix(sudo:session): session closed for user root Jul 2 00:24:05.026023 kubelet[3544]: I0702 00:24:05.025974 3544 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:24:05.112001 kubelet[3544]: E0702 00:24:05.110928 3544 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-250\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-250" Jul 2 00:24:05.139925 kubelet[3544]: I0702 00:24:05.139885 3544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-250" podStartSLOduration=5.139793314 podCreationTimestamp="2024-07-02 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:05.131756206 +0000 UTC m=+1.333813398" watchObservedRunningTime="2024-07-02 00:24:05.139793314 +0000 UTC m=+1.341850509" Jul 2 00:24:05.150263 kubelet[3544]: I0702 00:24:05.150214 3544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-250" podStartSLOduration=1.1501623 podCreationTimestamp="2024-07-02 00:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:05.140125198 +0000 UTC m=+1.342182391" watchObservedRunningTime="2024-07-02 00:24:05.1501623 +0000 UTC m=+1.352219499" Jul 2 00:24:05.173965 kubelet[3544]: I0702 00:24:05.173814 3544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-250" podStartSLOduration=1.173752993 podCreationTimestamp="2024-07-02 00:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:05.151585608 +0000 UTC m=+1.353642802" watchObservedRunningTime="2024-07-02 
00:24:05.173752993 +0000 UTC m=+1.375810188" Jul 2 00:24:07.274224 sudo[2463]: pam_unix(sudo:session): session closed for user root Jul 2 00:24:07.297791 sshd[2459]: pam_unix(sshd:session): session closed for user core Jul 2 00:24:07.301901 systemd[1]: sshd@6-172.31.16.250:22-147.75.109.163:42178.service: Deactivated successfully. Jul 2 00:24:07.309916 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:24:07.312187 systemd-logind[2080]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:24:07.315261 systemd-logind[2080]: Removed session 7. Jul 2 00:24:15.742833 kubelet[3544]: I0702 00:24:15.742718 3544 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:24:15.743720 kubelet[3544]: I0702 00:24:15.743610 3544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:24:15.744094 containerd[2110]: time="2024-07-02T00:24:15.743287772Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:24:16.024899 kubelet[3544]: I0702 00:24:16.024213 3544 topology_manager.go:215] "Topology Admit Handler" podUID="fc0c6ce2-9161-4686-b99c-19d58001c34a" podNamespace="kube-system" podName="kube-proxy-wp45w" Jul 2 00:24:16.067357 kubelet[3544]: I0702 00:24:16.064679 3544 topology_manager.go:215] "Topology Admit Handler" podUID="e5c5c798-fdda-4fa1-834a-915983dd4e31" podNamespace="kube-system" podName="cilium-72ld4" Jul 2 00:24:16.130492 kubelet[3544]: I0702 00:24:16.130449 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc0c6ce2-9161-4686-b99c-19d58001c34a-lib-modules\") pod \"kube-proxy-wp45w\" (UID: \"fc0c6ce2-9161-4686-b99c-19d58001c34a\") " pod="kube-system/kube-proxy-wp45w" Jul 2 00:24:16.130665 kubelet[3544]: I0702 00:24:16.130511 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-run\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.130665 kubelet[3544]: I0702 00:24:16.130540 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-xtables-lock\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.130665 kubelet[3544]: I0702 00:24:16.130566 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc0c6ce2-9161-4686-b99c-19d58001c34a-xtables-lock\") pod \"kube-proxy-wp45w\" (UID: \"fc0c6ce2-9161-4686-b99c-19d58001c34a\") " pod="kube-system/kube-proxy-wp45w" Jul 2 00:24:16.130665 kubelet[3544]: I0702 00:24:16.130595 3544 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cni-path\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.130665 kubelet[3544]: I0702 00:24:16.130624 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-host-proc-sys-net\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.130665 kubelet[3544]: I0702 00:24:16.130651 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-cgroup\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.130949 kubelet[3544]: I0702 00:24:16.130682 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5c5c798-fdda-4fa1-834a-915983dd4e31-clustermesh-secrets\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.130949 kubelet[3544]: I0702 00:24:16.130731 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zfmh\" (UniqueName: \"kubernetes.io/projected/fc0c6ce2-9161-4686-b99c-19d58001c34a-kube-api-access-6zfmh\") pod \"kube-proxy-wp45w\" (UID: \"fc0c6ce2-9161-4686-b99c-19d58001c34a\") " pod="kube-system/kube-proxy-wp45w" Jul 2 00:24:16.130949 kubelet[3544]: I0702 00:24:16.130761 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-bpf-maps\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.130949 kubelet[3544]: I0702 00:24:16.130792 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-hostproc\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.130949 kubelet[3544]: I0702 00:24:16.130828 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-etc-cni-netd\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.130949 kubelet[3544]: I0702 00:24:16.130858 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-lib-modules\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.131189 kubelet[3544]: I0702 00:24:16.130889 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-config-path\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.131189 kubelet[3544]: I0702 00:24:16.130929 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-hubble-tls\") pod \"cilium-72ld4\" (UID: 
\"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.131189 kubelet[3544]: I0702 00:24:16.130960 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-host-proc-sys-kernel\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.131189 kubelet[3544]: I0702 00:24:16.130993 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc0c6ce2-9161-4686-b99c-19d58001c34a-kube-proxy\") pod \"kube-proxy-wp45w\" (UID: \"fc0c6ce2-9161-4686-b99c-19d58001c34a\") " pod="kube-system/kube-proxy-wp45w" Jul 2 00:24:16.131189 kubelet[3544]: I0702 00:24:16.131028 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5cs9\" (UniqueName: \"kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-kube-api-access-q5cs9\") pod \"cilium-72ld4\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " pod="kube-system/cilium-72ld4" Jul 2 00:24:16.287289 kubelet[3544]: E0702 00:24:16.286843 3544 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 00:24:16.287289 kubelet[3544]: E0702 00:24:16.286897 3544 projected.go:198] Error preparing data for projected volume kube-api-access-6zfmh for pod kube-system/kube-proxy-wp45w: configmap "kube-root-ca.crt" not found Jul 2 00:24:16.299611 kubelet[3544]: E0702 00:24:16.296497 3544 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 00:24:16.299611 kubelet[3544]: E0702 00:24:16.296533 3544 projected.go:198] Error preparing data for projected volume kube-api-access-q5cs9 for pod kube-system/cilium-72ld4: configmap 
"kube-root-ca.crt" not found Jul 2 00:24:16.301252 kubelet[3544]: E0702 00:24:16.300744 3544 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc0c6ce2-9161-4686-b99c-19d58001c34a-kube-api-access-6zfmh podName:fc0c6ce2-9161-4686-b99c-19d58001c34a nodeName:}" failed. No retries permitted until 2024-07-02 00:24:16.786957745 +0000 UTC m=+12.989014930 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6zfmh" (UniqueName: "kubernetes.io/projected/fc0c6ce2-9161-4686-b99c-19d58001c34a-kube-api-access-6zfmh") pod "kube-proxy-wp45w" (UID: "fc0c6ce2-9161-4686-b99c-19d58001c34a") : configmap "kube-root-ca.crt" not found Jul 2 00:24:16.303736 kubelet[3544]: E0702 00:24:16.302938 3544 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-kube-api-access-q5cs9 podName:e5c5c798-fdda-4fa1-834a-915983dd4e31 nodeName:}" failed. No retries permitted until 2024-07-02 00:24:16.802875919 +0000 UTC m=+13.004933102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-q5cs9" (UniqueName: "kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-kube-api-access-q5cs9") pod "cilium-72ld4" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31") : configmap "kube-root-ca.crt" not found Jul 2 00:24:16.616831 kubelet[3544]: I0702 00:24:16.612523 3544 topology_manager.go:215] "Topology Admit Handler" podUID="8a020cff-d5a1-4004-b88c-e94a452d3f75" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-dm5vd" Jul 2 00:24:16.636504 kubelet[3544]: I0702 00:24:16.634978 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a020cff-d5a1-4004-b88c-e94a452d3f75-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-dm5vd\" (UID: \"8a020cff-d5a1-4004-b88c-e94a452d3f75\") " pod="kube-system/cilium-operator-6bc8ccdb58-dm5vd" Jul 2 00:24:16.636855 kubelet[3544]: I0702 00:24:16.636835 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzb22\" (UniqueName: \"kubernetes.io/projected/8a020cff-d5a1-4004-b88c-e94a452d3f75-kube-api-access-jzb22\") pod \"cilium-operator-6bc8ccdb58-dm5vd\" (UID: \"8a020cff-d5a1-4004-b88c-e94a452d3f75\") " pod="kube-system/cilium-operator-6bc8ccdb58-dm5vd" Jul 2 00:24:16.945259 containerd[2110]: time="2024-07-02T00:24:16.945155395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wp45w,Uid:fc0c6ce2-9161-4686-b99c-19d58001c34a,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:16.946865 containerd[2110]: time="2024-07-02T00:24:16.946804727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-dm5vd,Uid:8a020cff-d5a1-4004-b88c-e94a452d3f75,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:16.999721 containerd[2110]: time="2024-07-02T00:24:16.997988388Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-72ld4,Uid:e5c5c798-fdda-4fa1-834a-915983dd4e31,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:17.016918 containerd[2110]: time="2024-07-02T00:24:17.016820708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:17.017048 containerd[2110]: time="2024-07-02T00:24:17.016935327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:17.017048 containerd[2110]: time="2024-07-02T00:24:17.016971404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:17.017048 containerd[2110]: time="2024-07-02T00:24:17.017002161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:17.020617 containerd[2110]: time="2024-07-02T00:24:17.019935662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:17.020617 containerd[2110]: time="2024-07-02T00:24:17.020072394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:17.020617 containerd[2110]: time="2024-07-02T00:24:17.020109566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:17.020617 containerd[2110]: time="2024-07-02T00:24:17.020138934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:17.115275 containerd[2110]: time="2024-07-02T00:24:17.114874953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:17.115275 containerd[2110]: time="2024-07-02T00:24:17.114957989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:17.115275 containerd[2110]: time="2024-07-02T00:24:17.114989137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:17.115275 containerd[2110]: time="2024-07-02T00:24:17.115010665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:17.135664 containerd[2110]: time="2024-07-02T00:24:17.135623258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wp45w,Uid:fc0c6ce2-9161-4686-b99c-19d58001c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8716beadcc4a04bd8dcb389b53c7fedeebd031815deb08a3f3276707fd8ee0c1\"" Jul 2 00:24:17.151376 containerd[2110]: time="2024-07-02T00:24:17.150961331Z" level=info msg="CreateContainer within sandbox \"8716beadcc4a04bd8dcb389b53c7fedeebd031815deb08a3f3276707fd8ee0c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:24:17.181083 containerd[2110]: time="2024-07-02T00:24:17.181023710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-dm5vd,Uid:8a020cff-d5a1-4004-b88c-e94a452d3f75,Namespace:kube-system,Attempt:0,} returns sandbox id \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\"" Jul 2 00:24:17.183193 containerd[2110]: time="2024-07-02T00:24:17.183004791Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:24:17.256351 containerd[2110]: time="2024-07-02T00:24:17.256313120Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-72ld4,Uid:e5c5c798-fdda-4fa1-834a-915983dd4e31,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\"" Jul 2 00:24:17.264715 containerd[2110]: time="2024-07-02T00:24:17.264073288Z" level=info msg="CreateContainer within sandbox \"8716beadcc4a04bd8dcb389b53c7fedeebd031815deb08a3f3276707fd8ee0c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ee0335460c49b7973746e24689134b3ac177393d74623459ef6c5bdff8e3bbb\"" Jul 2 00:24:17.267283 containerd[2110]: time="2024-07-02T00:24:17.266325427Z" level=info msg="StartContainer for \"8ee0335460c49b7973746e24689134b3ac177393d74623459ef6c5bdff8e3bbb\"" Jul 2 00:24:17.414609 containerd[2110]: time="2024-07-02T00:24:17.414557305Z" level=info msg="StartContainer for \"8ee0335460c49b7973746e24689134b3ac177393d74623459ef6c5bdff8e3bbb\" returns successfully" Jul 2 00:24:18.438786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261624407.mount: Deactivated successfully. 
Jul 2 00:24:19.707119 containerd[2110]: time="2024-07-02T00:24:19.707073374Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:19.709175 containerd[2110]: time="2024-07-02T00:24:19.709110873Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907189" Jul 2 00:24:19.710900 containerd[2110]: time="2024-07-02T00:24:19.710816050Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:19.713328 containerd[2110]: time="2024-07-02T00:24:19.713288566Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.530221355s" Jul 2 00:24:19.713460 containerd[2110]: time="2024-07-02T00:24:19.713331887Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 00:24:19.722762 containerd[2110]: time="2024-07-02T00:24:19.722469726Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:24:19.743471 containerd[2110]: time="2024-07-02T00:24:19.743339840Z" level=info msg="CreateContainer within sandbox 
\"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:24:19.765550 containerd[2110]: time="2024-07-02T00:24:19.765500822Z" level=info msg="CreateContainer within sandbox \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\"" Jul 2 00:24:19.767099 containerd[2110]: time="2024-07-02T00:24:19.766844465Z" level=info msg="StartContainer for \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\"" Jul 2 00:24:19.841314 containerd[2110]: time="2024-07-02T00:24:19.841269366Z" level=info msg="StartContainer for \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\" returns successfully" Jul 2 00:24:20.307366 kubelet[3544]: I0702 00:24:20.307321 3544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wp45w" podStartSLOduration=5.302323349 podCreationTimestamp="2024-07-02 00:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:18.198641434 +0000 UTC m=+14.400698631" watchObservedRunningTime="2024-07-02 00:24:20.302323349 +0000 UTC m=+16.504380543" Jul 2 00:24:20.311423 kubelet[3544]: I0702 00:24:20.309324 3544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-dm5vd" podStartSLOduration=1.773903961 podCreationTimestamp="2024-07-02 00:24:16 +0000 UTC" firstStartedPulling="2024-07-02 00:24:17.18234267 +0000 UTC m=+13.384399856" lastFinishedPulling="2024-07-02 00:24:19.717712903 +0000 UTC m=+15.919770095" observedRunningTime="2024-07-02 00:24:20.302117887 +0000 UTC m=+16.504175082" watchObservedRunningTime="2024-07-02 00:24:20.3092742 +0000 UTC m=+16.511331394" Jul 2 00:24:21.290782 
systemd-resolved[1991]: Under memory pressure, flushing caches. Jul 2 00:24:21.294459 systemd-journald[1581]: Under memory pressure, flushing caches. Jul 2 00:24:21.291180 systemd-resolved[1991]: Flushed all caches. Jul 2 00:24:27.001758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618087010.mount: Deactivated successfully. Jul 2 00:24:30.599275 containerd[2110]: time="2024-07-02T00:24:30.599212422Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:30.600671 containerd[2110]: time="2024-07-02T00:24:30.600484996Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735267" Jul 2 00:24:30.602272 containerd[2110]: time="2024-07-02T00:24:30.602211918Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:30.604187 containerd[2110]: time="2024-07-02T00:24:30.604035536Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.881520191s" Jul 2 00:24:30.604187 containerd[2110]: time="2024-07-02T00:24:30.604086701Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 00:24:30.607890 containerd[2110]: time="2024-07-02T00:24:30.607831888Z" level=info 
msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:24:30.709374 containerd[2110]: time="2024-07-02T00:24:30.709319886Z" level=info msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\"" Jul 2 00:24:30.710182 containerd[2110]: time="2024-07-02T00:24:30.710147863Z" level=info msg="StartContainer for \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\"" Jul 2 00:24:30.972202 containerd[2110]: time="2024-07-02T00:24:30.972082717Z" level=info msg="StartContainer for \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\" returns successfully" Jul 2 00:24:31.672663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221-rootfs.mount: Deactivated successfully. 
Jul 2 00:24:31.968483 containerd[2110]: time="2024-07-02T00:24:31.932364422Z" level=info msg="shim disconnected" id=7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221 namespace=k8s.io Jul 2 00:24:31.968483 containerd[2110]: time="2024-07-02T00:24:31.968481260Z" level=warning msg="cleaning up after shim disconnected" id=7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221 namespace=k8s.io Jul 2 00:24:31.968483 containerd[2110]: time="2024-07-02T00:24:31.968520329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:24:32.000024 containerd[2110]: time="2024-07-02T00:24:31.999962137Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:24:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:24:32.266129 containerd[2110]: time="2024-07-02T00:24:32.265919747Z" level=info msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:24:32.314884 containerd[2110]: time="2024-07-02T00:24:32.314685427Z" level=info msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\"" Jul 2 00:24:32.316732 containerd[2110]: time="2024-07-02T00:24:32.315451053Z" level=info msg="StartContainer for \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\"" Jul 2 00:24:32.454084 containerd[2110]: time="2024-07-02T00:24:32.453965809Z" level=info msg="StartContainer for \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\" returns successfully" Jul 2 00:24:32.467382 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 2 00:24:32.469572 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:24:32.469683 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:24:32.488885 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:24:32.525422 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:24:32.534742 containerd[2110]: time="2024-07-02T00:24:32.534530509Z" level=info msg="shim disconnected" id=31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a namespace=k8s.io Jul 2 00:24:32.534742 containerd[2110]: time="2024-07-02T00:24:32.534713399Z" level=warning msg="cleaning up after shim disconnected" id=31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a namespace=k8s.io Jul 2 00:24:32.534742 containerd[2110]: time="2024-07-02T00:24:32.534732964Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:24:32.672832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a-rootfs.mount: Deactivated successfully. 
Jul 2 00:24:33.261229 containerd[2110]: time="2024-07-02T00:24:33.261184539Z" level=info msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:24:33.377903 containerd[2110]: time="2024-07-02T00:24:33.377854402Z" level=info msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\"" Jul 2 00:24:33.380474 containerd[2110]: time="2024-07-02T00:24:33.380427535Z" level=info msg="StartContainer for \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\"" Jul 2 00:24:33.435100 systemd[1]: run-containerd-runc-k8s.io-de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0-runc.l1bdMj.mount: Deactivated successfully. Jul 2 00:24:33.481785 containerd[2110]: time="2024-07-02T00:24:33.481739419Z" level=info msg="StartContainer for \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\" returns successfully" Jul 2 00:24:33.570682 containerd[2110]: time="2024-07-02T00:24:33.570543873Z" level=info msg="shim disconnected" id=de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0 namespace=k8s.io Jul 2 00:24:33.570964 containerd[2110]: time="2024-07-02T00:24:33.570936999Z" level=warning msg="cleaning up after shim disconnected" id=de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0 namespace=k8s.io Jul 2 00:24:33.571060 containerd[2110]: time="2024-07-02T00:24:33.571045281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:24:33.670740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0-rootfs.mount: Deactivated successfully. 
Jul 2 00:24:34.263889 containerd[2110]: time="2024-07-02T00:24:34.263841088Z" level=info msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:24:34.322035 containerd[2110]: time="2024-07-02T00:24:34.321988745Z" level=info msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\"" Jul 2 00:24:34.323985 containerd[2110]: time="2024-07-02T00:24:34.322639642Z" level=info msg="StartContainer for \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\"" Jul 2 00:24:34.416069 containerd[2110]: time="2024-07-02T00:24:34.416033166Z" level=info msg="StartContainer for \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\" returns successfully" Jul 2 00:24:34.451294 kubelet[3544]: E0702 00:24:34.451265 3544 cadvisor_stats_provider.go:444] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pode5c5c798-fdda-4fa1-834a-915983dd4e31/28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\": RecentStats: unable to find data in memory cache]" Jul 2 00:24:34.481124 containerd[2110]: time="2024-07-02T00:24:34.481063679Z" level=info msg="shim disconnected" id=28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842 namespace=k8s.io Jul 2 00:24:34.481124 containerd[2110]: time="2024-07-02T00:24:34.481117597Z" level=warning msg="cleaning up after shim disconnected" id=28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842 namespace=k8s.io Jul 2 00:24:34.481124 containerd[2110]: time="2024-07-02T00:24:34.481129454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:24:34.503954 containerd[2110]: time="2024-07-02T00:24:34.503897452Z" level=warning 
msg="cleanup warnings time=\"2024-07-02T00:24:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:24:34.670311 systemd[1]: run-containerd-runc-k8s.io-28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842-runc.RMP5gL.mount: Deactivated successfully. Jul 2 00:24:34.670492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842-rootfs.mount: Deactivated successfully. Jul 2 00:24:35.300190 containerd[2110]: time="2024-07-02T00:24:35.299969438Z" level=info msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:24:35.333133 containerd[2110]: time="2024-07-02T00:24:35.332582846Z" level=info msg="CreateContainer within sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\"" Jul 2 00:24:35.341862 containerd[2110]: time="2024-07-02T00:24:35.339373293Z" level=info msg="StartContainer for \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\"" Jul 2 00:24:35.345857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1954755225.mount: Deactivated successfully. Jul 2 00:24:35.483711 containerd[2110]: time="2024-07-02T00:24:35.482910762Z" level=info msg="StartContainer for \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\" returns successfully" Jul 2 00:24:35.718464 systemd[1]: run-containerd-runc-k8s.io-892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf-runc.ObVay4.mount: Deactivated successfully. 
Jul 2 00:24:35.916062 kubelet[3544]: I0702 00:24:35.915106 3544 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jul 2 00:24:35.965144 kubelet[3544]: I0702 00:24:35.965097 3544 topology_manager.go:215] "Topology Admit Handler" podUID="4358ecbc-1406-4761-93d1-ab2304cd9576" podNamespace="kube-system" podName="coredns-5dd5756b68-qg8kw"
Jul 2 00:24:35.967758 kubelet[3544]: I0702 00:24:35.967188 3544 topology_manager.go:215] "Topology Admit Handler" podUID="7361e28d-7878-4aab-8f8e-b7db941075d5" podNamespace="kube-system" podName="coredns-5dd5756b68-w55qw"
Jul 2 00:24:36.084630 kubelet[3544]: I0702 00:24:36.084492 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4358ecbc-1406-4761-93d1-ab2304cd9576-config-volume\") pod \"coredns-5dd5756b68-qg8kw\" (UID: \"4358ecbc-1406-4761-93d1-ab2304cd9576\") " pod="kube-system/coredns-5dd5756b68-qg8kw"
Jul 2 00:24:36.084630 kubelet[3544]: I0702 00:24:36.084571 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dck8n\" (UniqueName: \"kubernetes.io/projected/4358ecbc-1406-4761-93d1-ab2304cd9576-kube-api-access-dck8n\") pod \"coredns-5dd5756b68-qg8kw\" (UID: \"4358ecbc-1406-4761-93d1-ab2304cd9576\") " pod="kube-system/coredns-5dd5756b68-qg8kw"
Jul 2 00:24:36.085745 kubelet[3544]: I0702 00:24:36.085669 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7361e28d-7878-4aab-8f8e-b7db941075d5-config-volume\") pod \"coredns-5dd5756b68-w55qw\" (UID: \"7361e28d-7878-4aab-8f8e-b7db941075d5\") " pod="kube-system/coredns-5dd5756b68-w55qw"
Jul 2 00:24:36.085859 kubelet[3544]: I0702 00:24:36.085814 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ll7\" (UniqueName: \"kubernetes.io/projected/7361e28d-7878-4aab-8f8e-b7db941075d5-kube-api-access-b6ll7\") pod \"coredns-5dd5756b68-w55qw\" (UID: \"7361e28d-7878-4aab-8f8e-b7db941075d5\") " pod="kube-system/coredns-5dd5756b68-w55qw"
Jul 2 00:24:36.276144 containerd[2110]: time="2024-07-02T00:24:36.276090390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-w55qw,Uid:7361e28d-7878-4aab-8f8e-b7db941075d5,Namespace:kube-system,Attempt:0,}"
Jul 2 00:24:36.283499 containerd[2110]: time="2024-07-02T00:24:36.283444681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qg8kw,Uid:4358ecbc-1406-4761-93d1-ab2304cd9576,Namespace:kube-system,Attempt:0,}"
Jul 2 00:24:36.421795 kubelet[3544]: I0702 00:24:36.418237 3544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-72ld4" podStartSLOduration=7.072176442 podCreationTimestamp="2024-07-02 00:24:16 +0000 UTC" firstStartedPulling="2024-07-02 00:24:17.258568804 +0000 UTC m=+13.460625989" lastFinishedPulling="2024-07-02 00:24:30.604570987 +0000 UTC m=+26.806628177" observedRunningTime="2024-07-02 00:24:36.416902657 +0000 UTC m=+32.618959851" watchObservedRunningTime="2024-07-02 00:24:36.41817863 +0000 UTC m=+32.620235824"
Jul 2 00:24:38.128550 (udev-worker)[4354]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:24:38.129570 systemd-networkd[1671]: cilium_host: Link UP
Jul 2 00:24:38.130901 systemd-networkd[1671]: cilium_net: Link UP
Jul 2 00:24:38.131119 systemd-networkd[1671]: cilium_net: Gained carrier
Jul 2 00:24:38.131312 systemd-networkd[1671]: cilium_host: Gained carrier
Jul 2 00:24:38.134936 (udev-worker)[4297]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:24:38.311174 (udev-worker)[4369]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:24:38.339278 systemd-networkd[1671]: cilium_vxlan: Link UP
Jul 2 00:24:38.339292 systemd-networkd[1671]: cilium_vxlan: Gained carrier
Jul 2 00:24:38.426851 systemd-networkd[1671]: cilium_host: Gained IPv6LL
Jul 2 00:24:38.912763 kernel: NET: Registered PF_ALG protocol family
Jul 2 00:24:39.150226 systemd-networkd[1671]: cilium_net: Gained IPv6LL
Jul 2 00:24:39.663929 systemd-networkd[1671]: cilium_vxlan: Gained IPv6LL
Jul 2 00:24:40.011795 systemd-networkd[1671]: lxc_health: Link UP
Jul 2 00:24:40.024008 (udev-worker)[4371]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:24:40.025662 systemd-networkd[1671]: lxc_health: Gained carrier
Jul 2 00:24:40.511276 (udev-worker)[4689]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:24:40.544175 kernel: eth0: renamed from tmp1e73d
Jul 2 00:24:40.518203 systemd-networkd[1671]: lxcb36491ca6ac2: Link UP
Jul 2 00:24:40.564110 systemd-networkd[1671]: lxcb36491ca6ac2: Gained carrier
Jul 2 00:24:40.621709 systemd-networkd[1671]: lxc55f4def726f7: Link UP
Jul 2 00:24:40.632187 kernel: eth0: renamed from tmpd1239
Jul 2 00:24:40.655533 systemd-networkd[1671]: lxc55f4def726f7: Gained carrier
Jul 2 00:24:41.642996 systemd-networkd[1671]: lxcb36491ca6ac2: Gained IPv6LL
Jul 2 00:24:41.835294 systemd-networkd[1671]: lxc_health: Gained IPv6LL
Jul 2 00:24:42.538884 systemd-networkd[1671]: lxc55f4def726f7: Gained IPv6LL
Jul 2 00:24:44.805337 ntpd[2062]: Listen normally on 6 cilium_host 192.168.0.180:123
Jul 2 00:24:44.806172 ntpd[2062]: 2 Jul 00:24:44 ntpd[2062]: Listen normally on 6 cilium_host 192.168.0.180:123
Jul 2 00:24:44.806172 ntpd[2062]: 2 Jul 00:24:44 ntpd[2062]: Listen normally on 7 cilium_net [fe80::cc82:f6ff:fe32:a5a2%4]:123
Jul 2 00:24:44.806172 ntpd[2062]: 2 Jul 00:24:44 ntpd[2062]: Listen normally on 8 cilium_host [fe80::20be:68ff:fe0e:acbb%5]:123
Jul 2 00:24:44.806172 ntpd[2062]: 2 Jul 00:24:44 ntpd[2062]: Listen normally on 9 cilium_vxlan [fe80::ab:e6ff:fe10:d061%6]:123
Jul 2 00:24:44.806172 ntpd[2062]: 2 Jul 00:24:44 ntpd[2062]: Listen normally on 10 lxc_health [fe80::28b1:64ff:fe49:14ce%8]:123
Jul 2 00:24:44.806172 ntpd[2062]: 2 Jul 00:24:44 ntpd[2062]: Listen normally on 11 lxcb36491ca6ac2 [fe80::8cf8:15ff:fe6a:c212%10]:123
Jul 2 00:24:44.806172 ntpd[2062]: 2 Jul 00:24:44 ntpd[2062]: Listen normally on 12 lxc55f4def726f7 [fe80::f8c0:d2ff:fe59:4c41%12]:123
Jul 2 00:24:44.805437 ntpd[2062]: Listen normally on 7 cilium_net [fe80::cc82:f6ff:fe32:a5a2%4]:123
Jul 2 00:24:44.805495 ntpd[2062]: Listen normally on 8 cilium_host [fe80::20be:68ff:fe0e:acbb%5]:123
Jul 2 00:24:44.805533 ntpd[2062]: Listen normally on 9 cilium_vxlan [fe80::ab:e6ff:fe10:d061%6]:123
Jul 2 00:24:44.805569 ntpd[2062]: Listen normally on 10 lxc_health [fe80::28b1:64ff:fe49:14ce%8]:123
Jul 2 00:24:44.805606 ntpd[2062]: Listen normally on 11 lxcb36491ca6ac2 [fe80::8cf8:15ff:fe6a:c212%10]:123
Jul 2 00:24:44.805643 ntpd[2062]: Listen normally on 12 lxc55f4def726f7 [fe80::f8c0:d2ff:fe59:4c41%12]:123
Jul 2 00:24:47.655164 containerd[2110]: time="2024-07-02T00:24:47.649537639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:24:47.658247 containerd[2110]: time="2024-07-02T00:24:47.657486693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:24:47.658247 containerd[2110]: time="2024-07-02T00:24:47.657531227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:24:47.658247 containerd[2110]: time="2024-07-02T00:24:47.657549874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:24:47.683286 containerd[2110]: time="2024-07-02T00:24:47.682482751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:24:47.683286 containerd[2110]: time="2024-07-02T00:24:47.682545204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:24:47.683286 containerd[2110]: time="2024-07-02T00:24:47.682568661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:24:47.683286 containerd[2110]: time="2024-07-02T00:24:47.682583000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:24:47.771171 systemd[1]: run-containerd-runc-k8s.io-1e73d04655dfff9c8313f62534ae7a0f5728681c09877819afb81fb5c9237b97-runc.2hfhzh.mount: Deactivated successfully.
Jul 2 00:24:47.961931 containerd[2110]: time="2024-07-02T00:24:47.961838477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qg8kw,Uid:4358ecbc-1406-4761-93d1-ab2304cd9576,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e73d04655dfff9c8313f62534ae7a0f5728681c09877819afb81fb5c9237b97\""
Jul 2 00:24:48.016980 containerd[2110]: time="2024-07-02T00:24:48.016724432Z" level=info msg="CreateContainer within sandbox \"1e73d04655dfff9c8313f62534ae7a0f5728681c09877819afb81fb5c9237b97\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:24:48.036152 containerd[2110]: time="2024-07-02T00:24:48.036108918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-w55qw,Uid:7361e28d-7878-4aab-8f8e-b7db941075d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1239f2b4c6bcf63ee6b4052fc07747e4c7b1f940202e150854a4e50ead8657a\""
Jul 2 00:24:48.046168 containerd[2110]: time="2024-07-02T00:24:48.046108834Z" level=info msg="CreateContainer within sandbox \"d1239f2b4c6bcf63ee6b4052fc07747e4c7b1f940202e150854a4e50ead8657a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:24:48.057122 containerd[2110]: time="2024-07-02T00:24:48.057077862Z" level=info msg="CreateContainer within sandbox \"1e73d04655dfff9c8313f62534ae7a0f5728681c09877819afb81fb5c9237b97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5c881ab55f629eb126b113fdb440295516dee762a480f957f401e055a985f83\""
Jul 2 00:24:48.060467 containerd[2110]: time="2024-07-02T00:24:48.058845582Z" level=info msg="StartContainer for \"a5c881ab55f629eb126b113fdb440295516dee762a480f957f401e055a985f83\""
Jul 2 00:24:48.123198 containerd[2110]: time="2024-07-02T00:24:48.123143317Z" level=info msg="CreateContainer within sandbox \"d1239f2b4c6bcf63ee6b4052fc07747e4c7b1f940202e150854a4e50ead8657a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e3a1d16a96c3f8a0c6c363555db548a54c1ea930bae5995e6872fbb5191ea59b\""
Jul 2 00:24:48.128271 containerd[2110]: time="2024-07-02T00:24:48.125347573Z" level=info msg="StartContainer for \"e3a1d16a96c3f8a0c6c363555db548a54c1ea930bae5995e6872fbb5191ea59b\""
Jul 2 00:24:48.286851 containerd[2110]: time="2024-07-02T00:24:48.286023490Z" level=info msg="StartContainer for \"a5c881ab55f629eb126b113fdb440295516dee762a480f957f401e055a985f83\" returns successfully"
Jul 2 00:24:48.294379 containerd[2110]: time="2024-07-02T00:24:48.294309703Z" level=info msg="StartContainer for \"e3a1d16a96c3f8a0c6c363555db548a54c1ea930bae5995e6872fbb5191ea59b\" returns successfully"
Jul 2 00:24:48.375083 kubelet[3544]: I0702 00:24:48.375047 3544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qg8kw" podStartSLOduration=32.374995765 podCreationTimestamp="2024-07-02 00:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:48.373322871 +0000 UTC m=+44.575380066" watchObservedRunningTime="2024-07-02 00:24:48.374995765 +0000 UTC m=+44.577052959"
Jul 2 00:24:49.378988 kubelet[3544]: I0702 00:24:49.378933 3544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-w55qw" podStartSLOduration=33.378774814 podCreationTimestamp="2024-07-02 00:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:48.406722487 +0000 UTC m=+44.608779682" watchObservedRunningTime="2024-07-02 00:24:49.378774814 +0000 UTC m=+45.580832018"
Jul 2 00:24:51.885032 systemd[1]: Started sshd@7-172.31.16.250:22-147.75.109.163:59988.service - OpenSSH per-connection server daemon (147.75.109.163:59988).
Jul 2 00:24:52.096267 sshd[4893]: Accepted publickey for core from 147.75.109.163 port 59988 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:24:52.101215 sshd[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:24:52.119668 systemd-logind[2080]: New session 8 of user core.
Jul 2 00:24:52.131291 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 00:24:53.066804 sshd[4893]: pam_unix(sshd:session): session closed for user core
Jul 2 00:24:53.071466 systemd[1]: sshd@7-172.31.16.250:22-147.75.109.163:59988.service: Deactivated successfully.
Jul 2 00:24:53.076846 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:24:53.078902 systemd-logind[2080]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:24:53.080265 systemd-logind[2080]: Removed session 8.
Jul 2 00:24:58.097109 systemd[1]: Started sshd@8-172.31.16.250:22-147.75.109.163:34294.service - OpenSSH per-connection server daemon (147.75.109.163:34294).
Jul 2 00:24:58.260855 sshd[4909]: Accepted publickey for core from 147.75.109.163 port 34294 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:24:58.262518 sshd[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:24:58.271153 systemd-logind[2080]: New session 9 of user core.
Jul 2 00:24:58.277726 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:24:58.525209 sshd[4909]: pam_unix(sshd:session): session closed for user core
Jul 2 00:24:58.531081 systemd[1]: sshd@8-172.31.16.250:22-147.75.109.163:34294.service: Deactivated successfully.
Jul 2 00:24:58.537161 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:24:58.538904 systemd-logind[2080]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:24:58.540654 systemd-logind[2080]: Removed session 9.
Jul 2 00:25:03.560038 systemd[1]: Started sshd@9-172.31.16.250:22-147.75.109.163:60764.service - OpenSSH per-connection server daemon (147.75.109.163:60764).
Jul 2 00:25:03.769418 sshd[4924]: Accepted publickey for core from 147.75.109.163 port 60764 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:03.771095 sshd[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:03.777756 systemd-logind[2080]: New session 10 of user core.
Jul 2 00:25:03.783193 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 00:25:04.042997 sshd[4924]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:04.052407 systemd[1]: sshd@9-172.31.16.250:22-147.75.109.163:60764.service: Deactivated successfully.
Jul 2 00:25:04.058925 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:25:04.060548 systemd-logind[2080]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:25:04.063470 systemd-logind[2080]: Removed session 10.
Jul 2 00:25:09.074243 systemd[1]: Started sshd@10-172.31.16.250:22-147.75.109.163:60774.service - OpenSSH per-connection server daemon (147.75.109.163:60774).
Jul 2 00:25:09.243736 sshd[4940]: Accepted publickey for core from 147.75.109.163 port 60774 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:09.245746 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:09.250951 systemd-logind[2080]: New session 11 of user core.
Jul 2 00:25:09.255785 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 00:25:09.508052 sshd[4940]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:09.516556 systemd[1]: sshd@10-172.31.16.250:22-147.75.109.163:60774.service: Deactivated successfully.
Jul 2 00:25:09.536259 systemd-logind[2080]: Session 11 logged out. Waiting for processes to exit.
Jul 2 00:25:09.538880 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 00:25:09.551272 systemd-logind[2080]: Removed session 11.
Jul 2 00:25:14.541653 systemd[1]: Started sshd@11-172.31.16.250:22-147.75.109.163:56634.service - OpenSSH per-connection server daemon (147.75.109.163:56634).
Jul 2 00:25:14.733134 sshd[4954]: Accepted publickey for core from 147.75.109.163 port 56634 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:14.744121 sshd[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:14.750278 systemd-logind[2080]: New session 12 of user core.
Jul 2 00:25:14.757113 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 00:25:14.958940 sshd[4954]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:14.964044 systemd-logind[2080]: Session 12 logged out. Waiting for processes to exit.
Jul 2 00:25:14.965177 systemd[1]: sshd@11-172.31.16.250:22-147.75.109.163:56634.service: Deactivated successfully.
Jul 2 00:25:14.972148 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 00:25:14.975973 systemd-logind[2080]: Removed session 12.
Jul 2 00:25:15.012653 systemd[1]: Started sshd@12-172.31.16.250:22-147.75.109.163:56644.service - OpenSSH per-connection server daemon (147.75.109.163:56644).
Jul 2 00:25:15.214780 sshd[4969]: Accepted publickey for core from 147.75.109.163 port 56644 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:15.217139 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:15.223014 systemd-logind[2080]: New session 13 of user core.
Jul 2 00:25:15.230135 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 00:25:16.783574 sshd[4969]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:16.800483 systemd-logind[2080]: Session 13 logged out. Waiting for processes to exit.
Jul 2 00:25:16.802386 systemd[1]: sshd@12-172.31.16.250:22-147.75.109.163:56644.service: Deactivated successfully.
Jul 2 00:25:16.823905 systemd[1]: Started sshd@13-172.31.16.250:22-147.75.109.163:56648.service - OpenSSH per-connection server daemon (147.75.109.163:56648).
Jul 2 00:25:16.824749 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 00:25:16.826783 systemd-logind[2080]: Removed session 13.
Jul 2 00:25:17.000330 sshd[4981]: Accepted publickey for core from 147.75.109.163 port 56648 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:17.001949 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:17.009335 systemd-logind[2080]: New session 14 of user core.
Jul 2 00:25:17.013016 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 00:25:17.251369 sshd[4981]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:17.255497 systemd[1]: sshd@13-172.31.16.250:22-147.75.109.163:56648.service: Deactivated successfully.
Jul 2 00:25:17.262395 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 00:25:17.262633 systemd-logind[2080]: Session 14 logged out. Waiting for processes to exit.
Jul 2 00:25:17.265644 systemd-logind[2080]: Removed session 14.
Jul 2 00:25:22.285027 systemd[1]: Started sshd@14-172.31.16.250:22-147.75.109.163:56662.service - OpenSSH per-connection server daemon (147.75.109.163:56662).
Jul 2 00:25:22.480271 sshd[4998]: Accepted publickey for core from 147.75.109.163 port 56662 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:22.488094 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:22.493866 systemd-logind[2080]: New session 15 of user core.
Jul 2 00:25:22.499409 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 00:25:22.699413 sshd[4998]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:22.705572 systemd[1]: sshd@14-172.31.16.250:22-147.75.109.163:56662.service: Deactivated successfully.
Jul 2 00:25:22.715223 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:25:22.716914 systemd-logind[2080]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:25:22.720415 systemd-logind[2080]: Removed session 15.
Jul 2 00:25:27.728171 systemd[1]: Started sshd@15-172.31.16.250:22-147.75.109.163:37810.service - OpenSSH per-connection server daemon (147.75.109.163:37810).
Jul 2 00:25:27.921206 sshd[5013]: Accepted publickey for core from 147.75.109.163 port 37810 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:27.925613 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:27.939718 systemd-logind[2080]: New session 16 of user core.
Jul 2 00:25:27.948015 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 00:25:28.244314 sshd[5013]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:28.249638 systemd-logind[2080]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:25:28.251966 systemd[1]: sshd@15-172.31.16.250:22-147.75.109.163:37810.service: Deactivated successfully.
Jul 2 00:25:28.264016 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:25:28.266115 systemd-logind[2080]: Removed session 16.
Jul 2 00:25:33.275218 systemd[1]: Started sshd@16-172.31.16.250:22-147.75.109.163:60068.service - OpenSSH per-connection server daemon (147.75.109.163:60068).
Jul 2 00:25:33.453724 sshd[5028]: Accepted publickey for core from 147.75.109.163 port 60068 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:33.464151 sshd[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:33.471762 systemd-logind[2080]: New session 17 of user core.
Jul 2 00:25:33.476479 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 00:25:33.733185 sshd[5028]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:33.740230 systemd[1]: sshd@16-172.31.16.250:22-147.75.109.163:60068.service: Deactivated successfully.
Jul 2 00:25:33.746015 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:25:33.747050 systemd-logind[2080]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:25:33.748397 systemd-logind[2080]: Removed session 17.
Jul 2 00:25:33.767581 systemd[1]: Started sshd@17-172.31.16.250:22-147.75.109.163:60072.service - OpenSSH per-connection server daemon (147.75.109.163:60072).
Jul 2 00:25:33.941736 sshd[5042]: Accepted publickey for core from 147.75.109.163 port 60072 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:33.943137 sshd[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:33.950243 systemd-logind[2080]: New session 18 of user core.
Jul 2 00:25:33.959043 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:25:34.902724 sshd[5042]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:34.908262 systemd[1]: sshd@17-172.31.16.250:22-147.75.109.163:60072.service: Deactivated successfully.
Jul 2 00:25:34.913772 systemd-logind[2080]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:25:34.914957 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:25:34.918995 systemd-logind[2080]: Removed session 18.
Jul 2 00:25:34.932391 systemd[1]: Started sshd@18-172.31.16.250:22-147.75.109.163:60086.service - OpenSSH per-connection server daemon (147.75.109.163:60086).
Jul 2 00:25:35.161865 sshd[5054]: Accepted publickey for core from 147.75.109.163 port 60086 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:35.164174 sshd[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:35.171024 systemd-logind[2080]: New session 19 of user core.
Jul 2 00:25:35.177124 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:25:35.353923 systemd[1]: Started sshd@19-172.31.16.250:22-185.116.229.69:36104.service - OpenSSH per-connection server daemon (185.116.229.69:36104).
Jul 2 00:25:36.477325 sshd[5054]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:36.492935 systemd[1]: sshd@18-172.31.16.250:22-147.75.109.163:60086.service: Deactivated successfully.
Jul 2 00:25:36.500528 systemd-logind[2080]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:25:36.501826 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:25:36.513056 systemd[1]: Started sshd@20-172.31.16.250:22-147.75.109.163:60100.service - OpenSSH per-connection server daemon (147.75.109.163:60100).
Jul 2 00:25:36.515162 systemd-logind[2080]: Removed session 19.
Jul 2 00:25:36.684849 sshd[5075]: Accepted publickey for core from 147.75.109.163 port 60100 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:36.686336 sshd[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:36.700113 systemd-logind[2080]: New session 20 of user core.
Jul 2 00:25:36.706999 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:25:37.323979 systemd-resolved[1991]: Under memory pressure, flushing caches.
Jul 2 00:25:37.329297 systemd-journald[1581]: Under memory pressure, flushing caches.
Jul 2 00:25:37.324099 systemd-resolved[1991]: Flushed all caches.
Jul 2 00:25:37.358733 sshd[5075]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:37.368607 systemd[1]: sshd@20-172.31.16.250:22-147.75.109.163:60100.service: Deactivated successfully.
Jul 2 00:25:37.375577 systemd-logind[2080]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:25:37.378406 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:25:37.394189 systemd[1]: Started sshd@21-172.31.16.250:22-147.75.109.163:60102.service - OpenSSH per-connection server daemon (147.75.109.163:60102).
Jul 2 00:25:37.395396 systemd-logind[2080]: Removed session 20.
Jul 2 00:25:37.569183 sshd[5087]: Accepted publickey for core from 147.75.109.163 port 60102 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:37.570070 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:37.576143 systemd-logind[2080]: New session 21 of user core.
Jul 2 00:25:37.582111 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:25:37.618564 sshd[5063]: Invalid user ftpuser from 185.116.229.69 port 36104
Jul 2 00:25:37.827791 sshd[5087]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:37.833374 systemd[1]: sshd@21-172.31.16.250:22-147.75.109.163:60102.service: Deactivated successfully.
Jul 2 00:25:37.841851 systemd-logind[2080]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:25:37.842033 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:25:37.847418 systemd-logind[2080]: Removed session 21.
Jul 2 00:25:38.105436 sshd[5101]: pam_faillock(sshd:auth): User unknown
Jul 2 00:25:38.119667 sshd[5063]: Postponed keyboard-interactive for invalid user ftpuser from 185.116.229.69 port 36104 ssh2 [preauth]
Jul 2 00:25:38.790893 sshd[5101]: pam_unix(sshd:auth): check pass; user unknown
Jul 2 00:25:38.790962 sshd[5101]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.116.229.69
Jul 2 00:25:38.792165 sshd[5101]: pam_faillock(sshd:auth): User unknown
Jul 2 00:25:40.693907 sshd[5063]: PAM: Permission denied for illegal user ftpuser from 185.116.229.69
Jul 2 00:25:40.695027 sshd[5063]: Failed keyboard-interactive/pam for invalid user ftpuser from 185.116.229.69 port 36104 ssh2
Jul 2 00:25:41.234758 sshd[5063]: Connection closed by invalid user ftpuser 185.116.229.69 port 36104 [preauth]
Jul 2 00:25:41.239621 systemd[1]: sshd@19-172.31.16.250:22-185.116.229.69:36104.service: Deactivated successfully.
Jul 2 00:25:42.868310 systemd[1]: Started sshd@22-172.31.16.250:22-147.75.109.163:45080.service - OpenSSH per-connection server daemon (147.75.109.163:45080).
Jul 2 00:25:43.036423 sshd[5105]: Accepted publickey for core from 147.75.109.163 port 45080 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:43.037078 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:43.052494 systemd-logind[2080]: New session 22 of user core.
Jul 2 00:25:43.066270 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:25:43.318478 sshd[5105]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:43.328727 systemd[1]: sshd@22-172.31.16.250:22-147.75.109.163:45080.service: Deactivated successfully.
Jul 2 00:25:43.339721 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:25:43.340529 systemd-logind[2080]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:25:43.346767 systemd-logind[2080]: Removed session 22.
Jul 2 00:25:48.352808 systemd[1]: Started sshd@23-172.31.16.250:22-147.75.109.163:45092.service - OpenSSH per-connection server daemon (147.75.109.163:45092).
Jul 2 00:25:48.557184 sshd[5124]: Accepted publickey for core from 147.75.109.163 port 45092 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:48.558835 sshd[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:48.564032 systemd-logind[2080]: New session 23 of user core.
Jul 2 00:25:48.569040 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:25:48.786422 sshd[5124]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:48.795711 systemd[1]: sshd@23-172.31.16.250:22-147.75.109.163:45092.service: Deactivated successfully.
Jul 2 00:25:48.804782 systemd-logind[2080]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:25:48.805726 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:25:48.807922 systemd-logind[2080]: Removed session 23.
Jul 2 00:25:53.818746 systemd[1]: Started sshd@24-172.31.16.250:22-147.75.109.163:44678.service - OpenSSH per-connection server daemon (147.75.109.163:44678).
Jul 2 00:25:54.009811 sshd[5138]: Accepted publickey for core from 147.75.109.163 port 44678 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:54.011023 sshd[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:54.036813 systemd-logind[2080]: New session 24 of user core.
Jul 2 00:25:54.044197 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:25:54.271062 sshd[5138]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:54.276754 systemd[1]: sshd@24-172.31.16.250:22-147.75.109.163:44678.service: Deactivated successfully.
Jul 2 00:25:54.290415 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:25:54.292360 systemd-logind[2080]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:25:54.293524 systemd-logind[2080]: Removed session 24.
Jul 2 00:25:59.302103 systemd[1]: Started sshd@25-172.31.16.250:22-147.75.109.163:44686.service - OpenSSH per-connection server daemon (147.75.109.163:44686).
Jul 2 00:25:59.492840 sshd[5152]: Accepted publickey for core from 147.75.109.163 port 44686 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:59.494827 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:59.501589 systemd-logind[2080]: New session 25 of user core.
Jul 2 00:25:59.507215 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:25:59.715552 sshd[5152]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:59.722921 systemd-logind[2080]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:25:59.723159 systemd[1]: sshd@25-172.31.16.250:22-147.75.109.163:44686.service: Deactivated successfully.
Jul 2 00:25:59.730197 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:25:59.733098 systemd-logind[2080]: Removed session 25.
Jul 2 00:25:59.745142 systemd[1]: Started sshd@26-172.31.16.250:22-147.75.109.163:44692.service - OpenSSH per-connection server daemon (147.75.109.163:44692).
Jul 2 00:25:59.921439 sshd[5166]: Accepted publickey for core from 147.75.109.163 port 44692 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk
Jul 2 00:25:59.922858 sshd[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:59.929121 systemd-logind[2080]: New session 26 of user core.
Jul 2 00:25:59.935157 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:26:02.298888 containerd[2110]: time="2024-07-02T00:26:02.298822825Z" level=info msg="StopContainer for \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\" with timeout 30 (s)"
Jul 2 00:26:02.423066 containerd[2110]: time="2024-07-02T00:26:02.422680922Z" level=info msg="Stop container \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\" with signal terminated"
Jul 2 00:26:02.626101 systemd[1]: run-containerd-runc-k8s.io-892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf-runc.lnAJn7.mount: Deactivated successfully.
Jul 2 00:26:02.733648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c-rootfs.mount: Deactivated successfully.
Jul 2 00:26:02.759982 containerd[2110]: time="2024-07-02T00:26:02.759881658Z" level=info msg="shim disconnected" id=f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c namespace=k8s.io
Jul 2 00:26:02.760224 containerd[2110]: time="2024-07-02T00:26:02.759994361Z" level=warning msg="cleaning up after shim disconnected" id=f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c namespace=k8s.io
Jul 2 00:26:02.760224 containerd[2110]: time="2024-07-02T00:26:02.760008752Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:26:02.778924 containerd[2110]: time="2024-07-02T00:26:02.778840967Z" level=info msg="StopContainer for \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\" with timeout 2 (s)"
Jul 2 00:26:02.787154 containerd[2110]: time="2024-07-02T00:26:02.787102898Z" level=info msg="Stop container \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\" with signal terminated"
Jul 2 00:26:02.801729 containerd[2110]: time="2024-07-02T00:26:02.801602061Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:26:02.836126 systemd-networkd[1671]: lxc_health: Link DOWN
Jul 2 00:26:02.836139 systemd-networkd[1671]: lxc_health: Lost carrier
Jul 2 00:26:02.970055 containerd[2110]: time="2024-07-02T00:26:02.969584892Z" level=info msg="StopContainer for \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\" returns successfully"
Jul 2 00:26:02.975876 containerd[2110]: time="2024-07-02T00:26:02.975829397Z" level=info msg="StopPodSandbox for \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\""
Jul 2 00:26:02.991967 containerd[2110]: time="2024-07-02T00:26:02.975893850Z" level=info msg="Container to stop \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:26:02.990806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf-rootfs.mount: Deactivated successfully.
Jul 2 00:26:02.996987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd-shm.mount: Deactivated successfully.
Jul 2 00:26:03.071506 containerd[2110]: time="2024-07-02T00:26:03.071267082Z" level=info msg="shim disconnected" id=892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf namespace=k8s.io Jul 2 00:26:03.071506 containerd[2110]: time="2024-07-02T00:26:03.071331458Z" level=warning msg="cleaning up after shim disconnected" id=892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf namespace=k8s.io Jul 2 00:26:03.071506 containerd[2110]: time="2024-07-02T00:26:03.071344342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:03.072464 containerd[2110]: time="2024-07-02T00:26:03.072241597Z" level=info msg="shim disconnected" id=e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd namespace=k8s.io Jul 2 00:26:03.072464 containerd[2110]: time="2024-07-02T00:26:03.072296874Z" level=warning msg="cleaning up after shim disconnected" id=e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd namespace=k8s.io Jul 2 00:26:03.072464 containerd[2110]: time="2024-07-02T00:26:03.072308810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:03.122475 containerd[2110]: time="2024-07-02T00:26:03.122217576Z" level=info msg="TearDown network for sandbox \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\" successfully" Jul 2 00:26:03.122475 containerd[2110]: time="2024-07-02T00:26:03.122256929Z" level=info msg="StopPodSandbox for \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\" returns successfully" Jul 2 00:26:03.136148 containerd[2110]: time="2024-07-02T00:26:03.136104358Z" level=info msg="StopContainer for \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\" returns successfully" Jul 2 00:26:03.137629 containerd[2110]: time="2024-07-02T00:26:03.137594627Z" level=info msg="StopPodSandbox for \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\"" Jul 2 00:26:03.137918 containerd[2110]: time="2024-07-02T00:26:03.137848171Z" level=info 
msg="Container to stop \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:26:03.138022 containerd[2110]: time="2024-07-02T00:26:03.138005598Z" level=info msg="Container to stop \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:26:03.138110 containerd[2110]: time="2024-07-02T00:26:03.138094416Z" level=info msg="Container to stop \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:26:03.138189 containerd[2110]: time="2024-07-02T00:26:03.138174125Z" level=info msg="Container to stop \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:26:03.138343 containerd[2110]: time="2024-07-02T00:26:03.138253929Z" level=info msg="Container to stop \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:26:03.180338 containerd[2110]: time="2024-07-02T00:26:03.180114591Z" level=info msg="shim disconnected" id=8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9 namespace=k8s.io Jul 2 00:26:03.180338 containerd[2110]: time="2024-07-02T00:26:03.180183439Z" level=warning msg="cleaning up after shim disconnected" id=8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9 namespace=k8s.io Jul 2 00:26:03.180338 containerd[2110]: time="2024-07-02T00:26:03.180195823Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:03.207250 containerd[2110]: time="2024-07-02T00:26:03.207162171Z" level=info msg="TearDown network for sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" successfully" Jul 2 00:26:03.207250 containerd[2110]: 
time="2024-07-02T00:26:03.207219564Z" level=info msg="StopPodSandbox for \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" returns successfully" Jul 2 00:26:03.279850 kubelet[3544]: I0702 00:26:03.276501 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzb22\" (UniqueName: \"kubernetes.io/projected/8a020cff-d5a1-4004-b88c-e94a452d3f75-kube-api-access-jzb22\") pod \"8a020cff-d5a1-4004-b88c-e94a452d3f75\" (UID: \"8a020cff-d5a1-4004-b88c-e94a452d3f75\") " Jul 2 00:26:03.279850 kubelet[3544]: I0702 00:26:03.276610 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a020cff-d5a1-4004-b88c-e94a452d3f75-cilium-config-path\") pod \"8a020cff-d5a1-4004-b88c-e94a452d3f75\" (UID: \"8a020cff-d5a1-4004-b88c-e94a452d3f75\") " Jul 2 00:26:03.302739 kubelet[3544]: I0702 00:26:03.300858 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a020cff-d5a1-4004-b88c-e94a452d3f75-kube-api-access-jzb22" (OuterVolumeSpecName: "kube-api-access-jzb22") pod "8a020cff-d5a1-4004-b88c-e94a452d3f75" (UID: "8a020cff-d5a1-4004-b88c-e94a452d3f75"). InnerVolumeSpecName "kube-api-access-jzb22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:26:03.303384 kubelet[3544]: I0702 00:26:03.300404 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a020cff-d5a1-4004-b88c-e94a452d3f75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a020cff-d5a1-4004-b88c-e94a452d3f75" (UID: "8a020cff-d5a1-4004-b88c-e94a452d3f75"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:26:03.372457 systemd-resolved[1991]: Under memory pressure, flushing caches. Jul 2 00:26:03.372983 systemd-journald[1581]: Under memory pressure, flushing caches. 
Jul 2 00:26:03.372498 systemd-resolved[1991]: Flushed all caches. Jul 2 00:26:03.378504 kubelet[3544]: I0702 00:26:03.377185 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-lib-modules\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.378504 kubelet[3544]: I0702 00:26:03.377247 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-run\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.378504 kubelet[3544]: I0702 00:26:03.377280 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cni-path\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.378504 kubelet[3544]: I0702 00:26:03.377308 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-hostproc\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.378504 kubelet[3544]: I0702 00:26:03.377341 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-cgroup\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.378504 kubelet[3544]: I0702 00:26:03.377379 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/e5c5c798-fdda-4fa1-834a-915983dd4e31-clustermesh-secrets\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.379154 kubelet[3544]: I0702 00:26:03.377572 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-etc-cni-netd\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.379154 kubelet[3544]: I0702 00:26:03.377742 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5cs9\" (UniqueName: \"kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-kube-api-access-q5cs9\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.379154 kubelet[3544]: I0702 00:26:03.377778 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-xtables-lock\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.379154 kubelet[3544]: I0702 00:26:03.377810 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-host-proc-sys-kernel\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.379154 kubelet[3544]: I0702 00:26:03.377841 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-host-proc-sys-net\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.379154 kubelet[3544]: I0702 
00:26:03.377869 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-bpf-maps\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.379563 kubelet[3544]: I0702 00:26:03.377902 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-config-path\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.379563 kubelet[3544]: I0702 00:26:03.377933 3544 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-hubble-tls\") pod \"e5c5c798-fdda-4fa1-834a-915983dd4e31\" (UID: \"e5c5c798-fdda-4fa1-834a-915983dd4e31\") " Jul 2 00:26:03.379563 kubelet[3544]: I0702 00:26:03.377985 3544 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jzb22\" (UniqueName: \"kubernetes.io/projected/8a020cff-d5a1-4004-b88c-e94a452d3f75-kube-api-access-jzb22\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.379563 kubelet[3544]: I0702 00:26:03.377999 3544 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a020cff-d5a1-4004-b88c-e94a452d3f75-cilium-config-path\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.380600 kubelet[3544]: I0702 00:26:03.380011 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.380600 kubelet[3544]: I0702 00:26:03.380083 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.380600 kubelet[3544]: I0702 00:26:03.380157 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cni-path" (OuterVolumeSpecName: "cni-path") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.380600 kubelet[3544]: I0702 00:26:03.380186 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-hostproc" (OuterVolumeSpecName: "hostproc") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.380600 kubelet[3544]: I0702 00:26:03.380209 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.395670 kubelet[3544]: I0702 00:26:03.395619 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:26:03.397885 kubelet[3544]: I0702 00:26:03.395740 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.397885 kubelet[3544]: I0702 00:26:03.395774 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.400418 kubelet[3544]: I0702 00:26:03.400375 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.401732 kubelet[3544]: I0702 00:26:03.400403 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.402452 kubelet[3544]: I0702 00:26:03.402416 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:03.405130 kubelet[3544]: I0702 00:26:03.405087 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-kube-api-access-q5cs9" (OuterVolumeSpecName: "kube-api-access-q5cs9") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "kube-api-access-q5cs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:26:03.407704 kubelet[3544]: I0702 00:26:03.407621 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:26:03.415208 kubelet[3544]: I0702 00:26:03.415152 3544 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c5c798-fdda-4fa1-834a-915983dd4e31-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e5c5c798-fdda-4fa1-834a-915983dd4e31" (UID: "e5c5c798-fdda-4fa1-834a-915983dd4e31"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:26:03.478623 kubelet[3544]: I0702 00:26:03.478568 3544 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-etc-cni-netd\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478623 kubelet[3544]: I0702 00:26:03.478609 3544 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-cgroup\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478623 kubelet[3544]: I0702 00:26:03.478630 3544 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5c5c798-fdda-4fa1-834a-915983dd4e31-clustermesh-secrets\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478883 kubelet[3544]: I0702 00:26:03.478645 3544 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-host-proc-sys-kernel\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478883 kubelet[3544]: I0702 00:26:03.478659 3544 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q5cs9\" (UniqueName: \"kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-kube-api-access-q5cs9\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478883 kubelet[3544]: I0702 00:26:03.478672 3544 
reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-xtables-lock\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478883 kubelet[3544]: I0702 00:26:03.478707 3544 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5c5c798-fdda-4fa1-834a-915983dd4e31-hubble-tls\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478883 kubelet[3544]: I0702 00:26:03.478724 3544 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-host-proc-sys-net\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478883 kubelet[3544]: I0702 00:26:03.478739 3544 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-bpf-maps\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478883 kubelet[3544]: I0702 00:26:03.478752 3544 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-config-path\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.478883 kubelet[3544]: I0702 00:26:03.478764 3544 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-lib-modules\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.479136 kubelet[3544]: I0702 00:26:03.478778 3544 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cilium-run\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.479136 kubelet[3544]: I0702 00:26:03.478791 3544 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-cni-path\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.479136 kubelet[3544]: I0702 00:26:03.478804 3544 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5c5c798-fdda-4fa1-834a-915983dd4e31-hostproc\") on node \"ip-172-31-16-250\" DevicePath \"\"" Jul 2 00:26:03.579868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9-rootfs.mount: Deactivated successfully. Jul 2 00:26:03.580551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9-shm.mount: Deactivated successfully. Jul 2 00:26:03.582498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd-rootfs.mount: Deactivated successfully. Jul 2 00:26:03.584115 systemd[1]: var-lib-kubelet-pods-e5c5c798\x2dfdda\x2d4fa1\x2d834a\x2d915983dd4e31-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq5cs9.mount: Deactivated successfully. Jul 2 00:26:03.584400 systemd[1]: var-lib-kubelet-pods-8a020cff\x2dd5a1\x2d4004\x2db88c\x2de94a452d3f75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djzb22.mount: Deactivated successfully. Jul 2 00:26:03.585785 systemd[1]: var-lib-kubelet-pods-e5c5c798\x2dfdda\x2d4fa1\x2d834a\x2d915983dd4e31-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:26:03.587184 systemd[1]: var-lib-kubelet-pods-e5c5c798\x2dfdda\x2d4fa1\x2d834a\x2d915983dd4e31-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 00:26:03.654828 kubelet[3544]: I0702 00:26:03.654145 3544 scope.go:117] "RemoveContainer" containerID="892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf" Jul 2 00:26:03.691316 containerd[2110]: time="2024-07-02T00:26:03.691278844Z" level=info msg="RemoveContainer for \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\"" Jul 2 00:26:03.715374 containerd[2110]: time="2024-07-02T00:26:03.715323379Z" level=info msg="RemoveContainer for \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\" returns successfully" Jul 2 00:26:03.718218 kubelet[3544]: I0702 00:26:03.716959 3544 scope.go:117] "RemoveContainer" containerID="28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842" Jul 2 00:26:03.720459 containerd[2110]: time="2024-07-02T00:26:03.720418974Z" level=info msg="RemoveContainer for \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\"" Jul 2 00:26:03.728433 containerd[2110]: time="2024-07-02T00:26:03.728349803Z" level=info msg="RemoveContainer for \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\" returns successfully" Jul 2 00:26:03.730189 kubelet[3544]: I0702 00:26:03.730154 3544 scope.go:117] "RemoveContainer" containerID="de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0" Jul 2 00:26:03.733090 containerd[2110]: time="2024-07-02T00:26:03.733005188Z" level=info msg="RemoveContainer for \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\"" Jul 2 00:26:03.739645 containerd[2110]: time="2024-07-02T00:26:03.739570617Z" level=info msg="RemoveContainer for \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\" returns successfully" Jul 2 00:26:03.740411 kubelet[3544]: I0702 00:26:03.740383 3544 scope.go:117] "RemoveContainer" containerID="31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a" Jul 2 00:26:03.744707 containerd[2110]: time="2024-07-02T00:26:03.744559486Z" level=info msg="RemoveContainer for 
\"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\"" Jul 2 00:26:03.750820 containerd[2110]: time="2024-07-02T00:26:03.750777267Z" level=info msg="RemoveContainer for \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\" returns successfully" Jul 2 00:26:03.751441 kubelet[3544]: I0702 00:26:03.751408 3544 scope.go:117] "RemoveContainer" containerID="7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221" Jul 2 00:26:03.752771 containerd[2110]: time="2024-07-02T00:26:03.752740944Z" level=info msg="RemoveContainer for \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\"" Jul 2 00:26:03.758634 containerd[2110]: time="2024-07-02T00:26:03.758590415Z" level=info msg="RemoveContainer for \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\" returns successfully" Jul 2 00:26:03.758978 kubelet[3544]: I0702 00:26:03.758950 3544 scope.go:117] "RemoveContainer" containerID="892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf" Jul 2 00:26:03.759790 kubelet[3544]: E0702 00:26:03.759719 3544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\": not found" containerID="892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf" Jul 2 00:26:03.759875 containerd[2110]: time="2024-07-02T00:26:03.759518881Z" level=error msg="ContainerStatus for \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\": not found" Jul 2 00:26:03.772449 kubelet[3544]: I0702 00:26:03.772407 3544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf"} err="failed to 
get container status \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\": rpc error: code = NotFound desc = an error occurred when try to find container \"892ebd3c132a33ba95a9b5beb71293b7eef6d4b12eb3c7bee5628dd50cdcfadf\": not found" Jul 2 00:26:03.772449 kubelet[3544]: I0702 00:26:03.772454 3544 scope.go:117] "RemoveContainer" containerID="28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842" Jul 2 00:26:03.772929 containerd[2110]: time="2024-07-02T00:26:03.772878475Z" level=error msg="ContainerStatus for \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\": not found" Jul 2 00:26:03.773165 kubelet[3544]: E0702 00:26:03.773144 3544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\": not found" containerID="28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842" Jul 2 00:26:03.773345 kubelet[3544]: I0702 00:26:03.773188 3544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842"} err="failed to get container status \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\": rpc error: code = NotFound desc = an error occurred when try to find container \"28d5f7d75e0b99f4e2b9e94f1035b1e4cbd0b92768f53ed1ca3e1e1e882fd842\": not found" Jul 2 00:26:03.773345 kubelet[3544]: I0702 00:26:03.773205 3544 scope.go:117] "RemoveContainer" containerID="de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0" Jul 2 00:26:03.773678 containerd[2110]: time="2024-07-02T00:26:03.773623479Z" level=error msg="ContainerStatus for 
\"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\": not found" Jul 2 00:26:03.773873 kubelet[3544]: E0702 00:26:03.773854 3544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\": not found" containerID="de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0" Jul 2 00:26:03.773955 kubelet[3544]: I0702 00:26:03.773892 3544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0"} err="failed to get container status \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"de571e294a383f5568ff35f36527b3ac26f998acd1033fb39ecbfbc5247d14e0\": not found" Jul 2 00:26:03.773955 kubelet[3544]: I0702 00:26:03.773907 3544 scope.go:117] "RemoveContainer" containerID="31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a" Jul 2 00:26:03.774172 containerd[2110]: time="2024-07-02T00:26:03.774138277Z" level=error msg="ContainerStatus for \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\": not found" Jul 2 00:26:03.774286 kubelet[3544]: E0702 00:26:03.774262 3544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\": not found" 
containerID="31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a" Jul 2 00:26:03.774400 kubelet[3544]: I0702 00:26:03.774365 3544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a"} err="failed to get container status \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\": rpc error: code = NotFound desc = an error occurred when try to find container \"31652df14b404c307c40ecf0e6ae6402241eafabd1539e7f2fa4f7abbe22059a\": not found" Jul 2 00:26:03.774400 kubelet[3544]: I0702 00:26:03.774384 3544 scope.go:117] "RemoveContainer" containerID="7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221" Jul 2 00:26:03.774622 containerd[2110]: time="2024-07-02T00:26:03.774591392Z" level=error msg="ContainerStatus for \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\": not found" Jul 2 00:26:03.775096 kubelet[3544]: E0702 00:26:03.775076 3544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\": not found" containerID="7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221" Jul 2 00:26:03.775178 kubelet[3544]: I0702 00:26:03.775111 3544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221"} err="failed to get container status \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ab90020e3a3e0ebff41191088027ed645b5004a03e4638fd80e6db724703221\": not found" Jul 2 
00:26:03.775178 kubelet[3544]: I0702 00:26:03.775127 3544 scope.go:117] "RemoveContainer" containerID="f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c" Jul 2 00:26:03.777023 containerd[2110]: time="2024-07-02T00:26:03.776984343Z" level=info msg="RemoveContainer for \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\"" Jul 2 00:26:03.783384 containerd[2110]: time="2024-07-02T00:26:03.783338845Z" level=info msg="RemoveContainer for \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\" returns successfully" Jul 2 00:26:03.783779 kubelet[3544]: I0702 00:26:03.783749 3544 scope.go:117] "RemoveContainer" containerID="f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c" Jul 2 00:26:03.784214 containerd[2110]: time="2024-07-02T00:26:03.784169922Z" level=error msg="ContainerStatus for \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\": not found" Jul 2 00:26:03.784360 kubelet[3544]: E0702 00:26:03.784346 3544 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\": not found" containerID="f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c" Jul 2 00:26:03.784457 kubelet[3544]: I0702 00:26:03.784386 3544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c"} err="failed to get container status \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2b84302b0f4b85930894874b55ef0dce56e074c19f9e7185bb3576f6c91ed9c\": not found" Jul 2 00:26:03.993095 
sshd[5166]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:04.001528 systemd[1]: sshd@26-172.31.16.250:22-147.75.109.163:44692.service: Deactivated successfully. Jul 2 00:26:04.011473 systemd-logind[2080]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:26:04.012473 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:26:04.025353 systemd[1]: Started sshd@27-172.31.16.250:22-147.75.109.163:57478.service - OpenSSH per-connection server daemon (147.75.109.163:57478). Jul 2 00:26:04.027500 systemd-logind[2080]: Removed session 26. Jul 2 00:26:04.066604 kubelet[3544]: I0702 00:26:04.066449 3544 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8a020cff-d5a1-4004-b88c-e94a452d3f75" path="/var/lib/kubelet/pods/8a020cff-d5a1-4004-b88c-e94a452d3f75/volumes" Jul 2 00:26:04.069898 kubelet[3544]: I0702 00:26:04.069662 3544 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e5c5c798-fdda-4fa1-834a-915983dd4e31" path="/var/lib/kubelet/pods/e5c5c798-fdda-4fa1-834a-915983dd4e31/volumes" Jul 2 00:26:04.094408 containerd[2110]: time="2024-07-02T00:26:04.094371290Z" level=info msg="StopPodSandbox for \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\"" Jul 2 00:26:04.094598 containerd[2110]: time="2024-07-02T00:26:04.094521411Z" level=info msg="TearDown network for sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" successfully" Jul 2 00:26:04.094598 containerd[2110]: time="2024-07-02T00:26:04.094538771Z" level=info msg="StopPodSandbox for \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" returns successfully" Jul 2 00:26:04.095086 containerd[2110]: time="2024-07-02T00:26:04.095055474Z" level=info msg="RemovePodSandbox for \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\"" Jul 2 00:26:04.095265 containerd[2110]: time="2024-07-02T00:26:04.095090384Z" level=info msg="Forcibly stopping sandbox 
\"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\"" Jul 2 00:26:04.095422 containerd[2110]: time="2024-07-02T00:26:04.095157285Z" level=info msg="TearDown network for sandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" successfully" Jul 2 00:26:04.109759 containerd[2110]: time="2024-07-02T00:26:04.109653017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:26:04.110028 containerd[2110]: time="2024-07-02T00:26:04.109795384Z" level=info msg="RemovePodSandbox \"8fbee8ee7b45c56f2d758374b019ba75e5538444ee4b0d9a7b0d13c5672f6ae9\" returns successfully" Jul 2 00:26:04.110604 containerd[2110]: time="2024-07-02T00:26:04.110567267Z" level=info msg="StopPodSandbox for \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\"" Jul 2 00:26:04.110794 containerd[2110]: time="2024-07-02T00:26:04.110751837Z" level=info msg="TearDown network for sandbox \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\" successfully" Jul 2 00:26:04.110794 containerd[2110]: time="2024-07-02T00:26:04.110773138Z" level=info msg="StopPodSandbox for \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\" returns successfully" Jul 2 00:26:04.113710 containerd[2110]: time="2024-07-02T00:26:04.111114195Z" level=info msg="RemovePodSandbox for \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\"" Jul 2 00:26:04.113710 containerd[2110]: time="2024-07-02T00:26:04.111149952Z" level=info msg="Forcibly stopping sandbox \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\"" Jul 2 00:26:04.113710 containerd[2110]: time="2024-07-02T00:26:04.111275253Z" level=info msg="TearDown network for sandbox \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\" successfully" Jul 2 
00:26:04.117538 containerd[2110]: time="2024-07-02T00:26:04.117493585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:26:04.117773 containerd[2110]: time="2024-07-02T00:26:04.117752951Z" level=info msg="RemovePodSandbox \"e98608e6ded81f02169ba0db3af262d3e3f04439de88585f2c0a0a2e2043defd\" returns successfully" Jul 2 00:26:04.231520 sshd[5334]: Accepted publickey for core from 147.75.109.163 port 57478 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:26:04.234379 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:04.247133 systemd-logind[2080]: New session 27 of user core. Jul 2 00:26:04.254531 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:26:04.324401 kubelet[3544]: E0702 00:26:04.324362 3544 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:26:05.160369 sshd[5334]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:05.171128 systemd[1]: sshd@27-172.31.16.250:22-147.75.109.163:57478.service: Deactivated successfully. 
Jul 2 00:26:05.174604 kubelet[3544]: I0702 00:26:05.174472 3544 topology_manager.go:215] "Topology Admit Handler" podUID="fb1d1baa-a80b-4d7f-9993-de0fa0b617ed" podNamespace="kube-system" podName="cilium-dxm2q" Jul 2 00:26:05.181761 kubelet[3544]: E0702 00:26:05.181719 3544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5c5c798-fdda-4fa1-834a-915983dd4e31" containerName="cilium-agent" Jul 2 00:26:05.181909 kubelet[3544]: E0702 00:26:05.181798 3544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5c5c798-fdda-4fa1-834a-915983dd4e31" containerName="clean-cilium-state" Jul 2 00:26:05.181909 kubelet[3544]: E0702 00:26:05.181813 3544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5c5c798-fdda-4fa1-834a-915983dd4e31" containerName="mount-cgroup" Jul 2 00:26:05.181909 kubelet[3544]: E0702 00:26:05.181828 3544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5c5c798-fdda-4fa1-834a-915983dd4e31" containerName="apply-sysctl-overwrites" Jul 2 00:26:05.181909 kubelet[3544]: E0702 00:26:05.181838 3544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5c5c798-fdda-4fa1-834a-915983dd4e31" containerName="mount-bpf-fs" Jul 2 00:26:05.181909 kubelet[3544]: E0702 00:26:05.181863 3544 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a020cff-d5a1-4004-b88c-e94a452d3f75" containerName="cilium-operator" Jul 2 00:26:05.184833 kubelet[3544]: I0702 00:26:05.184798 3544 memory_manager.go:346] "RemoveStaleState removing state" podUID="8a020cff-d5a1-4004-b88c-e94a452d3f75" containerName="cilium-operator" Jul 2 00:26:05.184949 kubelet[3544]: I0702 00:26:05.184848 3544 memory_manager.go:346] "RemoveStaleState removing state" podUID="e5c5c798-fdda-4fa1-834a-915983dd4e31" containerName="cilium-agent" Jul 2 00:26:05.185642 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:26:05.195026 systemd-logind[2080]: Session 27 logged out. Waiting for processes to exit. 
Jul 2 00:26:05.229003 systemd[1]: Started sshd@28-172.31.16.250:22-147.75.109.163:57480.service - OpenSSH per-connection server daemon (147.75.109.163:57480). Jul 2 00:26:05.233408 systemd-logind[2080]: Removed session 27. Jul 2 00:26:05.302954 kubelet[3544]: I0702 00:26:05.302927 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-etc-cni-netd\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.303762 kubelet[3544]: I0702 00:26:05.303122 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-host-proc-sys-kernel\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.303762 kubelet[3544]: I0702 00:26:05.303149 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-cilium-config-path\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305556 kubelet[3544]: I0702 00:26:05.304917 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scgt7\" (UniqueName: \"kubernetes.io/projected/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-kube-api-access-scgt7\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305556 kubelet[3544]: I0702 00:26:05.304984 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-cilium-run\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305556 kubelet[3544]: I0702 00:26:05.305020 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-bpf-maps\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305556 kubelet[3544]: I0702 00:26:05.305050 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-hostproc\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305556 kubelet[3544]: I0702 00:26:05.305078 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-cni-path\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305556 kubelet[3544]: I0702 00:26:05.305107 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-xtables-lock\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305923 kubelet[3544]: I0702 00:26:05.305136 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-lib-modules\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " 
pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305923 kubelet[3544]: I0702 00:26:05.305163 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-host-proc-sys-net\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305923 kubelet[3544]: I0702 00:26:05.305191 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-cilium-cgroup\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305923 kubelet[3544]: I0702 00:26:05.305221 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-clustermesh-secrets\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305923 kubelet[3544]: I0702 00:26:05.305247 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-cilium-ipsec-secrets\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.305923 kubelet[3544]: I0702 00:26:05.305319 3544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb1d1baa-a80b-4d7f-9993-de0fa0b617ed-hubble-tls\") pod \"cilium-dxm2q\" (UID: \"fb1d1baa-a80b-4d7f-9993-de0fa0b617ed\") " pod="kube-system/cilium-dxm2q" Jul 2 00:26:05.428175 systemd-journald[1581]: Under memory pressure, flushing caches. 
Jul 2 00:26:05.418985 systemd-resolved[1991]: Under memory pressure, flushing caches. Jul 2 00:26:05.418994 systemd-resolved[1991]: Flushed all caches. Jul 2 00:26:05.499856 sshd[5348]: Accepted publickey for core from 147.75.109.163 port 57480 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:26:05.502016 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:05.519391 systemd-logind[2080]: New session 28 of user core. Jul 2 00:26:05.523216 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 00:26:05.554853 containerd[2110]: time="2024-07-02T00:26:05.554356965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxm2q,Uid:fb1d1baa-a80b-4d7f-9993-de0fa0b617ed,Namespace:kube-system,Attempt:0,}" Jul 2 00:26:05.597008 containerd[2110]: time="2024-07-02T00:26:05.596668290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:26:05.597008 containerd[2110]: time="2024-07-02T00:26:05.596763182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:05.597008 containerd[2110]: time="2024-07-02T00:26:05.596778947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:26:05.597008 containerd[2110]: time="2024-07-02T00:26:05.596788350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:05.664454 sshd[5348]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:05.676120 systemd[1]: sshd@28-172.31.16.250:22-147.75.109.163:57480.service: Deactivated successfully. Jul 2 00:26:05.683493 systemd-logind[2080]: Session 28 logged out. Waiting for processes to exit. 
Jul 2 00:26:05.693038 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:26:05.698841 containerd[2110]: time="2024-07-02T00:26:05.698800223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxm2q,Uid:fb1d1baa-a80b-4d7f-9993-de0fa0b617ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\"" Jul 2 00:26:05.705141 systemd[1]: Started sshd@29-172.31.16.250:22-147.75.109.163:57486.service - OpenSSH per-connection server daemon (147.75.109.163:57486). Jul 2 00:26:05.707977 systemd-logind[2080]: Removed session 28. Jul 2 00:26:05.728571 containerd[2110]: time="2024-07-02T00:26:05.727802020Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:26:05.781349 containerd[2110]: time="2024-07-02T00:26:05.781210536Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cbc5b715b0ef51e15f187794242a2ecc14b7cc8afa942e6187b4b1a51cfbcf02\"" Jul 2 00:26:05.784782 containerd[2110]: time="2024-07-02T00:26:05.784742762Z" level=info msg="StartContainer for \"cbc5b715b0ef51e15f187794242a2ecc14b7cc8afa942e6187b4b1a51cfbcf02\"" Jul 2 00:26:05.805905 ntpd[2062]: Deleting interface #10 lxc_health, fe80::28b1:64ff:fe49:14ce%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Jul 2 00:26:05.806764 ntpd[2062]: 2 Jul 00:26:05 ntpd[2062]: Deleting interface #10 lxc_health, fe80::28b1:64ff:fe49:14ce%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Jul 2 00:26:05.876372 containerd[2110]: time="2024-07-02T00:26:05.876175593Z" level=info msg="StartContainer for \"cbc5b715b0ef51e15f187794242a2ecc14b7cc8afa942e6187b4b1a51cfbcf02\" returns successfully" Jul 2 00:26:05.906850 sshd[5405]: 
Accepted publickey for core from 147.75.109.163 port 57486 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:26:05.920183 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:05.949143 systemd-logind[2080]: New session 29 of user core. Jul 2 00:26:05.956259 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 00:26:06.038542 containerd[2110]: time="2024-07-02T00:26:06.038354756Z" level=info msg="shim disconnected" id=cbc5b715b0ef51e15f187794242a2ecc14b7cc8afa942e6187b4b1a51cfbcf02 namespace=k8s.io Jul 2 00:26:06.038542 containerd[2110]: time="2024-07-02T00:26:06.038536423Z" level=warning msg="cleaning up after shim disconnected" id=cbc5b715b0ef51e15f187794242a2ecc14b7cc8afa942e6187b4b1a51cfbcf02 namespace=k8s.io Jul 2 00:26:06.038542 containerd[2110]: time="2024-07-02T00:26:06.038551384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:06.760136 containerd[2110]: time="2024-07-02T00:26:06.760090619Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:26:06.797067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286899883.mount: Deactivated successfully. 
Jul 2 00:26:06.801683 containerd[2110]: time="2024-07-02T00:26:06.798066683Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"30c0f56d171b3f6e90b66efe0489ba66ab4dfe8519371dc5e484aaf56e916a2a\"" Jul 2 00:26:06.805044 containerd[2110]: time="2024-07-02T00:26:06.802044571Z" level=info msg="StartContainer for \"30c0f56d171b3f6e90b66efe0489ba66ab4dfe8519371dc5e484aaf56e916a2a\"" Jul 2 00:26:06.914050 containerd[2110]: time="2024-07-02T00:26:06.913938779Z" level=info msg="StartContainer for \"30c0f56d171b3f6e90b66efe0489ba66ab4dfe8519371dc5e484aaf56e916a2a\" returns successfully" Jul 2 00:26:06.971032 containerd[2110]: time="2024-07-02T00:26:06.970556553Z" level=info msg="shim disconnected" id=30c0f56d171b3f6e90b66efe0489ba66ab4dfe8519371dc5e484aaf56e916a2a namespace=k8s.io Jul 2 00:26:06.971032 containerd[2110]: time="2024-07-02T00:26:06.970627325Z" level=warning msg="cleaning up after shim disconnected" id=30c0f56d171b3f6e90b66efe0489ba66ab4dfe8519371dc5e484aaf56e916a2a namespace=k8s.io Jul 2 00:26:06.971614 containerd[2110]: time="2024-07-02T00:26:06.971040753Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:07.074011 kubelet[3544]: I0702 00:26:07.073302 3544 setters.go:552] "Node became not ready" node="ip-172-31-16-250" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:26:07Z","lastTransitionTime":"2024-07-02T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 00:26:07.414138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30c0f56d171b3f6e90b66efe0489ba66ab4dfe8519371dc5e484aaf56e916a2a-rootfs.mount: Deactivated successfully. 
Jul 2 00:26:07.755725 containerd[2110]: time="2024-07-02T00:26:07.755367018Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:26:07.797697 containerd[2110]: time="2024-07-02T00:26:07.797587847Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba8e24fa1a26f85d79787d22e50b5edb8e442e211166a66546c5e04891f707c7\"" Jul 2 00:26:07.798759 containerd[2110]: time="2024-07-02T00:26:07.798725614Z" level=info msg="StartContainer for \"ba8e24fa1a26f85d79787d22e50b5edb8e442e211166a66546c5e04891f707c7\"" Jul 2 00:26:07.980081 containerd[2110]: time="2024-07-02T00:26:07.977770892Z" level=info msg="StartContainer for \"ba8e24fa1a26f85d79787d22e50b5edb8e442e211166a66546c5e04891f707c7\" returns successfully" Jul 2 00:26:08.068133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba8e24fa1a26f85d79787d22e50b5edb8e442e211166a66546c5e04891f707c7-rootfs.mount: Deactivated successfully. 
Jul 2 00:26:08.085926 containerd[2110]: time="2024-07-02T00:26:08.085852955Z" level=info msg="shim disconnected" id=ba8e24fa1a26f85d79787d22e50b5edb8e442e211166a66546c5e04891f707c7 namespace=k8s.io Jul 2 00:26:08.086266 containerd[2110]: time="2024-07-02T00:26:08.085925990Z" level=warning msg="cleaning up after shim disconnected" id=ba8e24fa1a26f85d79787d22e50b5edb8e442e211166a66546c5e04891f707c7 namespace=k8s.io Jul 2 00:26:08.086266 containerd[2110]: time="2024-07-02T00:26:08.086018502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:08.782546 containerd[2110]: time="2024-07-02T00:26:08.776635790Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:26:08.819386 containerd[2110]: time="2024-07-02T00:26:08.819342327Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c41fa1ecc952ef22b81e6729369b3d3eab583af86e911d22ec367bf93baf32e3\"" Jul 2 00:26:08.822207 containerd[2110]: time="2024-07-02T00:26:08.820431778Z" level=info msg="StartContainer for \"c41fa1ecc952ef22b81e6729369b3d3eab583af86e911d22ec367bf93baf32e3\"" Jul 2 00:26:08.910813 containerd[2110]: time="2024-07-02T00:26:08.908928461Z" level=info msg="StartContainer for \"c41fa1ecc952ef22b81e6729369b3d3eab583af86e911d22ec367bf93baf32e3\" returns successfully" Jul 2 00:26:08.934710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c41fa1ecc952ef22b81e6729369b3d3eab583af86e911d22ec367bf93baf32e3-rootfs.mount: Deactivated successfully. 
Jul 2 00:26:08.945342 containerd[2110]: time="2024-07-02T00:26:08.945274724Z" level=info msg="shim disconnected" id=c41fa1ecc952ef22b81e6729369b3d3eab583af86e911d22ec367bf93baf32e3 namespace=k8s.io Jul 2 00:26:08.945342 containerd[2110]: time="2024-07-02T00:26:08.945334174Z" level=warning msg="cleaning up after shim disconnected" id=c41fa1ecc952ef22b81e6729369b3d3eab583af86e911d22ec367bf93baf32e3 namespace=k8s.io Jul 2 00:26:08.945342 containerd[2110]: time="2024-07-02T00:26:08.945345385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:09.326471 kubelet[3544]: E0702 00:26:09.326297 3544 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:26:09.783168 containerd[2110]: time="2024-07-02T00:26:09.782868166Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:26:09.820991 containerd[2110]: time="2024-07-02T00:26:09.820944472Z" level=info msg="CreateContainer within sandbox \"19921b403da87590cbb611fb3a1c109769bfa97375edbb0cf5bab899af19f006\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a4b5a11e53ed2287ae7e56ed0e9b3ffe2e667c8f1b9a9b49c08893f8b4944468\"" Jul 2 00:26:09.822835 containerd[2110]: time="2024-07-02T00:26:09.822783396Z" level=info msg="StartContainer for \"a4b5a11e53ed2287ae7e56ed0e9b3ffe2e667c8f1b9a9b49c08893f8b4944468\"" Jul 2 00:26:09.974044 containerd[2110]: time="2024-07-02T00:26:09.971869536Z" level=info msg="StartContainer for \"a4b5a11e53ed2287ae7e56ed0e9b3ffe2e667c8f1b9a9b49c08893f8b4944468\" returns successfully" Jul 2 00:26:10.856849 kubelet[3544]: I0702 00:26:10.855533 3544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dxm2q" podStartSLOduration=5.855451071 
podCreationTimestamp="2024-07-02 00:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:26:10.839948959 +0000 UTC m=+127.042006153" watchObservedRunningTime="2024-07-02 00:26:10.855451071 +0000 UTC m=+127.057508264" Jul 2 00:26:11.021724 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 00:26:14.073128 kubelet[3544]: E0702 00:26:14.071175 3544 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-w55qw" podUID="7361e28d-7878-4aab-8f8e-b7db941075d5" Jul 2 00:26:14.958257 systemd-networkd[1671]: lxc_health: Link UP Jul 2 00:26:14.959203 systemd-networkd[1671]: lxc_health: Gained carrier Jul 2 00:26:14.977474 (udev-worker)[6214]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:26:16.303013 systemd-networkd[1671]: lxc_health: Gained IPv6LL Jul 2 00:26:17.606358 systemd[1]: run-containerd-runc-k8s.io-a4b5a11e53ed2287ae7e56ed0e9b3ffe2e667c8f1b9a9b49c08893f8b4944468-runc.2FivrN.mount: Deactivated successfully. Jul 2 00:26:18.805490 ntpd[2062]: Listen normally on 13 lxc_health [fe80::70f6:bcff:fe21:e01c%14]:123 Jul 2 00:26:18.806103 ntpd[2062]: 2 Jul 00:26:18 ntpd[2062]: Listen normally on 13 lxc_health [fe80::70f6:bcff:fe21:e01c%14]:123 Jul 2 00:26:22.433971 kubelet[3544]: E0702 00:26:22.433924 3544 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55790->127.0.0.1:40613: write tcp 127.0.0.1:55790->127.0.0.1:40613: write: broken pipe Jul 2 00:26:22.541220 sshd[5405]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:22.557439 systemd[1]: sshd@29-172.31.16.250:22-147.75.109.163:57486.service: Deactivated successfully. 
Jul 2 00:26:22.568548 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 00:26:22.578154 systemd-logind[2080]: Session 29 logged out. Waiting for processes to exit. Jul 2 00:26:22.580461 systemd-logind[2080]: Removed session 29. Jul 2 00:26:37.469315 kubelet[3544]: E0702 00:26:37.469266 3544 controller.go:193] "Failed to update lease" err="Put \"https://172.31.16.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-250?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:26:37.709223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f95794739f4cead65c74ad4da5d0e68d52e8f375e583c60ff25f84a5237b81a-rootfs.mount: Deactivated successfully. Jul 2 00:26:37.726610 containerd[2110]: time="2024-07-02T00:26:37.726455823Z" level=info msg="shim disconnected" id=8f95794739f4cead65c74ad4da5d0e68d52e8f375e583c60ff25f84a5237b81a namespace=k8s.io Jul 2 00:26:37.726610 containerd[2110]: time="2024-07-02T00:26:37.726520914Z" level=warning msg="cleaning up after shim disconnected" id=8f95794739f4cead65c74ad4da5d0e68d52e8f375e583c60ff25f84a5237b81a namespace=k8s.io Jul 2 00:26:37.726610 containerd[2110]: time="2024-07-02T00:26:37.726532621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:37.896174 kubelet[3544]: I0702 00:26:37.895937 3544 scope.go:117] "RemoveContainer" containerID="8f95794739f4cead65c74ad4da5d0e68d52e8f375e583c60ff25f84a5237b81a" Jul 2 00:26:37.900059 containerd[2110]: time="2024-07-02T00:26:37.900019073Z" level=info msg="CreateContainer within sandbox \"0cf8d959600c95a2a3a2e6789a6ccbcaf6f6fdce77bd33149ee7d72f6c83a2b0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 00:26:37.922846 containerd[2110]: time="2024-07-02T00:26:37.922793895Z" level=info msg="CreateContainer within sandbox \"0cf8d959600c95a2a3a2e6789a6ccbcaf6f6fdce77bd33149ee7d72f6c83a2b0\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7061f03b364ddfd65676b793bc55ca588587f86e333409e36e2c7d79ee39d4e7\"" Jul 2 00:26:37.923390 containerd[2110]: time="2024-07-02T00:26:37.923356713Z" level=info msg="StartContainer for \"7061f03b364ddfd65676b793bc55ca588587f86e333409e36e2c7d79ee39d4e7\"" Jul 2 00:26:37.924316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200272427.mount: Deactivated successfully. Jul 2 00:26:38.072606 containerd[2110]: time="2024-07-02T00:26:38.072481664Z" level=info msg="StartContainer for \"7061f03b364ddfd65676b793bc55ca588587f86e333409e36e2c7d79ee39d4e7\" returns successfully" Jul 2 00:26:43.253992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0a7a4b78e8178ee5b5523a8cbd192c93032d72edf08cbcd053076530c5eb7f9-rootfs.mount: Deactivated successfully. Jul 2 00:26:43.318151 containerd[2110]: time="2024-07-02T00:26:43.318074590Z" level=info msg="shim disconnected" id=d0a7a4b78e8178ee5b5523a8cbd192c93032d72edf08cbcd053076530c5eb7f9 namespace=k8s.io Jul 2 00:26:43.318151 containerd[2110]: time="2024-07-02T00:26:43.318144890Z" level=warning msg="cleaning up after shim disconnected" id=d0a7a4b78e8178ee5b5523a8cbd192c93032d72edf08cbcd053076530c5eb7f9 namespace=k8s.io Jul 2 00:26:43.318151 containerd[2110]: time="2024-07-02T00:26:43.318156584Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:43.336891 containerd[2110]: time="2024-07-02T00:26:43.336842823Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:26:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:26:43.912489 kubelet[3544]: I0702 00:26:43.912455 3544 scope.go:117] "RemoveContainer" containerID="d0a7a4b78e8178ee5b5523a8cbd192c93032d72edf08cbcd053076530c5eb7f9" Jul 2 00:26:43.915903 containerd[2110]: time="2024-07-02T00:26:43.915861074Z" level=info 
msg="CreateContainer within sandbox \"be3e56182c53da2234d1230c3d42d55a381f8c2a408aaaf61cced289149b3820\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 00:26:43.944437 containerd[2110]: time="2024-07-02T00:26:43.944388211Z" level=info msg="CreateContainer within sandbox \"be3e56182c53da2234d1230c3d42d55a381f8c2a408aaaf61cced289149b3820\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1bbef8d548de56f6b830ce474ace73d6e0c4fc30066ba3cf07a1404c5eae7d50\"" Jul 2 00:26:43.947393 containerd[2110]: time="2024-07-02T00:26:43.947263873Z" level=info msg="StartContainer for \"1bbef8d548de56f6b830ce474ace73d6e0c4fc30066ba3cf07a1404c5eae7d50\"" Jul 2 00:26:44.112947 containerd[2110]: time="2024-07-02T00:26:44.112899007Z" level=info msg="StartContainer for \"1bbef8d548de56f6b830ce474ace73d6e0c4fc30066ba3cf07a1404c5eae7d50\" returns successfully" Jul 2 00:26:47.470625 kubelet[3544]: E0702 00:26:47.470201 3544 controller.go:193] "Failed to update lease" err="Put \"https://172.31.16.250:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-250?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"