Oct 8 19:57:19.152151 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 8 19:57:19.152194 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:57:19.152209 kernel: BIOS-provided physical RAM map:
Oct 8 19:57:19.152220 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 8 19:57:19.152230 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 8 19:57:19.152241 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 8 19:57:19.152256 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Oct 8 19:57:19.152330 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Oct 8 19:57:19.152343 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Oct 8 19:57:19.152355 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 8 19:57:19.152366 kernel: NX (Execute Disable) protection: active
Oct 8 19:57:19.152377 kernel: APIC: Static calls initialized
Oct 8 19:57:19.152388 kernel: SMBIOS 2.7 present.
Oct 8 19:57:19.152400 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Oct 8 19:57:19.152418 kernel: Hypervisor detected: KVM
Oct 8 19:57:19.152430 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 8 19:57:19.152443 kernel: kvm-clock: using sched offset of 6110194605 cycles
Oct 8 19:57:19.152457 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 8 19:57:19.152470 kernel: tsc: Detected 2499.994 MHz processor
Oct 8 19:57:19.152483 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 8 19:57:19.152496 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 8 19:57:19.152511 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Oct 8 19:57:19.152524 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 8 19:57:19.152537 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 8 19:57:19.152549 kernel: Using GB pages for direct mapping
Oct 8 19:57:19.152562 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:57:19.152575 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Oct 8 19:57:19.152587 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Oct 8 19:57:19.152664 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Oct 8 19:57:19.152677 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Oct 8 19:57:19.152693 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Oct 8 19:57:19.152706 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Oct 8 19:57:19.152718 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Oct 8 19:57:19.152731 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Oct 8 19:57:19.152744 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Oct 8 19:57:19.152756 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Oct 8 19:57:19.152769 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Oct 8 19:57:19.152782 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Oct 8 19:57:19.152794 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Oct 8 19:57:19.152811 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Oct 8 19:57:19.152829 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Oct 8 19:57:19.152842 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Oct 8 19:57:19.152855 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Oct 8 19:57:19.152869 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Oct 8 19:57:19.152885 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Oct 8 19:57:19.152898 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Oct 8 19:57:19.152912 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Oct 8 19:57:19.152925 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Oct 8 19:57:19.152939 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 8 19:57:19.152953 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 8 19:57:19.152966 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Oct 8 19:57:19.152980 kernel: NUMA: Initialized distance table, cnt=1
Oct 8 19:57:19.152993 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Oct 8 19:57:19.153009 kernel: Zone ranges:
Oct 8 19:57:19.153022 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 8 19:57:19.153037 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Oct 8 19:57:19.153050 kernel: Normal empty
Oct 8 19:57:19.153064 kernel: Movable zone start for each node
Oct 8 19:57:19.153077 kernel: Early memory node ranges
Oct 8 19:57:19.153092 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 8 19:57:19.153106 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Oct 8 19:57:19.153120 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Oct 8 19:57:19.153142 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 19:57:19.153156 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 8 19:57:19.153179 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Oct 8 19:57:19.153207 kernel: ACPI: PM-Timer IO Port: 0xb008
Oct 8 19:57:19.153230 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 8 19:57:19.153245 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Oct 8 19:57:19.153260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 8 19:57:19.157471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 8 19:57:19.157499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 8 19:57:19.157515 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 8 19:57:19.157537 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 8 19:57:19.157553 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 8 19:57:19.157569 kernel: TSC deadline timer available
Oct 8 19:57:19.157585 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 8 19:57:19.157601 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 8 19:57:19.157614 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Oct 8 19:57:19.157629 kernel: Booting paravirtualized kernel on KVM
Oct 8 19:57:19.157643 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 8 19:57:19.157656 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 8 19:57:19.157675 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 8 19:57:19.157689 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 8 19:57:19.157704 kernel: pcpu-alloc: [0] 0 1
Oct 8 19:57:19.157900 kernel: kvm-guest: PV spinlocks enabled
Oct 8 19:57:19.157916 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 8 19:57:19.157934 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:57:19.157950 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:57:19.157965 kernel: random: crng init done
Oct 8 19:57:19.157985 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:57:19.157999 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 8 19:57:19.158012 kernel: Fallback order for Node 0: 0
Oct 8 19:57:19.158027 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Oct 8 19:57:19.158041 kernel: Policy zone: DMA32
Oct 8 19:57:19.158056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:57:19.158071 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125152K reserved, 0K cma-reserved)
Oct 8 19:57:19.158085 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 19:57:19.158103 kernel: Kernel/User page tables isolation: enabled
Oct 8 19:57:19.158118 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 8 19:57:19.158132 kernel: ftrace: allocated 148 pages with 3 groups
Oct 8 19:57:19.158147 kernel: Dynamic Preempt: voluntary
Oct 8 19:57:19.158161 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:57:19.158177 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:57:19.158192 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 19:57:19.158206 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:57:19.158221 kernel: Rude variant of Tasks RCU enabled.
Oct 8 19:57:19.158236 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:57:19.158253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:57:19.158279 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 19:57:19.158886 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 8 19:57:19.158902 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:57:19.158917 kernel: Console: colour VGA+ 80x25
Oct 8 19:57:19.158931 kernel: printk: console [ttyS0] enabled
Oct 8 19:57:19.158945 kernel: ACPI: Core revision 20230628
Oct 8 19:57:19.158959 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Oct 8 19:57:19.158973 kernel: APIC: Switch to symmetric I/O mode setup
Oct 8 19:57:19.158992 kernel: x2apic enabled
Oct 8 19:57:19.159007 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 8 19:57:19.159032 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Oct 8 19:57:19.159056 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Oct 8 19:57:19.159146 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Oct 8 19:57:19.159162 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Oct 8 19:57:19.159175 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 8 19:57:19.159191 kernel: Spectre V2 : Mitigation: Retpolines
Oct 8 19:57:19.159206 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 8 19:57:19.159219 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 8 19:57:19.159237 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Oct 8 19:57:19.159250 kernel: RETBleed: Vulnerable
Oct 8 19:57:19.159324 kernel: Speculative Store Bypass: Vulnerable
Oct 8 19:57:19.159342 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 8 19:57:19.159358 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 8 19:57:19.159374 kernel: GDS: Unknown: Dependent on hypervisor status
Oct 8 19:57:19.159390 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 8 19:57:19.159406 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 8 19:57:19.159427 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 8 19:57:19.159442 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Oct 8 19:57:19.159459 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Oct 8 19:57:19.159475 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Oct 8 19:57:19.159492 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Oct 8 19:57:19.159506 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Oct 8 19:57:19.159521 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct 8 19:57:19.159537 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 8 19:57:19.159553 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Oct 8 19:57:19.159568 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Oct 8 19:57:19.159584 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Oct 8 19:57:19.159602 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Oct 8 19:57:19.159616 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Oct 8 19:57:19.159632 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Oct 8 19:57:19.159647 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Oct 8 19:57:19.159662 kernel: Freeing SMP alternatives memory: 32K
Oct 8 19:57:19.159677 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:57:19.159692 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 19:57:19.159708 kernel: landlock: Up and running.
Oct 8 19:57:19.159723 kernel: SELinux: Initializing.
Oct 8 19:57:19.159738 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 8 19:57:19.159754 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 8 19:57:19.159769 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Oct 8 19:57:19.159787 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:57:19.159803 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:57:19.159819 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:57:19.159835 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Oct 8 19:57:19.159850 kernel: signal: max sigframe size: 3632
Oct 8 19:57:19.159865 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:57:19.159882 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:57:19.159897 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 8 19:57:19.159913 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:57:19.159931 kernel: smpboot: x86: Booting SMP configuration:
Oct 8 19:57:19.159946 kernel: .... node #0, CPUs: #1
Oct 8 19:57:19.159963 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Oct 8 19:57:19.159979 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Oct 8 19:57:19.159995 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 19:57:19.160010 kernel: smpboot: Max logical packages: 1
Oct 8 19:57:19.160026 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Oct 8 19:57:19.160042 kernel: devtmpfs: initialized
Oct 8 19:57:19.160060 kernel: x86/mm: Memory block size: 128MB
Oct 8 19:57:19.160075 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:57:19.160091 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 19:57:19.160106 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:57:19.160121 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:57:19.160137 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:57:19.160152 kernel: audit: type=2000 audit(1728417438.067:1): state=initialized audit_enabled=0 res=1
Oct 8 19:57:19.160167 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:57:19.160182 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 8 19:57:19.160200 kernel: cpuidle: using governor menu
Oct 8 19:57:19.160216 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:57:19.160231 kernel: dca service started, version 1.12.1
Oct 8 19:57:19.160246 kernel: PCI: Using configuration type 1 for base access
Oct 8 19:57:19.160262 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 8 19:57:19.165326 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:57:19.165352 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:57:19.165369 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:57:19.165386 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:57:19.165408 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:57:19.165425 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:57:19.165442 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:57:19.165458 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:57:19.165474 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Oct 8 19:57:19.165491 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 19:57:19.165507 kernel: ACPI: Interpreter enabled
Oct 8 19:57:19.165524 kernel: ACPI: PM: (supports S0 S5)
Oct 8 19:57:19.165540 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 19:57:19.165557 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 19:57:19.165577 kernel: PCI: Using E820 reservations for host bridge windows
Oct 8 19:57:19.165593 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Oct 8 19:57:19.165610 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:57:19.165951 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:57:19.166117 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 8 19:57:19.166255 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 8 19:57:19.166304 kernel: acpiphp: Slot [3] registered
Oct 8 19:57:19.166327 kernel: acpiphp: Slot [4] registered
Oct 8 19:57:19.166343 kernel: acpiphp: Slot [5] registered
Oct 8 19:57:19.166359 kernel: acpiphp: Slot [6] registered
Oct 8 19:57:19.166376 kernel: acpiphp: Slot [7] registered
Oct 8 19:57:19.166392 kernel: acpiphp: Slot [8] registered
Oct 8 19:57:19.166409 kernel: acpiphp: Slot [9] registered
Oct 8 19:57:19.166425 kernel: acpiphp: Slot [10] registered
Oct 8 19:57:19.166515 kernel: acpiphp: Slot [11] registered
Oct 8 19:57:19.166532 kernel: acpiphp: Slot [12] registered
Oct 8 19:57:19.166554 kernel: acpiphp: Slot [13] registered
Oct 8 19:57:19.166570 kernel: acpiphp: Slot [14] registered
Oct 8 19:57:19.166587 kernel: acpiphp: Slot [15] registered
Oct 8 19:57:19.166603 kernel: acpiphp: Slot [16] registered
Oct 8 19:57:19.166619 kernel: acpiphp: Slot [17] registered
Oct 8 19:57:19.166634 kernel: acpiphp: Slot [18] registered
Oct 8 19:57:19.166651 kernel: acpiphp: Slot [19] registered
Oct 8 19:57:19.166667 kernel: acpiphp: Slot [20] registered
Oct 8 19:57:19.166737 kernel: acpiphp: Slot [21] registered
Oct 8 19:57:19.166755 kernel: acpiphp: Slot [22] registered
Oct 8 19:57:19.166776 kernel: acpiphp: Slot [23] registered
Oct 8 19:57:19.166792 kernel: acpiphp: Slot [24] registered
Oct 8 19:57:19.166808 kernel: acpiphp: Slot [25] registered
Oct 8 19:57:19.166823 kernel: acpiphp: Slot [26] registered
Oct 8 19:57:19.166900 kernel: acpiphp: Slot [27] registered
Oct 8 19:57:19.166920 kernel: acpiphp: Slot [28] registered
Oct 8 19:57:19.166937 kernel: acpiphp: Slot [29] registered
Oct 8 19:57:19.166954 kernel: acpiphp: Slot [30] registered
Oct 8 19:57:19.167152 kernel: acpiphp: Slot [31] registered
Oct 8 19:57:19.167177 kernel: PCI host bridge to bus 0000:00
Oct 8 19:57:19.169400 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 8 19:57:19.170119 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 8 19:57:19.170282 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 8 19:57:19.170507 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 8 19:57:19.170637 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:57:19.170858 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 8 19:57:19.171019 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 8 19:57:19.171238 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Oct 8 19:57:19.173594 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Oct 8 19:57:19.173835 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Oct 8 19:57:19.173982 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Oct 8 19:57:19.174123 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Oct 8 19:57:19.174670 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Oct 8 19:57:19.174906 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Oct 8 19:57:19.175047 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Oct 8 19:57:19.175249 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Oct 8 19:57:19.175428 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Oct 8 19:57:19.175569 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Oct 8 19:57:19.175706 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Oct 8 19:57:19.175846 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 8 19:57:19.176067 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Oct 8 19:57:19.185007 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Oct 8 19:57:19.185215 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Oct 8 19:57:19.185373 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Oct 8 19:57:19.185395 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 8 19:57:19.185412 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 8 19:57:19.185428 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 8 19:57:19.186832 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 8 19:57:19.186850 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 8 19:57:19.186866 kernel: iommu: Default domain type: Translated
Oct 8 19:57:19.186882 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 19:57:19.186897 kernel: PCI: Using ACPI for IRQ routing
Oct 8 19:57:19.186914 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 8 19:57:19.186930 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 8 19:57:19.186945 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Oct 8 19:57:19.187318 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Oct 8 19:57:19.187465 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Oct 8 19:57:19.187602 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 8 19:57:19.187622 kernel: vgaarb: loaded
Oct 8 19:57:19.187639 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Oct 8 19:57:19.187655 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Oct 8 19:57:19.187671 kernel: clocksource: Switched to clocksource kvm-clock
Oct 8 19:57:19.187686 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:57:19.187702 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:57:19.187725 kernel: pnp: PnP ACPI init
Oct 8 19:57:19.187740 kernel: pnp: PnP ACPI: found 5 devices
Oct 8 19:57:19.187756 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 19:57:19.187772 kernel: NET: Registered PF_INET protocol family
Oct 8 19:57:19.187788 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:57:19.187804 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 8 19:57:19.187820 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:57:19.187835 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 8 19:57:19.187854 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 8 19:57:19.187870 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 8 19:57:19.187886 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 8 19:57:19.187901 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 8 19:57:19.187917 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:57:19.187933 kernel: NET: Registered PF_XDP protocol family
Oct 8 19:57:19.189633 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 8 19:57:19.189780 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 8 19:57:19.189902 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 8 19:57:19.190033 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 8 19:57:19.190180 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 8 19:57:19.190201 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:57:19.190218 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 8 19:57:19.190234 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Oct 8 19:57:19.190250 kernel: clocksource: Switched to clocksource tsc
Oct 8 19:57:19.190266 kernel: Initialise system trusted keyrings
Oct 8 19:57:19.190300 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 8 19:57:19.190322 kernel: Key type asymmetric registered
Oct 8 19:57:19.190338 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:57:19.190353 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 19:57:19.190370 kernel: io scheduler mq-deadline registered
Oct 8 19:57:19.190385 kernel: io scheduler kyber registered
Oct 8 19:57:19.190401 kernel: io scheduler bfq registered
Oct 8 19:57:19.190417 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 19:57:19.190433 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:57:19.190448 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 19:57:19.190467 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 8 19:57:19.190483 kernel: i8042: Warning: Keylock active
Oct 8 19:57:19.190498 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 8 19:57:19.190514 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 8 19:57:19.190659 kernel: rtc_cmos 00:00: RTC can wake from S4
Oct 8 19:57:19.194911 kernel: rtc_cmos 00:00: registered as rtc0
Oct 8 19:57:19.195315 kernel: rtc_cmos 00:00: setting system clock to 2024-10-08T19:57:18 UTC (1728417438)
Oct 8 19:57:19.195457 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Oct 8 19:57:19.195584 kernel: intel_pstate: CPU model not supported
Oct 8 19:57:19.195601 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:57:19.195617 kernel: Segment Routing with IPv6
Oct 8 19:57:19.195633 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:57:19.195649 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:57:19.195665 kernel: Key type dns_resolver registered
Oct 8 19:57:19.195681 kernel: IPI shorthand broadcast: enabled
Oct 8 19:57:19.195697 kernel: sched_clock: Marking stable (605002659, 289080971)->(975163557, -81079927)
Oct 8 19:57:19.195713 kernel: registered taskstats version 1
Oct 8 19:57:19.195733 kernel: Loading compiled-in X.509 certificates
Oct 8 19:57:19.195749 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 8 19:57:19.195763 kernel: Key type .fscrypt registered
Oct 8 19:57:19.195776 kernel: Key type fscrypt-provisioning registered
Oct 8 19:57:19.195791 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:57:19.195807 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:57:19.195823 kernel: ima: No architecture policies found
Oct 8 19:57:19.195839 kernel: clk: Disabling unused clocks
Oct 8 19:57:19.195855 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 19:57:19.195874 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 19:57:19.195889 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 19:57:19.195905 kernel: Run /init as init process
Oct 8 19:57:19.195921 kernel: with arguments:
Oct 8 19:57:19.195936 kernel: /init
Oct 8 19:57:19.195951 kernel: with environment:
Oct 8 19:57:19.195967 kernel: HOME=/
Oct 8 19:57:19.195982 kernel: TERM=linux
Oct 8 19:57:19.195997 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:57:19.196022 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:57:19.196055 systemd[1]: Detected virtualization amazon.
Oct 8 19:57:19.196075 systemd[1]: Detected architecture x86-64.
Oct 8 19:57:19.196092 systemd[1]: Running in initrd.
Oct 8 19:57:19.196108 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:57:19.196128 systemd[1]: Hostname set to .
Oct 8 19:57:19.196146 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:57:19.196163 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:57:19.196180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:57:19.196197 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:57:19.196216 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:57:19.196233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:57:19.196253 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:57:19.196283 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:57:19.196303 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:57:19.196320 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:57:19.196337 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:57:19.196354 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:57:19.196442 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:57:19.196466 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:57:19.196484 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:57:19.196501 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:57:19.196519 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:57:19.196536 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:57:19.196552 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 8 19:57:19.196573 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:57:19.196595 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:57:19.196613 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:57:19.196631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:57:19.196647 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:57:19.196669 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:57:19.196686 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:57:19.196703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:57:19.196719 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:57:19.196735 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:57:19.196757 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:57:19.196772 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:57:19.196790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:57:19.196805 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:57:19.196821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:57:19.196836 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:57:19.196857 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:57:19.196916 systemd-journald[178]: Collecting audit messages is disabled.
Oct 8 19:57:19.196955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:57:19.196976 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:57:19.196995 systemd-journald[178]: Journal started
Oct 8 19:57:19.197029 systemd-journald[178]: Runtime Journal (/run/log/journal/ec21fea6edddf3d1c6778fd7d4528013) is 4.8M, max 38.6M, 33.7M free.
Oct 8 19:57:19.156977 systemd-modules-load[179]: Inserted module 'overlay'
Oct 8 19:57:19.326800 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:57:19.326854 kernel: Bridge firewalling registered
Oct 8 19:57:19.326874 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:57:19.216978 systemd-modules-load[179]: Inserted module 'br_netfilter'
Oct 8 19:57:19.329313 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:57:19.329755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:57:19.359815 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:57:19.364949 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:57:19.380826 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:57:19.381502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:57:19.412518 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:57:19.424599 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:57:19.435063 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:57:19.435615 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:57:19.449688 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:57:19.468723 dracut-cmdline[212]: dracut-dracut-053
Oct 8 19:57:19.475586 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:57:19.501033 systemd-resolved[214]: Positive Trust Anchors:
Oct 8 19:57:19.501048 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:57:19.501114 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:57:19.520856 systemd-resolved[214]: Defaulting to hostname 'linux'.
Oct 8 19:57:19.524590 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:57:19.527111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:57:19.600298 kernel: SCSI subsystem initialized
Oct 8 19:57:19.610328 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:57:19.623296 kernel: iscsi: registered transport (tcp)
Oct 8 19:57:19.653653 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:57:19.653738 kernel: QLogic iSCSI HBA Driver
Oct 8 19:57:19.718463 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:57:19.725571 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:57:19.762320 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:57:19.762402 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:57:19.762424 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:57:19.818342 kernel: raid6: avx512x4 gen() 13902 MB/s
Oct 8 19:57:19.835321 kernel: raid6: avx512x2 gen() 13318 MB/s
Oct 8 19:57:19.852298 kernel: raid6: avx512x1 gen() 13294 MB/s
Oct 8 19:57:19.869308 kernel: raid6: avx2x4 gen() 14589 MB/s
Oct 8 19:57:19.886384 kernel: raid6: avx2x2 gen() 11979 MB/s
Oct 8 19:57:19.903301 kernel: raid6: avx2x1 gen() 11416 MB/s
Oct 8 19:57:19.903398 kernel: raid6: using algorithm avx2x4 gen() 14589 MB/s
Oct 8 19:57:19.920363 kernel: raid6: .... xor() 3451 MB/s, rmw enabled
Oct 8 19:57:19.920456 kernel: raid6: using avx512x2 recovery algorithm
Oct 8 19:57:19.948375 kernel: xor: automatically using best checksumming function avx
Oct 8 19:57:20.173300 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:57:20.191826 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:57:20.199964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:57:20.236359 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Oct 8 19:57:20.243658 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:57:20.252859 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:57:20.279320 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Oct 8 19:57:20.353481 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:57:20.363498 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:57:20.473619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:57:20.492725 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:57:20.556395 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:57:20.563956 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:57:20.567546 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:57:20.575323 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:57:20.593650 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:57:20.636687 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:57:20.661296 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 19:57:20.661359 kernel: ena 0000:00:05.0: ENA device version: 0.10
Oct 8 19:57:20.662697 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Oct 8 19:57:20.677331 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Oct 8 19:57:20.679714 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 19:57:20.679792 kernel: AES CTR mode by8 optimization enabled
Oct 8 19:57:20.696331 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:e0:15:f0:9d:a5
Oct 8 19:57:20.702260 (udev-worker)[453]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:57:20.704008 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:57:20.704193 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:57:20.705745 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:57:20.712632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:57:20.713031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:57:20.717785 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:57:20.729646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:57:20.739668 kernel: nvme nvme0: pci function 0000:00:04.0
Oct 8 19:57:20.739926 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 8 19:57:20.750312 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Oct 8 19:57:20.753783 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:57:20.753844 kernel: GPT:9289727 != 16777215
Oct 8 19:57:20.753871 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:57:20.753890 kernel: GPT:9289727 != 16777215
Oct 8 19:57:20.753906 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:57:20.753925 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:57:20.867293 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (444)
Oct 8 19:57:20.875321 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (447)
Oct 8 19:57:20.877332 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:57:20.885849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:57:20.947065 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:57:20.983514 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Oct 8 19:57:21.009024 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Oct 8 19:57:21.024187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Oct 8 19:57:21.030897 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Oct 8 19:57:21.031037 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Oct 8 19:57:21.045630 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:57:21.052596 disk-uuid[627]: Primary Header is updated.
Oct 8 19:57:21.052596 disk-uuid[627]: Secondary Entries is updated.
Oct 8 19:57:21.052596 disk-uuid[627]: Secondary Header is updated.
Oct 8 19:57:21.060322 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:57:21.068379 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:57:21.075298 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:57:22.072292 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:57:22.076863 disk-uuid[628]: The operation has completed successfully.
Oct 8 19:57:22.286716 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:57:22.286861 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:57:22.314459 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:57:22.328162 sh[972]: Success
Oct 8 19:57:22.351310 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Oct 8 19:57:22.455933 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:57:22.470546 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:57:22.476708 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:57:22.503301 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 19:57:22.503366 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:57:22.510086 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:57:22.510175 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:57:22.512265 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:57:22.600358 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 8 19:57:22.638937 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:57:22.642028 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:57:22.651495 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:57:22.659512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:57:22.687577 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:57:22.687921 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:57:22.687953 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 8 19:57:22.694314 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 8 19:57:22.715306 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:57:22.714992 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:57:22.738004 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:57:22.750576 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:57:22.811234 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:57:22.820598 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:57:22.860785 systemd-networkd[1164]: lo: Link UP
Oct 8 19:57:22.860812 systemd-networkd[1164]: lo: Gained carrier
Oct 8 19:57:22.863886 systemd-networkd[1164]: Enumeration completed
Oct 8 19:57:22.864016 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:57:22.865730 systemd[1]: Reached target network.target - Network.
Oct 8 19:57:22.866435 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:57:22.866439 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:57:22.873824 systemd-networkd[1164]: eth0: Link UP
Oct 8 19:57:22.873829 systemd-networkd[1164]: eth0: Gained carrier
Oct 8 19:57:22.873842 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:57:22.892383 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.20.47/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 8 19:57:23.185244 ignition[1091]: Ignition 2.19.0
Oct 8 19:57:23.185259 ignition[1091]: Stage: fetch-offline
Oct 8 19:57:23.185895 ignition[1091]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:23.185909 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:57:23.190425 ignition[1091]: Ignition finished successfully
Oct 8 19:57:23.195013 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:57:23.206145 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 8 19:57:23.256724 ignition[1173]: Ignition 2.19.0
Oct 8 19:57:23.256743 ignition[1173]: Stage: fetch
Oct 8 19:57:23.258198 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:23.259141 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:57:23.259376 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:57:23.312924 ignition[1173]: PUT result: OK
Oct 8 19:57:23.324921 ignition[1173]: parsed url from cmdline: ""
Oct 8 19:57:23.324935 ignition[1173]: no config URL provided
Oct 8 19:57:23.324946 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:57:23.324964 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:57:23.324989 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:57:23.329916 ignition[1173]: PUT result: OK
Oct 8 19:57:23.329984 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Oct 8 19:57:23.342355 ignition[1173]: GET result: OK
Oct 8 19:57:23.342463 ignition[1173]: parsing config with SHA512: 7970fddea4463a48ea4f340730e5bc778b4ddedd98974fab64cebade942bd07e1863e0db38bede3768d80c4bbafda29fd506016674b5098ce641c9c5a6ccc09b
Oct 8 19:57:23.353139 unknown[1173]: fetched base config from "system"
Oct 8 19:57:23.353153 unknown[1173]: fetched base config from "system"
Oct 8 19:57:23.353239 unknown[1173]: fetched user config from "aws"
Oct 8 19:57:23.360122 ignition[1173]: fetch: fetch complete
Oct 8 19:57:23.360136 ignition[1173]: fetch: fetch passed
Oct 8 19:57:23.360232 ignition[1173]: Ignition finished successfully
Oct 8 19:57:23.366724 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 8 19:57:23.375923 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:57:23.420236 ignition[1180]: Ignition 2.19.0
Oct 8 19:57:23.420251 ignition[1180]: Stage: kargs
Oct 8 19:57:23.420862 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:23.420876 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:57:23.421061 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:57:23.422820 ignition[1180]: PUT result: OK
Oct 8 19:57:23.429725 ignition[1180]: kargs: kargs passed
Oct 8 19:57:23.429805 ignition[1180]: Ignition finished successfully
Oct 8 19:57:23.436066 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:57:23.443472 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:57:23.484881 ignition[1186]: Ignition 2.19.0
Oct 8 19:57:23.484895 ignition[1186]: Stage: disks
Oct 8 19:57:23.486155 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:23.486173 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:57:23.486374 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:57:23.490221 ignition[1186]: PUT result: OK
Oct 8 19:57:23.496195 ignition[1186]: disks: disks passed
Oct 8 19:57:23.496261 ignition[1186]: Ignition finished successfully
Oct 8 19:57:23.501654 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:57:23.504992 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:57:23.508460 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:57:23.513458 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:57:23.514955 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:57:23.516184 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:57:23.522603 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:57:23.570905 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:57:23.577954 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:57:23.602923 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:57:23.738336 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 19:57:23.738954 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:57:23.742139 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:57:23.759693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:57:23.767554 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:57:23.772164 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:57:23.775128 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:57:23.778424 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:57:23.783298 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:57:23.796311 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213)
Oct 8 19:57:23.796796 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:57:23.807418 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:57:23.807458 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:57:23.807480 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 8 19:57:23.811370 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 8 19:57:23.813971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:57:24.158632 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:57:24.166532 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:57:24.176523 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:57:24.183636 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:57:24.543950 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:57:24.565454 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:57:24.576094 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:57:24.592468 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:57:24.592541 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:57:24.627296 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:57:24.638922 ignition[1327]: INFO : Ignition 2.19.0
Oct 8 19:57:24.638922 ignition[1327]: INFO : Stage: mount
Oct 8 19:57:24.640879 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:24.640879 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:57:24.640879 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:57:24.646515 ignition[1327]: INFO : PUT result: OK
Oct 8 19:57:24.650197 ignition[1327]: INFO : mount: mount passed
Oct 8 19:57:24.650197 ignition[1327]: INFO : Ignition finished successfully
Oct 8 19:57:24.657534 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:57:24.669428 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:57:24.750578 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:57:24.767796 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1337)
Oct 8 19:57:24.771100 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:57:24.771177 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:57:24.771197 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 8 19:57:24.778339 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 8 19:57:24.780753 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:57:24.834160 ignition[1354]: INFO : Ignition 2.19.0
Oct 8 19:57:24.834160 ignition[1354]: INFO : Stage: files
Oct 8 19:57:24.836530 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:24.836530 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:57:24.840503 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:57:24.846641 ignition[1354]: INFO : PUT result: OK
Oct 8 19:57:24.848727 systemd-networkd[1164]: eth0: Gained IPv6LL
Oct 8 19:57:24.856505 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:57:24.862843 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:57:24.862843 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:57:24.879625 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:57:24.881433 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:57:24.883377 unknown[1354]: wrote ssh authorized keys file for user: core
Oct 8 19:57:24.885571 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:57:24.889392 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:57:24.891753 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:57:24.901311 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:57:24.901311 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 8 19:57:25.036664 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 8 19:57:25.159546 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:57:25.159546 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:57:25.164400 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Oct 8 19:57:25.635587 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Oct 8 19:57:25.779016 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:57:25.782030 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:57:25.784405 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:57:25.784405 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:57:25.789447 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:57:25.789447 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:57:25.789447 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:57:25.789447 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:57:25.799059 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:57:25.799059 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:57:25.803796 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:57:25.803796 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:57:25.803796 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:57:25.803796 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:57:25.821485 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 8 19:57:26.193555 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Oct 8 19:57:26.572415 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:57:26.572415 ignition[1354]: INFO : files: op(d): [started] processing unit "containerd.service"
Oct 8 19:57:26.576675 ignition[1354]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:57:26.579692 ignition[1354]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:57:26.579692 ignition[1354]: INFO : files: op(d): [finished] processing unit "containerd.service"
Oct 8 19:57:26.579692 ignition[1354]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Oct 8 19:57:26.586027 ignition[1354]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:57:26.586027 ignition[1354]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:57:26.593220 ignition[1354]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Oct 8 19:57:26.593220 ignition[1354]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:57:26.593220 ignition[1354]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:57:26.593220 ignition[1354]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:57:26.593220 ignition[1354]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:57:26.593220 ignition[1354]: INFO : files: files passed
Oct 8 19:57:26.593220 ignition[1354]: INFO : Ignition finished successfully
Oct 8 19:57:26.612082 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:57:26.620827 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:57:26.628871 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:57:26.641582 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:57:26.641751 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:57:26.655702 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:57:26.655702 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:57:26.660894 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:57:26.664930 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:57:26.670097 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:57:26.679666 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:57:26.744201 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:57:26.744427 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:57:26.745737 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:57:26.746212 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:57:26.746605 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:57:26.757626 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:57:26.779942 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:57:26.799563 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:57:26.841535 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:57:26.844361 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:57:26.846888 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:57:26.848426 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:57:26.848601 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:57:26.853793 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:57:26.855852 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:57:26.859192 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:57:26.860666 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:57:26.865313 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:57:26.865465 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:57:26.868975 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:57:26.872878 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:57:26.875400 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:57:26.877472 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:57:26.879508 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:57:26.886154 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:57:26.890958 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:57:26.893653 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:57:26.902886 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:57:26.904062 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:57:26.908589 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:57:26.908798 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:57:26.912973 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:57:26.913117 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:57:26.922449 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:57:26.922892 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:57:26.932626 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:57:26.933923 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:57:26.934108 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:57:26.938016 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:57:26.943434 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:57:26.943615 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:57:26.948906 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:57:26.949164 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:57:26.962057 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:57:26.962217 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:57:26.978947 ignition[1406]: INFO : Ignition 2.19.0
Oct 8 19:57:26.980216 ignition[1406]: INFO : Stage: umount
Oct 8 19:57:26.982729 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:57:26.982729 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:57:26.982729 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:57:26.988158 ignition[1406]: INFO : PUT result: OK
Oct 8 19:57:26.992103 ignition[1406]: INFO : umount: umount passed
Oct 8 19:57:26.992103 ignition[1406]: INFO : Ignition finished successfully
Oct 8 19:57:26.994649 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:57:26.996316 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:57:26.997301 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:57:27.000893 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:57:27.000998 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:57:27.002797 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:57:27.002860 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:57:27.002952 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 19:57:27.002983 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 19:57:27.006898 systemd[1]: Stopped target network.target - Network.
Oct 8 19:57:27.010032 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:57:27.012857 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:57:27.014514 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:57:27.015810 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:57:27.020410 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:57:27.027266 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:57:27.028646 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:57:27.029869 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:57:27.029921 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:57:27.031254 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:57:27.031306 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:57:27.035001 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:57:27.036869 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:57:27.046220 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:57:27.046329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:57:27.048880 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:57:27.051250 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:57:27.055402 systemd-networkd[1164]: eth0: DHCPv6 lease lost
Oct 8 19:57:27.056944 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:57:27.057075 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:57:27.060209 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:57:27.060285 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:57:27.070833 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:57:27.072633 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:57:27.072759 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:57:27.080879 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:57:27.088519 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:57:27.088829 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:57:27.095173 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:57:27.095253 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:57:27.098611 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:57:27.098684 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:57:27.099995 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:57:27.100059 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:57:27.102921 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:57:27.104150 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:57:27.118977 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:57:27.119089 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:57:27.120482 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:57:27.120524 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:57:27.122202 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:57:27.122255 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:57:27.130507 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:57:27.130603 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:57:27.133736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:57:27.133795 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:57:27.148616 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:57:27.148715 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:57:27.148793 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:57:27.155013 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:57:27.155101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:57:27.164995 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:57:27.165204 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:57:27.167672 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:57:27.167801 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:57:27.170974 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:57:27.171089 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:57:27.180046 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:57:27.182524 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:57:27.182609 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:57:27.196692 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:57:27.250800 systemd[1]: Switching root.
Oct 8 19:57:27.304446 systemd-journald[178]: Journal stopped
Oct 8 19:57:30.455847 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:57:30.455932 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:57:30.455953 kernel: SELinux: policy capability open_perms=1
Oct 8 19:57:30.455970 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:57:30.455986 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:57:30.456003 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:57:30.456025 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:57:30.456041 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:57:30.456058 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:57:30.456075 kernel: audit: type=1403 audit(1728417448.699:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:57:30.456100 systemd[1]: Successfully loaded SELinux policy in 68.546ms.
Oct 8 19:57:30.456129 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.040ms.
Oct 8 19:57:30.456149 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:57:30.456168 systemd[1]: Detected virtualization amazon.
Oct 8 19:57:30.456187 systemd[1]: Detected architecture x86-64.
Oct 8 19:57:30.456206 systemd[1]: Detected first boot.
Oct 8 19:57:30.456225 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:57:30.456246 zram_generator::config[1465]: No configuration found.
Oct 8 19:57:30.456330 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:57:30.456360 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:57:30.456379 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Oct 8 19:57:30.456399 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:57:30.456419 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:57:30.456437 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:57:30.456456 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:57:30.456476 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:57:30.456499 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:57:30.456517 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:57:30.456534 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:57:30.456553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:57:30.456572 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:57:30.456594 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:57:30.456614 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:57:30.456638 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:57:30.456663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:57:30.456696 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:57:30.456715 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:57:30.456735 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:57:30.456756 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:57:30.456779 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:57:30.456796 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:57:30.456815 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:57:30.456834 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:57:30.456856 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:57:30.456877 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:57:30.456902 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:57:30.456922 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:57:30.456943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:57:30.456963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:57:30.456983 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:57:30.457004 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:57:30.457024 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:57:30.457044 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:57:30.457069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:30.457090 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:57:30.457112 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:57:30.457135 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:57:30.457156 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:57:30.457179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:57:30.457200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:57:30.457221 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:57:30.457251 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:57:30.457512 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:57:30.457541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:57:30.457563 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:57:30.457583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:57:30.457607 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:57:30.457629 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 8 19:57:30.457655 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 8 19:57:30.457681 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:57:30.457704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:57:30.457726 kernel: loop: module loaded
Oct 8 19:57:30.457748 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:57:30.457768 kernel: fuse: init (API version 7.39)
Oct 8 19:57:30.457787 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:57:30.457806 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:57:30.457824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:30.457844 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:57:30.457867 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:57:30.457939 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:57:30.457959 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:57:30.457979 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:57:30.457999 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:57:30.458018 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:57:30.458038 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:57:30.458057 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:57:30.458076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:57:30.458100 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:57:30.459906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:57:30.459935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:57:30.459959 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:57:30.459984 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:57:30.460011 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:57:30.460033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:57:30.460056 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:57:30.460078 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:57:30.460136 systemd-journald[1562]: Collecting audit messages is disabled.
Oct 8 19:57:30.460178 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:57:30.460202 systemd-journald[1562]: Journal started
Oct 8 19:57:30.460247 systemd-journald[1562]: Runtime Journal (/run/log/journal/ec21fea6edddf3d1c6778fd7d4528013) is 4.8M, max 38.6M, 33.7M free.
Oct 8 19:57:30.468405 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:57:30.496378 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:57:30.496464 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:57:30.525292 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:57:30.533294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:57:30.546408 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:57:30.553296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:57:30.604619 kernel: ACPI: bus type drm_connector registered
Oct 8 19:57:30.611432 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:57:30.618316 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:57:30.626393 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:57:30.628712 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:57:30.629138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:57:30.631223 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:57:30.634396 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:57:30.636064 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:57:30.637872 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:57:30.676437 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:57:30.686738 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:57:30.691187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:57:30.726026 systemd-journald[1562]: Time spent on flushing to /var/log/journal/ec21fea6edddf3d1c6778fd7d4528013 is 61.054ms for 953 entries.
Oct 8 19:57:30.726026 systemd-journald[1562]: System Journal (/var/log/journal/ec21fea6edddf3d1c6778fd7d4528013) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:57:30.795310 systemd-journald[1562]: Received client request to flush runtime journal.
Oct 8 19:57:30.749050 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Oct 8 19:57:30.749075 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Oct 8 19:57:30.751767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:57:30.763616 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:57:30.765457 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:57:30.782761 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:57:30.791532 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:57:30.806492 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:57:30.814557 udevadm[1628]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 19:57:30.851037 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:57:30.859549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:57:30.883989 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Oct 8 19:57:30.884018 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Oct 8 19:57:30.891223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:57:31.530639 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:57:31.545562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:57:31.596715 systemd-udevd[1645]: Using default interface naming scheme 'v255'.
Oct 8 19:57:31.639789 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:57:31.652457 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:57:31.692465 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:57:31.714302 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1657)
Oct 8 19:57:31.741950 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Oct 8 19:57:31.745325 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1657)
Oct 8 19:57:31.785351 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:57:31.814567 (udev-worker)[1647]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:57:31.896292 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Oct 8 19:57:31.911636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:57:31.918640 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 8 19:57:31.921299 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Oct 8 19:57:31.929326 kernel: ACPI: button: Power Button [PWRF]
Oct 8 19:57:31.933308 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Oct 8 19:57:31.934288 kernel: ACPI: button: Sleep Button [SLPF]
Oct 8 19:57:31.940331 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 19:57:31.946940 systemd-networkd[1650]: lo: Link UP
Oct 8 19:57:31.949367 systemd-networkd[1650]: lo: Gained carrier
Oct 8 19:57:31.953665 systemd-networkd[1650]: Enumeration completed
Oct 8 19:57:31.953916 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:57:31.957352 systemd-networkd[1650]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:57:31.957361 systemd-networkd[1650]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:57:31.960929 systemd-networkd[1650]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:57:31.960983 systemd-networkd[1650]: eth0: Link UP
Oct 8 19:57:31.966570 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:57:31.966646 systemd-networkd[1650]: eth0: Gained carrier
Oct 8 19:57:31.966673 systemd-networkd[1650]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:57:31.985415 systemd-networkd[1650]: eth0: DHCPv4 address 172.31.20.47/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 8 19:57:32.025091 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1660)
Oct 8 19:57:32.254554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Oct 8 19:57:32.315847 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:57:32.328226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:57:32.336457 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:57:32.371787 lvm[1769]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:57:32.404284 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:57:32.406211 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:57:32.416533 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:57:32.436687 lvm[1772]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:57:32.491620 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:57:32.496032 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:57:32.504860 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:57:32.504913 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:57:32.509478 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:57:32.518930 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:57:32.527527 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:57:32.533555 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:57:32.535502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:57:32.543547 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:57:32.548077 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:57:32.566467 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:57:32.595050 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:57:32.618207 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:57:32.632302 kernel: loop0: detected capacity change from 0 to 140768
Oct 8 19:57:32.678804 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:57:32.710423 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:57:32.763812 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:57:32.799329 kernel: loop1: detected capacity change from 0 to 61336
Oct 8 19:57:32.892303 kernel: loop2: detected capacity change from 0 to 211296
Oct 8 19:57:32.949316 kernel: loop3: detected capacity change from 0 to 142488
Oct 8 19:57:33.064531 kernel: loop4: detected capacity change from 0 to 140768
Oct 8 19:57:33.108306 kernel: loop5: detected capacity change from 0 to 61336
Oct 8 19:57:33.124315 kernel: loop6: detected capacity change from 0 to 211296
Oct 8 19:57:33.142303 kernel: loop7: detected capacity change from 0 to 142488
Oct 8 19:57:33.165563 (sd-merge)[1794]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Oct 8 19:57:33.166242 (sd-merge)[1794]: Merged extensions into '/usr'.
Oct 8 19:57:33.172000 systemd[1]: Reloading requested from client PID 1780 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:57:33.172019 systemd[1]: Reloading...
Oct 8 19:57:33.266350 zram_generator::config[1818]: No configuration found.
Oct 8 19:57:33.528036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:57:33.673118 systemd[1]: Reloading finished in 500 ms.
Oct 8 19:57:33.692995 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:57:33.706579 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:57:33.725460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:57:33.737809 systemd[1]: Reloading requested from client PID 1876 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:57:33.737830 systemd[1]: Reloading...
Oct 8 19:57:33.807488 systemd-networkd[1650]: eth0: Gained IPv6LL
Oct 8 19:57:33.820665 systemd-tmpfiles[1877]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:57:33.821739 systemd-tmpfiles[1877]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:57:33.823373 systemd-tmpfiles[1877]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:57:33.823793 systemd-tmpfiles[1877]: ACLs are not supported, ignoring.
Oct 8 19:57:33.823879 systemd-tmpfiles[1877]: ACLs are not supported, ignoring.
Oct 8 19:57:33.835399 systemd-tmpfiles[1877]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:57:33.835416 systemd-tmpfiles[1877]: Skipping /boot
Oct 8 19:57:33.853493 systemd-tmpfiles[1877]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:57:33.853513 systemd-tmpfiles[1877]: Skipping /boot
Oct 8 19:57:33.931311 zram_generator::config[1903]: No configuration found.
Oct 8 19:57:34.040514 ldconfig[1776]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:57:34.094062 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:57:34.168549 systemd[1]: Reloading finished in 429 ms.
Oct 8 19:57:34.185792 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:57:34.187610 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:57:34.210777 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:57:34.231540 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:57:34.253491 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:57:34.261592 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:57:34.273001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:57:34.294177 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:57:34.335811 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:34.336154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:57:34.352979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:57:34.363946 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:57:34.390734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:57:34.399502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:57:34.399707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:34.401109 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:57:34.406111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:57:34.406470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:57:34.418108 augenrules[1991]: No rules
Oct 8 19:57:34.422940 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:57:34.423162 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:57:34.427720 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:57:34.433951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:57:34.436591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:57:34.455041 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:57:34.481966 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:57:34.493458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:34.494156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:57:34.508760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:57:34.521724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:57:34.526623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:57:34.550167 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:57:34.551563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:57:34.552004 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:57:34.559843 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:57:34.561220 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:57:34.561426 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:34.569342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:57:34.569615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:57:34.571972 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:57:34.572216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:57:34.576071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:57:34.576499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:57:34.579238 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:57:34.582921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:57:34.604673 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:57:34.611319 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:57:34.611426 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:57:34.626934 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:57:34.632814 systemd-resolved[1972]: Positive Trust Anchors:
Oct 8 19:57:34.632830 systemd-resolved[1972]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:57:34.632886 systemd-resolved[1972]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:57:34.638066 systemd-resolved[1972]: Defaulting to hostname 'linux'.
Oct 8 19:57:34.641004 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:57:34.642903 systemd[1]: Reached target network.target - Network.
Oct 8 19:57:34.644007 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:57:34.648539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:57:34.649897 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:57:34.651146 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:57:34.652631 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:57:34.654873 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:57:34.656105 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:57:34.657576 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:57:34.659119 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:57:34.659156 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:57:34.660249 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:57:34.661764 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:57:34.669175 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:57:34.672008 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:57:34.675732 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:57:34.676998 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:57:34.678777 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:57:34.680317 systemd[1]: System is tainted: cgroupsv1
Oct 8 19:57:34.680362 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:57:34.680384 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:57:34.685078 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:57:34.689570 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 19:57:34.693611 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:57:34.704628 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:57:34.712670 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:57:34.716510 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:57:34.727342 jq[2036]: false
Oct 8 19:57:34.733412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:57:34.745777 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:57:34.756804 systemd[1]: Started ntpd.service - Network Time Service.
Oct 8 19:57:34.792150 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 19:57:34.807875 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:57:34.830585 systemd[1]: Starting setup-oem.service - Setup OEM...
Oct 8 19:57:34.836904 dbus-daemon[2035]: [system] SELinux support is enabled
Oct 8 19:57:34.839822 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:57:34.851527 extend-filesystems[2037]: Found loop4
Oct 8 19:57:34.852664 dbus-daemon[2035]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1650 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found loop5
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found loop6
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found loop7
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found nvme0n1
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found nvme0n1p1
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found nvme0n1p2
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found nvme0n1p3
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found usr
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found nvme0n1p4
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found nvme0n1p6
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found nvme0n1p7
Oct 8 19:57:34.863566 extend-filesystems[2037]: Found nvme0n1p9
Oct 8 19:57:34.863566 extend-filesystems[2037]: Checking size of /dev/nvme0n1p9
Oct 8 19:57:34.863844 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:57:34.902570 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:52:25 UTC 2024 (1): Starting
Oct 8 19:57:34.902570 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Oct 8 19:57:34.902570 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: ----------------------------------------------------
Oct 8 19:57:34.902570 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: ntp-4 is maintained by Network Time Foundation,
Oct 8 19:57:34.902570 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Oct 8 19:57:34.902570 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: corporation. Support and training for ntp-4 are
Oct 8 19:57:34.902570 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: available at https://www.nwtime.org/support
Oct 8 19:57:34.902570 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: ----------------------------------------------------
Oct 8 19:57:34.880293 ntpd[2041]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:52:25 UTC 2024 (1): Starting
Oct 8 19:57:34.939372 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: proto: precision = 0.093 usec (-23)
Oct 8 19:57:34.939372 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: basedate set to 2024-09-26
Oct 8 19:57:34.939372 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: gps base set to 2024-09-29 (week 2334)
Oct 8 19:57:34.939492 extend-filesystems[2037]: Resized partition /dev/nvme0n1p9
Oct 8 19:57:34.903507 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:57:34.880326 ntpd[2041]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Oct 8 19:57:34.905061 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:57:34.880338 ntpd[2041]: ----------------------------------------------------
Oct 8 19:57:34.923512 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:57:34.880348 ntpd[2041]: ntp-4 is maintained by Network Time Foundation,
Oct 8 19:57:34.880359 ntpd[2041]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Oct 8 19:57:34.880369 ntpd[2041]: corporation. Support and training for ntp-4 are
Oct 8 19:57:34.880380 ntpd[2041]: available at https://www.nwtime.org/support
Oct 8 19:57:34.880391 ntpd[2041]: ----------------------------------------------------
Oct 8 19:57:34.912333 ntpd[2041]: proto: precision = 0.093 usec (-23)
Oct 8 19:57:34.912764 ntpd[2041]: basedate set to 2024-09-26
Oct 8 19:57:34.912781 ntpd[2041]: gps base set to 2024-09-29 (week 2334)
Oct 8 19:57:34.947427 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:57:34.951412 extend-filesystems[2075]: resize2fs 1.47.1 (20-May-2024)
Oct 8 19:57:34.965671 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Oct 8 19:57:34.962209 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:57:34.953997 ntpd[2041]: Listen and drop on 0 v6wildcard [::]:123
Oct 8 19:57:34.965872 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: Listen and drop on 0 v6wildcard [::]:123
Oct 8 19:57:34.965872 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Oct 8 19:57:34.954061 ntpd[2041]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Oct 8 19:57:34.966720 ntpd[2041]: Listen normally on 2 lo 127.0.0.1:123
Oct 8 19:57:34.966822 ntpd[2041]: Listen normally on 3 eth0 172.31.20.47:123
Oct 8 19:57:34.966893 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: Listen normally on 2 lo 127.0.0.1:123
Oct 8 19:57:34.966893 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: Listen normally on 3 eth0 172.31.20.47:123
Oct 8 19:57:34.966893 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: Listen normally on 4 lo [::1]:123
Oct 8 19:57:34.966870 ntpd[2041]: Listen normally on 4 lo [::1]:123
Oct 8 19:57:34.967042 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: Listen normally on 5 eth0 [fe80::4e0:15ff:fef0:9da5%2]:123
Oct 8 19:57:34.967042 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: Listening on routing socket on fd #22 for interface updates
Oct 8 19:57:34.966923 ntpd[2041]: Listen normally on 5 eth0 [fe80::4e0:15ff:fef0:9da5%2]:123
Oct 8 19:57:34.966970 ntpd[2041]: Listening on routing socket on fd #22 for interface updates
Oct 8 19:57:34.980457 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:57:34.981448 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:57:34.981448 ntpd[2041]: 8 Oct 19:57:34 ntpd[2041]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:57:34.981154 ntpd[2041]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:57:34.980822 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:57:34.981189 ntpd[2041]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:57:34.993246 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 19:57:35.022912 jq[2074]: true
Oct 8 19:57:35.021869 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:57:35.022238 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:57:35.042977 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:57:35.044078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:57:35.080502 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Oct 8 19:57:35.105753 dbus-daemon[2035]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 8 19:57:35.112428 (ntainerd)[2101]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:57:35.115407 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:57:35.134962 extend-filesystems[2075]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Oct 8 19:57:35.134962 extend-filesystems[2075]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 8 19:57:35.134962 extend-filesystems[2075]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Oct 8 19:57:35.115443 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:57:35.153069 jq[2094]: true
Oct 8 19:57:35.153501 extend-filesystems[2037]: Resized filesystem in /dev/nvme0n1p9
Oct 8 19:57:35.169164 update_engine[2068]: I20241008 19:57:35.141386 2068 main.cc:92] Flatcar Update Engine starting
Oct 8 19:57:35.132644 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Oct 8 19:57:35.174016 update_engine[2068]: I20241008 19:57:35.171151 2068 update_check_scheduler.cc:74] Next update check in 2m39s
Oct 8 19:57:35.134450 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:57:35.134490 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:57:35.138920 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 19:57:35.145979 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 19:57:35.183939 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:57:35.191529 tar[2083]: linux-amd64/helm
Oct 8 19:57:35.239638 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:57:35.248483 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:57:35.260298 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2125)
Oct 8 19:57:35.282758 coreos-metadata[2033]: Oct 08 19:57:35.282 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 8 19:57:35.288944 systemd[1]: Finished setup-oem.service - Setup OEM.
Oct 8 19:57:35.311697 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Oct 8 19:57:35.319208 coreos-metadata[2033]: Oct 08 19:57:35.317 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Oct 8 19:57:35.319208 coreos-metadata[2033]: Oct 08 19:57:35.318 INFO Fetch successful
Oct 8 19:57:35.321348 coreos-metadata[2033]: Oct 08 19:57:35.321 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Oct 8 19:57:35.323868 coreos-metadata[2033]: Oct 08 19:57:35.322 INFO Fetch successful
Oct 8 19:57:35.323868 coreos-metadata[2033]: Oct 08 19:57:35.322 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Oct 8 19:57:35.339375 coreos-metadata[2033]: Oct 08 19:57:35.329 INFO Fetch successful
Oct 8 19:57:35.339375 coreos-metadata[2033]: Oct 08 19:57:35.329 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Oct 8 19:57:35.348995 coreos-metadata[2033]: Oct 08 19:57:35.343 INFO Fetch successful
Oct 8 19:57:35.348995 coreos-metadata[2033]: Oct 08 19:57:35.343 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Oct 8 19:57:35.349601 coreos-metadata[2033]: Oct 08 19:57:35.349 INFO Fetch failed with 404: resource not found
Oct 8 19:57:35.349601 coreos-metadata[2033]: Oct 08 19:57:35.349 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Oct 8 19:57:35.351603 coreos-metadata[2033]: Oct 08 19:57:35.351 INFO Fetch successful
Oct 8 19:57:35.351603 coreos-metadata[2033]: Oct 08 19:57:35.351 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Oct 8 19:57:35.363562 coreos-metadata[2033]: Oct 08 19:57:35.363 INFO Fetch successful
Oct 8 19:57:35.363562 coreos-metadata[2033]: Oct 08 19:57:35.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Oct 8 19:57:35.364996 coreos-metadata[2033]: Oct 08 19:57:35.364 INFO Fetch successful
Oct 8 19:57:35.372389 coreos-metadata[2033]: Oct 08 19:57:35.369 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Oct 8 19:57:35.372844 coreos-metadata[2033]: Oct 08 19:57:35.372 INFO Fetch successful
Oct 8 19:57:35.372844 coreos-metadata[2033]: Oct 08 19:57:35.372 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Oct 8 19:57:35.393472 coreos-metadata[2033]: Oct 08 19:57:35.387 INFO Fetch successful
Oct 8 19:57:35.441981 bash[2160]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:57:35.434007 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 19:57:35.450949 systemd[1]: Starting sshkeys.service...
Oct 8 19:57:35.578057 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 19:57:35.582756 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 19:57:35.661119 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 8 19:57:35.677426 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 8 19:57:35.772032 systemd-logind[2065]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 8 19:57:35.793432 systemd-logind[2065]: Watching system buttons on /dev/input/event3 (Sleep Button)
Oct 8 19:57:35.793466 systemd-logind[2065]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 8 19:57:35.801442 systemd-logind[2065]: New seat seat0.
Oct 8 19:57:35.813432 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: Initializing new seelog logger
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: New Seelog Logger Creation Complete
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: 2024/10/08 19:57:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: 2024/10/08 19:57:35 processing appconfig overrides
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: 2024/10/08 19:57:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: 2024/10/08 19:57:35 processing appconfig overrides
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: 2024/10/08 19:57:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: 2024-10-08 19:57:35 INFO Proxy environment variables:
Oct 8 19:57:35.878984 amazon-ssm-agent[2143]: 2024/10/08 19:57:35 processing appconfig overrides
Oct 8 19:57:35.888077 amazon-ssm-agent[2143]: 2024/10/08 19:57:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:57:35.888077 amazon-ssm-agent[2143]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:57:35.888077 amazon-ssm-agent[2143]: 2024/10/08 19:57:35 processing appconfig overrides
Oct 8 19:57:35.890228 locksmithd[2130]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 19:57:35.981742 amazon-ssm-agent[2143]: 2024-10-08 19:57:35 INFO no_proxy:
Oct 8 19:57:36.078260 amazon-ssm-agent[2143]: 2024-10-08 19:57:35 INFO https_proxy:
Oct 8 19:57:36.186777 coreos-metadata[2226]: Oct 08 19:57:36.186 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 8 19:57:36.193450 amazon-ssm-agent[2143]: 2024-10-08 19:57:35 INFO http_proxy:
Oct 8 19:57:36.196301 coreos-metadata[2226]: Oct 08 19:57:36.195 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Oct 8 19:57:36.197197 coreos-metadata[2226]: Oct 08 19:57:36.197 INFO Fetch successful
Oct 8 19:57:36.197301 coreos-metadata[2226]: Oct 08 19:57:36.197 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Oct 8 19:57:36.201734 coreos-metadata[2226]: Oct 08 19:57:36.201 INFO Fetch successful
Oct 8 19:57:36.209963 unknown[2226]: wrote ssh authorized keys file for user: core
Oct 8 19:57:36.295309 amazon-ssm-agent[2143]: 2024-10-08 19:57:35 INFO Checking if agent identity type OnPrem can be assumed
Oct 8 19:57:36.313336 update-ssh-keys[2276]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:57:36.314662 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 8 19:57:36.336058 systemd[1]: Finished sshkeys.service.
Oct 8 19:57:36.336906 sshd_keygen[2076]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:57:36.360699 dbus-daemon[2035]: [system] Successfully activated service 'org.freedesktop.hostname1'
Oct 8 19:57:36.362151 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Oct 8 19:57:36.372436 dbus-daemon[2035]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2113 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Oct 8 19:57:36.384883 systemd[1]: Starting polkit.service - Authorization Manager...
Oct 8 19:57:36.399603 amazon-ssm-agent[2143]: 2024-10-08 19:57:35 INFO Checking if agent identity type EC2 can be assumed
Oct 8 19:57:36.439221 polkitd[2288]: Started polkitd version 121
Oct 8 19:57:36.478108 polkitd[2288]: Loading rules from directory /etc/polkit-1/rules.d
Oct 8 19:57:36.481120 polkitd[2288]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 8 19:57:36.484073 polkitd[2288]: Finished loading, compiling and executing 2 rules
Oct 8 19:57:36.488573 dbus-daemon[2035]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Oct 8 19:57:36.489578 systemd[1]: Started polkit.service - Authorization Manager.
Oct 8 19:57:36.488905 polkitd[2288]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 8 19:57:36.501175 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO Agent will take identity from EC2
Oct 8 19:57:36.513672 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:57:36.529949 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:57:36.589790 systemd-resolved[1972]: System hostname changed to 'ip-172-31-20-47'.
Oct 8 19:57:36.589793 systemd-hostnamed[2113]: Hostname set to (transient)
Oct 8 19:57:36.592598 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:57:36.593415 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:57:36.610040 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:57:36.624394 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:57:36.665773 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:57:36.698521 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:57:36.713329 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 19:57:36.717175 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:57:36.725162 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:57:36.787350 containerd[2101]: time="2024-10-08T19:57:36.786394088Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 19:57:36.830182 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:57:36.906527 containerd[2101]: time="2024-10-08T19:57:36.906383899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:36.909303 containerd[2101]: time="2024-10-08T19:57:36.908830246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:36.909303 containerd[2101]: time="2024-10-08T19:57:36.908883639Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 19:57:36.909303 containerd[2101]: time="2024-10-08T19:57:36.908908528Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:57:36.909303 containerd[2101]: time="2024-10-08T19:57:36.909098582Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:57:36.909303 containerd[2101]: time="2024-10-08T19:57:36.909119264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:36.909303 containerd[2101]: time="2024-10-08T19:57:36.909188431Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:36.909303 containerd[2101]: time="2024-10-08T19:57:36.909209747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:36.910044 containerd[2101]: time="2024-10-08T19:57:36.909993057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:36.911682 containerd[2101]: time="2024-10-08T19:57:36.910145205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:36.911682 containerd[2101]: time="2024-10-08T19:57:36.911317916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:36.911682 containerd[2101]: time="2024-10-08T19:57:36.911340635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:36.911682 containerd[2101]: time="2024-10-08T19:57:36.911504176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:36.912426 containerd[2101]: time="2024-10-08T19:57:36.912394400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:36.913418 containerd[2101]: time="2024-10-08T19:57:36.913389645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:36.913575 containerd[2101]: time="2024-10-08T19:57:36.913508320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 19:57:36.913734 containerd[2101]: time="2024-10-08T19:57:36.913716160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 19:57:36.914504 containerd[2101]: time="2024-10-08T19:57:36.913934041Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 19:57:36.929404 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Oct 8 19:57:36.939060 containerd[2101]: time="2024-10-08T19:57:36.935394998Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 19:57:36.939060 containerd[2101]: time="2024-10-08T19:57:36.935582690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 19:57:36.939060 containerd[2101]: time="2024-10-08T19:57:36.935619049Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 19:57:36.939060 containerd[2101]: time="2024-10-08T19:57:36.935670310Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:57:36.939060 containerd[2101]: time="2024-10-08T19:57:36.935694842Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:57:36.939060 containerd[2101]: time="2024-10-08T19:57:36.935967913Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:57:36.954206 containerd[2101]: time="2024-10-08T19:57:36.954133671Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967368610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967583252Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967625005Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967651556Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967674384Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967708664Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967734406Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967816928Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..."
type=io.containerd.service.v1 Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967839958Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967859158Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967894069Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967935282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967969166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.968785 containerd[2101]: time="2024-10-08T19:57:36.967987755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968008217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968045251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968067243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968098940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968134542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968154971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968383507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968420782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968440156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968461262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968501941Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968540469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968572250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.970845 containerd[2101]: time="2024-10-08T19:57:36.968589788Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:57:36.972660 containerd[2101]: time="2024-10-08T19:57:36.968671927Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:57:36.992195 containerd[2101]: time="2024-10-08T19:57:36.968698379Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:57:36.992195 containerd[2101]: time="2024-10-08T19:57:36.973985825Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:57:36.992195 containerd[2101]: time="2024-10-08T19:57:36.974047683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:57:36.992195 containerd[2101]: time="2024-10-08T19:57:36.974065238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:57:36.992195 containerd[2101]: time="2024-10-08T19:57:36.974107521Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:57:36.992195 containerd[2101]: time="2024-10-08T19:57:36.974132337Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:57:36.992195 containerd[2101]: time="2024-10-08T19:57:36.974163585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 19:57:36.992754 containerd[2101]: time="2024-10-08T19:57:36.974726914Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:57:36.992754 containerd[2101]: time="2024-10-08T19:57:36.984497565Z" level=info msg="Connect containerd service" Oct 8 19:57:36.992754 containerd[2101]: time="2024-10-08T19:57:36.984591501Z" level=info msg="using legacy CRI server" Oct 8 19:57:36.992754 containerd[2101]: time="2024-10-08T19:57:36.984622048Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:57:36.992754 containerd[2101]: time="2024-10-08T19:57:36.984779146Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:57:37.005292 containerd[2101]: time="2024-10-08T19:57:37.005202702Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:57:37.009915 containerd[2101]: time="2024-10-08T19:57:37.009762482Z" level=info msg="Start subscribing containerd event" Oct 8 19:57:37.016332 containerd[2101]: time="2024-10-08T19:57:37.012553476Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:57:37.016332 containerd[2101]: time="2024-10-08T19:57:37.012635596Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 8 19:57:37.016332 containerd[2101]: time="2024-10-08T19:57:37.012684586Z" level=info msg="Start recovering state" Oct 8 19:57:37.016332 containerd[2101]: time="2024-10-08T19:57:37.012896754Z" level=info msg="Start event monitor" Oct 8 19:57:37.016332 containerd[2101]: time="2024-10-08T19:57:37.012922555Z" level=info msg="Start snapshots syncer" Oct 8 19:57:37.016332 containerd[2101]: time="2024-10-08T19:57:37.012937065Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:57:37.016332 containerd[2101]: time="2024-10-08T19:57:37.012948062Z" level=info msg="Start streaming server" Oct 8 19:57:37.016332 containerd[2101]: time="2024-10-08T19:57:37.015959572Z" level=info msg="containerd successfully booted in 0.237109s" Oct 8 19:57:37.013181 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:57:37.030165 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Oct 8 19:57:37.132351 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO [amazon-ssm-agent] Starting Core Agent Oct 8 19:57:37.214267 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO [amazon-ssm-agent] registrar detected. Attempting registration Oct 8 19:57:37.214267 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO [Registrar] Starting registrar module Oct 8 19:57:37.214267 amazon-ssm-agent[2143]: 2024-10-08 19:57:36 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Oct 8 19:57:37.214267 amazon-ssm-agent[2143]: 2024-10-08 19:57:37 INFO [EC2Identity] EC2 registration was successful. 
Oct 8 19:57:37.214267 amazon-ssm-agent[2143]: 2024-10-08 19:57:37 INFO [CredentialRefresher] credentialRefresher has started Oct 8 19:57:37.214267 amazon-ssm-agent[2143]: 2024-10-08 19:57:37 INFO [CredentialRefresher] Starting credentials refresher loop Oct 8 19:57:37.214267 amazon-ssm-agent[2143]: 2024-10-08 19:57:37 INFO EC2RoleProvider Successfully connected with instance profile role credentials Oct 8 19:57:37.229980 amazon-ssm-agent[2143]: 2024-10-08 19:57:37 INFO [CredentialRefresher] Next credential rotation will be in 31.658327938283332 minutes Oct 8 19:57:37.410363 tar[2083]: linux-amd64/LICENSE Oct 8 19:57:37.410855 tar[2083]: linux-amd64/README.md Oct 8 19:57:37.433832 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:57:37.889512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:57:37.892225 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:57:37.893909 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:57:37.896636 systemd[1]: Startup finished in 10.697s (kernel) + 9.264s (userspace) = 19.962s. 
Oct 8 19:57:38.251288 amazon-ssm-agent[2143]: 2024-10-08 19:57:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Oct 8 19:57:38.356120 amazon-ssm-agent[2143]: 2024-10-08 19:57:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2341) started Oct 8 19:57:38.455641 amazon-ssm-agent[2143]: 2024-10-08 19:57:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Oct 8 19:57:38.911139 kubelet[2330]: E1008 19:57:38.910990 2330 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:57:38.915400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:57:38.916330 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:57:42.331028 systemd-resolved[1972]: Clock change detected. Flushing caches. Oct 8 19:57:43.387096 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:57:43.392808 systemd[1]: Started sshd@0-172.31.20.47:22-139.178.68.195:59904.service - OpenSSH per-connection server daemon (139.178.68.195:59904). Oct 8 19:57:43.577746 sshd[2356]: Accepted publickey for core from 139.178.68.195 port 59904 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:57:43.579682 sshd[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:43.611722 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:57:43.625164 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:57:43.634484 systemd-logind[2065]: New session 1 of user core. 
Oct 8 19:57:43.651687 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:57:43.662920 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:57:43.678328 (systemd)[2362]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:57:43.917172 systemd[2362]: Queued start job for default target default.target. Oct 8 19:57:43.918159 systemd[2362]: Created slice app.slice - User Application Slice. Oct 8 19:57:43.918194 systemd[2362]: Reached target paths.target - Paths. Oct 8 19:57:43.918213 systemd[2362]: Reached target timers.target - Timers. Oct 8 19:57:43.925537 systemd[2362]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:57:43.941724 systemd[2362]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:57:43.942059 systemd[2362]: Reached target sockets.target - Sockets. Oct 8 19:57:43.942117 systemd[2362]: Reached target basic.target - Basic System. Oct 8 19:57:43.942293 systemd[2362]: Reached target default.target - Main User Target. Oct 8 19:57:43.942770 systemd[2362]: Startup finished in 254ms. Oct 8 19:57:43.943502 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:57:43.952148 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:57:44.111151 systemd[1]: Started sshd@1-172.31.20.47:22-139.178.68.195:59910.service - OpenSSH per-connection server daemon (139.178.68.195:59910). Oct 8 19:57:44.291736 sshd[2374]: Accepted publickey for core from 139.178.68.195 port 59910 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:57:44.294028 sshd[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:44.299893 systemd-logind[2065]: New session 2 of user core. Oct 8 19:57:44.305145 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 8 19:57:44.431780 sshd[2374]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:44.437114 systemd[1]: sshd@1-172.31.20.47:22-139.178.68.195:59910.service: Deactivated successfully. Oct 8 19:57:44.445056 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:57:44.445903 systemd-logind[2065]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:57:44.447049 systemd-logind[2065]: Removed session 2. Oct 8 19:57:44.460961 systemd[1]: Started sshd@2-172.31.20.47:22-139.178.68.195:59920.service - OpenSSH per-connection server daemon (139.178.68.195:59920). Oct 8 19:57:44.648040 sshd[2382]: Accepted publickey for core from 139.178.68.195 port 59920 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:57:44.650534 sshd[2382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:44.666790 systemd-logind[2065]: New session 3 of user core. Oct 8 19:57:44.675505 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:57:44.816804 sshd[2382]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:44.820759 systemd[1]: sshd@2-172.31.20.47:22-139.178.68.195:59920.service: Deactivated successfully. Oct 8 19:57:44.831987 systemd-logind[2065]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:57:44.833605 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:57:44.835930 systemd-logind[2065]: Removed session 3. Oct 8 19:57:44.851838 systemd[1]: Started sshd@3-172.31.20.47:22-139.178.68.195:59926.service - OpenSSH per-connection server daemon (139.178.68.195:59926). Oct 8 19:57:45.024091 sshd[2390]: Accepted publickey for core from 139.178.68.195 port 59926 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:57:45.026035 sshd[2390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:45.037953 systemd-logind[2065]: New session 4 of user core. 
Oct 8 19:57:45.044855 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:57:45.200520 sshd[2390]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:45.205869 systemd[1]: sshd@3-172.31.20.47:22-139.178.68.195:59926.service: Deactivated successfully. Oct 8 19:57:45.211779 systemd-logind[2065]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:57:45.212721 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:57:45.214738 systemd-logind[2065]: Removed session 4. Oct 8 19:57:45.229805 systemd[1]: Started sshd@4-172.31.20.47:22-139.178.68.195:59934.service - OpenSSH per-connection server daemon (139.178.68.195:59934). Oct 8 19:57:45.433939 sshd[2398]: Accepted publickey for core from 139.178.68.195 port 59934 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:57:45.435173 sshd[2398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:45.448718 systemd-logind[2065]: New session 5 of user core. Oct 8 19:57:45.451793 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:57:45.604622 sudo[2402]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:57:45.606484 sudo[2402]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:57:45.639261 sudo[2402]: pam_unix(sudo:session): session closed for user root Oct 8 19:57:45.663030 sshd[2398]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:45.670630 systemd[1]: sshd@4-172.31.20.47:22-139.178.68.195:59934.service: Deactivated successfully. Oct 8 19:57:45.678536 systemd-logind[2065]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:57:45.680083 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:57:45.696038 systemd[1]: Started sshd@5-172.31.20.47:22-139.178.68.195:59942.service - OpenSSH per-connection server daemon (139.178.68.195:59942). Oct 8 19:57:45.698907 systemd-logind[2065]: Removed session 5. 
Oct 8 19:57:45.880812 sshd[2407]: Accepted publickey for core from 139.178.68.195 port 59942 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:57:45.885281 sshd[2407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:45.911406 systemd-logind[2065]: New session 6 of user core. Oct 8 19:57:45.922843 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:57:46.028289 sudo[2412]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:57:46.028729 sudo[2412]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:57:46.048872 sudo[2412]: pam_unix(sudo:session): session closed for user root Oct 8 19:57:46.056340 sudo[2411]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:57:46.056851 sudo[2411]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:57:46.077922 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:57:46.088775 auditctl[2415]: No rules Oct 8 19:57:46.089284 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:57:46.089671 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:57:46.095107 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:57:46.141396 augenrules[2434]: No rules Oct 8 19:57:46.143764 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:57:46.153899 sudo[2411]: pam_unix(sudo:session): session closed for user root Oct 8 19:57:46.178457 sshd[2407]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:46.185806 systemd[1]: sshd@5-172.31.20.47:22-139.178.68.195:59942.service: Deactivated successfully. Oct 8 19:57:46.192387 systemd-logind[2065]: Session 6 logged out. Waiting for processes to exit. 
Oct 8 19:57:46.192757 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:57:46.194348 systemd-logind[2065]: Removed session 6. Oct 8 19:57:46.207236 systemd[1]: Started sshd@6-172.31.20.47:22-139.178.68.195:59944.service - OpenSSH per-connection server daemon (139.178.68.195:59944). Oct 8 19:57:46.381562 sshd[2443]: Accepted publickey for core from 139.178.68.195 port 59944 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:57:46.384525 sshd[2443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:46.392954 systemd-logind[2065]: New session 7 of user core. Oct 8 19:57:46.409879 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:57:46.515903 sudo[2447]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:57:46.516588 sudo[2447]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:57:47.348835 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:57:47.351290 (dockerd)[2463]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:57:48.035241 dockerd[2463]: time="2024-10-08T19:57:48.035180614Z" level=info msg="Starting up" Oct 8 19:57:49.099266 dockerd[2463]: time="2024-10-08T19:57:49.099207005Z" level=info msg="Loading containers: start." Oct 8 19:57:49.298393 kernel: Initializing XFRM netlink socket Oct 8 19:57:49.348271 (udev-worker)[2484]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:57:49.371869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:57:49.377696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 8 19:57:49.481061 systemd-networkd[1650]: docker0: Link UP Oct 8 19:57:49.530685 dockerd[2463]: time="2024-10-08T19:57:49.530641405Z" level=info msg="Loading containers: done." Oct 8 19:57:49.597998 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck883327464-merged.mount: Deactivated successfully. Oct 8 19:57:49.657881 dockerd[2463]: time="2024-10-08T19:57:49.657472732Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:57:49.657881 dockerd[2463]: time="2024-10-08T19:57:49.657598944Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 19:57:49.657881 dockerd[2463]: time="2024-10-08T19:57:49.657736175Z" level=info msg="Daemon has completed initialization" Oct 8 19:57:49.790079 dockerd[2463]: time="2024-10-08T19:57:49.789914888Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:57:49.790237 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:57:49.868190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:57:49.879457 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:57:49.974463 kubelet[2609]: E1008 19:57:49.974239 2609 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:57:49.984509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:57:49.984801 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:57:51.235651 containerd[2101]: time="2024-10-08T19:57:51.235603096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 8 19:57:52.053520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311696384.mount: Deactivated successfully.
Oct 8 19:57:55.130207 containerd[2101]: time="2024-10-08T19:57:55.130148426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:55.132959 containerd[2101]: time="2024-10-08T19:57:55.132894569Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841"
Oct 8 19:57:55.136409 containerd[2101]: time="2024-10-08T19:57:55.135545286Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:55.141204 containerd[2101]: time="2024-10-08T19:57:55.141152656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:55.144233 containerd[2101]: time="2024-10-08T19:57:55.143955000Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 3.908307257s"
Oct 8 19:57:55.144617 containerd[2101]: time="2024-10-08T19:57:55.144586151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\""
Oct 8 19:57:55.179492 containerd[2101]: time="2024-10-08T19:57:55.179448089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 8 19:57:58.873853 containerd[2101]: time="2024-10-08T19:57:58.873795480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:58.875615 containerd[2101]: time="2024-10-08T19:57:58.875175073Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673"
Oct 8 19:57:58.877778 containerd[2101]: time="2024-10-08T19:57:58.877384122Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:58.881060 containerd[2101]: time="2024-10-08T19:57:58.881020581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:58.882335 containerd[2101]: time="2024-10-08T19:57:58.882296675Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 3.702800757s"
Oct 8 19:57:58.882493 containerd[2101]: time="2024-10-08T19:57:58.882472000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\""
Oct 8 19:57:58.912253 containerd[2101]: time="2024-10-08T19:57:58.912128361Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 8 19:58:00.234930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 19:58:00.252065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:58:01.330890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:58:01.349318 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:58:01.590650 kubelet[2711]: E1008 19:58:01.590332 2711 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:58:01.599302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:58:01.599604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:58:02.008012 containerd[2101]: time="2024-10-08T19:58:02.007869292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:02.009824 containerd[2101]: time="2024-10-08T19:58:02.009560222Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456"
Oct 8 19:58:02.017605 containerd[2101]: time="2024-10-08T19:58:02.017554610Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:02.037237 containerd[2101]: time="2024-10-08T19:58:02.029908029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:02.042947 containerd[2101]: time="2024-10-08T19:58:02.042892881Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 3.1306186s"
Oct 8 19:58:02.043166 containerd[2101]: time="2024-10-08T19:58:02.043141657Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\""
Oct 8 19:58:02.106668 containerd[2101]: time="2024-10-08T19:58:02.106628387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 8 19:58:03.796000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3600797467.mount: Deactivated successfully.
Oct 8 19:58:04.518314 containerd[2101]: time="2024-10-08T19:58:04.518259288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:04.520038 containerd[2101]: time="2024-10-08T19:58:04.519784045Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750"
Oct 8 19:58:04.524624 containerd[2101]: time="2024-10-08T19:58:04.521446666Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:04.527932 containerd[2101]: time="2024-10-08T19:58:04.527856933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:04.529293 containerd[2101]: time="2024-10-08T19:58:04.528837901Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 2.422158656s"
Oct 8 19:58:04.529293 containerd[2101]: time="2024-10-08T19:58:04.528953585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\""
Oct 8 19:58:04.564041 containerd[2101]: time="2024-10-08T19:58:04.563999475Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 19:58:05.334023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178107844.mount: Deactivated successfully.
Oct 8 19:58:07.095924 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 8 19:58:07.555206 containerd[2101]: time="2024-10-08T19:58:07.555056424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:07.556540 containerd[2101]: time="2024-10-08T19:58:07.556488047Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Oct 8 19:58:07.557391 containerd[2101]: time="2024-10-08T19:58:07.557338197Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:07.561299 containerd[2101]: time="2024-10-08T19:58:07.561234415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:07.562834 containerd[2101]: time="2024-10-08T19:58:07.562655825Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.998610504s"
Oct 8 19:58:07.562834 containerd[2101]: time="2024-10-08T19:58:07.562705799Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 8 19:58:07.592022 containerd[2101]: time="2024-10-08T19:58:07.591974412Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 19:58:08.250427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836561684.mount: Deactivated successfully.
Oct 8 19:58:08.260566 containerd[2101]: time="2024-10-08T19:58:08.260489708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:08.261937 containerd[2101]: time="2024-10-08T19:58:08.261877482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Oct 8 19:58:08.264173 containerd[2101]: time="2024-10-08T19:58:08.264037474Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:08.268386 containerd[2101]: time="2024-10-08T19:58:08.268134436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:08.268990 containerd[2101]: time="2024-10-08T19:58:08.268955761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 676.925352ms"
Oct 8 19:58:08.269087 containerd[2101]: time="2024-10-08T19:58:08.269000352Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 8 19:58:08.296931 containerd[2101]: time="2024-10-08T19:58:08.296885676Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 8 19:58:08.915492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839931113.mount: Deactivated successfully.
Oct 8 19:58:11.626776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 8 19:58:11.641872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:58:12.562530 containerd[2101]: time="2024-10-08T19:58:12.562306562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:12.565862 containerd[2101]: time="2024-10-08T19:58:12.565787978Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Oct 8 19:58:12.569297 containerd[2101]: time="2024-10-08T19:58:12.568193384Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:12.572813 containerd[2101]: time="2024-10-08T19:58:12.572747324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:12.577980 containerd[2101]: time="2024-10-08T19:58:12.577900948Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.280972961s"
Oct 8 19:58:12.580390 containerd[2101]: time="2024-10-08T19:58:12.578335942Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Oct 8 19:58:12.611605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:58:12.633430 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:58:12.806851 kubelet[2862]: E1008 19:58:12.804285 2862 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:58:12.812450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:58:12.813229 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:58:16.416794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:58:16.425085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:58:16.471674 systemd[1]: Reloading requested from client PID 2934 ('systemctl') (unit session-7.scope)...
Oct 8 19:58:16.471717 systemd[1]: Reloading...
Oct 8 19:58:16.617392 zram_generator::config[2971]: No configuration found.
Oct 8 19:58:16.865160 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:58:17.015120 systemd[1]: Reloading finished in 542 ms.
Oct 8 19:58:17.119247 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 19:58:17.119564 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 19:58:17.120160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:58:17.146305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:58:18.163600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:58:18.177045 (kubelet)[3041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:58:18.248568 kubelet[3041]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:58:18.248568 kubelet[3041]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:58:18.248568 kubelet[3041]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:58:18.252092 kubelet[3041]: I1008 19:58:18.251979 3041 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:58:18.753240 kubelet[3041]: I1008 19:58:18.753111 3041 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 19:58:18.753240 kubelet[3041]: I1008 19:58:18.753242 3041 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:58:18.753890 kubelet[3041]: I1008 19:58:18.753863 3041 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 19:58:18.815154 kubelet[3041]: I1008 19:58:18.814308 3041 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:58:18.815774 kubelet[3041]: E1008 19:58:18.815747 3041 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:18.828723 kubelet[3041]: I1008 19:58:18.828687 3041 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:58:18.831561 kubelet[3041]: I1008 19:58:18.831527 3041 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:58:18.833469 kubelet[3041]: I1008 19:58:18.833432 3041 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:58:18.834117 kubelet[3041]: I1008 19:58:18.834050 3041 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:58:18.834117 kubelet[3041]: I1008 19:58:18.834119 3041 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:58:18.834373 kubelet[3041]: I1008 19:58:18.834337 3041 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:58:18.835182 kubelet[3041]: I1008 19:58:18.835163 3041 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:58:18.835262 kubelet[3041]: I1008 19:58:18.835192 3041 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:58:18.837387 kubelet[3041]: I1008 19:58:18.837259 3041 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:58:18.838801 kubelet[3041]: I1008 19:58:18.838606 3041 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:58:18.840897 kubelet[3041]: W1008 19:58:18.840700 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-47&limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:18.840897 kubelet[3041]: E1008 19:58:18.840763 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-47&limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:18.840897 kubelet[3041]: W1008 19:58:18.840838 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.20.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:18.840897 kubelet[3041]: E1008 19:58:18.840876 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:18.843460 kubelet[3041]: I1008 19:58:18.843435 3041 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 19:58:18.856394 kubelet[3041]: I1008 19:58:18.854103 3041 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:58:18.856394 kubelet[3041]: W1008 19:58:18.856087 3041 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 19:58:18.857601 kubelet[3041]: I1008 19:58:18.857576 3041 server.go:1256] "Started kubelet"
Oct 8 19:58:18.867915 kubelet[3041]: I1008 19:58:18.867877 3041 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:58:18.869433 kubelet[3041]: I1008 19:58:18.869313 3041 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:58:18.869883 kubelet[3041]: I1008 19:58:18.869852 3041 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:58:18.875703 kubelet[3041]: I1008 19:58:18.874628 3041 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:58:18.882584 kubelet[3041]: I1008 19:58:18.882556 3041 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:58:18.885419 kubelet[3041]: E1008 19:58:18.885386 3041 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.47:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-47.17fc928ec2f9113d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-47,UID:ip-172-31-20-47,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-47,},FirstTimestamp:2024-10-08 19:58:18.857541949 +0000 UTC m=+0.674216514,LastTimestamp:2024-10-08 19:58:18.857541949 +0000 UTC m=+0.674216514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-47,}"
Oct 8 19:58:18.895627 kubelet[3041]: E1008 19:58:18.895593 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found"
Oct 8 19:58:18.895774 kubelet[3041]: I1008 19:58:18.895651 3041 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:58:18.904266 kubelet[3041]: I1008 19:58:18.903709 3041 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:58:18.904266 kubelet[3041]: I1008 19:58:18.903804 3041 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:58:18.905306 kubelet[3041]: I1008 19:58:18.905274 3041 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:58:18.905773 kubelet[3041]: W1008 19:58:18.905650 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:18.905891 kubelet[3041]: E1008 19:58:18.905791 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:18.906022 kubelet[3041]: E1008 19:58:18.905905 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-47?timeout=10s\": dial tcp 172.31.20.47:6443: connect: connection refused" interval="200ms"
Oct 8 19:58:18.912164 kubelet[3041]: E1008 19:58:18.912088 3041 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:58:18.916480 kubelet[3041]: I1008 19:58:18.914649 3041 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:58:18.916480 kubelet[3041]: I1008 19:58:18.914669 3041 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:58:18.938086 kubelet[3041]: I1008 19:58:18.938056 3041 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:58:18.940287 kubelet[3041]: I1008 19:58:18.940222 3041 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:58:18.940287 kubelet[3041]: I1008 19:58:18.940268 3041 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:58:18.940287 kubelet[3041]: I1008 19:58:18.940296 3041 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:58:18.940719 kubelet[3041]: E1008 19:58:18.940352 3041 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:58:18.952583 kubelet[3041]: W1008 19:58:18.952518 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:18.952728 kubelet[3041]: E1008 19:58:18.952594 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:18.980083 kubelet[3041]: I1008 19:58:18.979851 3041 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:58:18.980083 kubelet[3041]: I1008 19:58:18.979879 3041 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:58:18.980083 kubelet[3041]: I1008 19:58:18.979896 3041 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:58:18.982597 kubelet[3041]: I1008 19:58:18.982489 3041 policy_none.go:49] "None policy: Start"
Oct 8 19:58:18.983932 kubelet[3041]: I1008 19:58:18.983559 3041 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:58:18.983932 kubelet[3041]: I1008 19:58:18.983582 3041 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:58:19.005881 kubelet[3041]: I1008 19:58:19.005354 3041 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-47"
Oct 8 19:58:19.008294 kubelet[3041]: I1008 19:58:19.008185 3041 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:58:19.008735 kubelet[3041]: I1008 19:58:19.008709 3041 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:58:19.011218 kubelet[3041]: E1008 19:58:19.011193 3041 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.47:6443/api/v1/nodes\": dial tcp 172.31.20.47:6443: connect: connection refused" node="ip-172-31-20-47"
Oct 8 19:58:19.018469 kubelet[3041]: E1008 19:58:19.018437 3041 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-47\" not found"
Oct 8 19:58:19.040706 kubelet[3041]: I1008 19:58:19.040662 3041 topology_manager.go:215] "Topology Admit Handler" podUID="7974061e92dc0dc484c02dc467a38c78" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-47"
Oct 8 19:58:19.042428 kubelet[3041]: I1008 19:58:19.042402 3041 topology_manager.go:215] "Topology Admit Handler" podUID="e4b239e331c060d5479a03d4278ebe73" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-47"
Oct 8 19:58:19.048835 kubelet[3041]: I1008 19:58:19.048795 3041 topology_manager.go:215] "Topology Admit Handler" podUID="818ec0933c983f53cae68dad25f94161" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-47"
Oct 8 19:58:19.106343 kubelet[3041]: I1008 19:58:19.106054 3041 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47"
Oct 8 19:58:19.106343 kubelet[3041]: I1008 19:58:19.106108 3041 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47"
Oct 8 19:58:19.106343 kubelet[3041]: I1008 19:58:19.106142 3041 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47"
Oct 8 19:58:19.106343 kubelet[3041]: I1008 19:58:19.106168 3041 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7974061e92dc0dc484c02dc467a38c78-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-47\" (UID: \"7974061e92dc0dc484c02dc467a38c78\") " pod="kube-system/kube-scheduler-ip-172-31-20-47"
Oct 8 19:58:19.106343 kubelet[3041]: I1008 19:58:19.106194 3041 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4b239e331c060d5479a03d4278ebe73-ca-certs\") pod \"kube-apiserver-ip-172-31-20-47\" (UID: \"e4b239e331c060d5479a03d4278ebe73\") " pod="kube-system/kube-apiserver-ip-172-31-20-47"
Oct 8 19:58:19.106756 kubelet[3041]: I1008 19:58:19.106218 3041 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4b239e331c060d5479a03d4278ebe73-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-47\" (UID: \"e4b239e331c060d5479a03d4278ebe73\") " pod="kube-system/kube-apiserver-ip-172-31-20-47"
Oct 8 19:58:19.106756 kubelet[3041]: I1008 19:58:19.106245 3041 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4b239e331c060d5479a03d4278ebe73-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-47\" (UID: \"e4b239e331c060d5479a03d4278ebe73\") " pod="kube-system/kube-apiserver-ip-172-31-20-47"
Oct 8 19:58:19.106756 kubelet[3041]: I1008 19:58:19.106273 3041 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47"
Oct 8 19:58:19.106756 kubelet[3041]: I1008 19:58:19.106318 3041 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47"
Oct 8 19:58:19.107080 kubelet[3041]: E1008 19:58:19.107053 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-47?timeout=10s\": dial tcp 172.31.20.47:6443: connect: connection refused" interval="400ms"
Oct 8 19:58:19.222998 kubelet[3041]: I1008 19:58:19.222950 3041 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-47"
Oct 8 19:58:19.223634 kubelet[3041]: E1008 19:58:19.223601 3041 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.47:6443/api/v1/nodes\": dial tcp 172.31.20.47:6443: connect: connection refused" node="ip-172-31-20-47"
Oct 8 19:58:19.349632 containerd[2101]: time="2024-10-08T19:58:19.349509591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-47,Uid:7974061e92dc0dc484c02dc467a38c78,Namespace:kube-system,Attempt:0,}"
Oct 8 19:58:19.366244 containerd[2101]: time="2024-10-08T19:58:19.366205584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-47,Uid:e4b239e331c060d5479a03d4278ebe73,Namespace:kube-system,Attempt:0,}"
Oct 8 19:58:19.374309 containerd[2101]: time="2024-10-08T19:58:19.374269062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-47,Uid:818ec0933c983f53cae68dad25f94161,Namespace:kube-system,Attempt:0,}"
Oct 8 19:58:19.509382 kubelet[3041]: E1008 19:58:19.508191 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-47?timeout=10s\": dial tcp 172.31.20.47:6443: connect: connection refused" interval="800ms"
Oct 8 19:58:19.626327 kubelet[3041]: I1008 19:58:19.626291 3041 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-47"
Oct 8 19:58:19.626963 kubelet[3041]: E1008 19:58:19.626936 3041 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.47:6443/api/v1/nodes\": dial tcp 172.31.20.47:6443: connect: connection refused" node="ip-172-31-20-47"
Oct 8 19:58:19.866533 kubelet[3041]: W1008 19:58:19.866472 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:19.866533 kubelet[3041]: E1008 19:58:19.866537 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused
Oct 8 19:58:19.906298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320502004.mount: Deactivated successfully.
Oct 8 19:58:19.920969 containerd[2101]: time="2024-10-08T19:58:19.920914042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:58:19.923292 containerd[2101]: time="2024-10-08T19:58:19.923233570Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Oct 8 19:58:19.924463 containerd[2101]: time="2024-10-08T19:58:19.924332128Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:58:19.927726 containerd[2101]: time="2024-10-08T19:58:19.927263563Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:58:19.927925 containerd[2101]: time="2024-10-08T19:58:19.927746382Z" level=info msg="ImageUpdate event
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:58:19.930175 containerd[2101]: time="2024-10-08T19:58:19.930016476Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:58:19.931537 containerd[2101]: time="2024-10-08T19:58:19.931082831Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:58:19.933593 containerd[2101]: time="2024-10-08T19:58:19.932895919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:58:19.936693 containerd[2101]: time="2024-10-08T19:58:19.936317726Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.962245ms" Oct 8 19:58:19.940515 kubelet[3041]: W1008 19:58:19.938836 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:19.940515 kubelet[3041]: E1008 19:58:19.938910 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection 
refused Oct 8 19:58:19.941224 containerd[2101]: time="2024-10-08T19:58:19.941115538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.498633ms" Oct 8 19:58:19.944679 containerd[2101]: time="2024-10-08T19:58:19.944635011Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 578.340263ms" Oct 8 19:58:20.107850 kubelet[3041]: W1008 19:58:20.107313 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-47&limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:20.110844 kubelet[3041]: E1008 19:58:20.110787 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-47&limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:20.311317 kubelet[3041]: E1008 19:58:20.311115 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-47?timeout=10s\": dial tcp 172.31.20.47:6443: connect: connection refused" interval="1.6s" Oct 8 19:58:20.327169 containerd[2101]: time="2024-10-08T19:58:20.327067845Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:20.327556 containerd[2101]: time="2024-10-08T19:58:20.327464534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:20.327848 containerd[2101]: time="2024-10-08T19:58:20.327810123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:20.328217 containerd[2101]: time="2024-10-08T19:58:20.328180871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:20.331580 containerd[2101]: time="2024-10-08T19:58:20.331488452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:20.332010 containerd[2101]: time="2024-10-08T19:58:20.331975882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:20.332517 containerd[2101]: time="2024-10-08T19:58:20.332217449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:20.332517 containerd[2101]: time="2024-10-08T19:58:20.332298995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:20.332517 containerd[2101]: time="2024-10-08T19:58:20.332322595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:20.332517 containerd[2101]: time="2024-10-08T19:58:20.332447546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:20.332879 containerd[2101]: time="2024-10-08T19:58:20.332848228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:20.334554 containerd[2101]: time="2024-10-08T19:58:20.334437360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:20.397634 update_engine[2068]: I20241008 19:58:20.397410 2068 update_attempter.cc:509] Updating boot flags... Oct 8 19:58:20.442131 kubelet[3041]: W1008 19:58:20.434025 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.20.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:20.442131 kubelet[3041]: E1008 19:58:20.441549 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:20.445938 kubelet[3041]: I1008 19:58:20.445831 3041 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-47" Oct 8 19:58:20.449237 kubelet[3041]: E1008 19:58:20.449195 3041 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.47:6443/api/v1/nodes\": dial tcp 172.31.20.47:6443: connect: connection refused" node="ip-172-31-20-47" Oct 8 19:58:20.553123 containerd[2101]: time="2024-10-08T19:58:20.553080821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-47,Uid:7974061e92dc0dc484c02dc467a38c78,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1b192a9e90635d23dc92700e6a94b56af8ccf2dfbaaadba40747c8fc83f6b7e\"" Oct 8 19:58:20.586939 containerd[2101]: 
time="2024-10-08T19:58:20.585641546Z" level=info msg="CreateContainer within sandbox \"d1b192a9e90635d23dc92700e6a94b56af8ccf2dfbaaadba40747c8fc83f6b7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:58:20.606392 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3209) Oct 8 19:58:20.613053 containerd[2101]: time="2024-10-08T19:58:20.613012510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-47,Uid:818ec0933c983f53cae68dad25f94161,Namespace:kube-system,Attempt:0,} returns sandbox id \"179b33629f0ab79071e1cea59daef629d47509b5d19c2e4cf51c4a87c21ae341\"" Oct 8 19:58:20.615508 containerd[2101]: time="2024-10-08T19:58:20.614766912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-47,Uid:e4b239e331c060d5479a03d4278ebe73,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e8b692e4a8fd1540d966f1395e92f9325b2c1dc62cd41cee997f145e286091f\"" Oct 8 19:58:20.620232 containerd[2101]: time="2024-10-08T19:58:20.620180708Z" level=info msg="CreateContainer within sandbox \"179b33629f0ab79071e1cea59daef629d47509b5d19c2e4cf51c4a87c21ae341\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:58:20.634657 containerd[2101]: time="2024-10-08T19:58:20.634606922Z" level=info msg="CreateContainer within sandbox \"4e8b692e4a8fd1540d966f1395e92f9325b2c1dc62cd41cee997f145e286091f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:58:20.652425 containerd[2101]: time="2024-10-08T19:58:20.652152206Z" level=info msg="CreateContainer within sandbox \"d1b192a9e90635d23dc92700e6a94b56af8ccf2dfbaaadba40747c8fc83f6b7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36a0262f451969b0a9221628c119307df2978cbb5f94959ba023868d3c271bad\"" Oct 8 19:58:20.654973 containerd[2101]: time="2024-10-08T19:58:20.654930804Z" level=info msg="StartContainer for 
\"36a0262f451969b0a9221628c119307df2978cbb5f94959ba023868d3c271bad\"" Oct 8 19:58:20.682881 containerd[2101]: time="2024-10-08T19:58:20.682827744Z" level=info msg="CreateContainer within sandbox \"179b33629f0ab79071e1cea59daef629d47509b5d19c2e4cf51c4a87c21ae341\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0698bc137091b0afaad3b85d33566597a4ff29471a7f9409138cf7fb5748ab6e\"" Oct 8 19:58:20.684596 containerd[2101]: time="2024-10-08T19:58:20.684546573Z" level=info msg="StartContainer for \"0698bc137091b0afaad3b85d33566597a4ff29471a7f9409138cf7fb5748ab6e\"" Oct 8 19:58:20.692457 containerd[2101]: time="2024-10-08T19:58:20.692236558Z" level=info msg="CreateContainer within sandbox \"4e8b692e4a8fd1540d966f1395e92f9325b2c1dc62cd41cee997f145e286091f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ea96fbff0a411cb10d0b2a078571d477ab729a2043dfdab7e11262a4413ef1f6\"" Oct 8 19:58:20.693922 containerd[2101]: time="2024-10-08T19:58:20.693882064Z" level=info msg="StartContainer for \"ea96fbff0a411cb10d0b2a078571d477ab729a2043dfdab7e11262a4413ef1f6\"" Oct 8 19:58:20.861495 kubelet[3041]: E1008 19:58:20.861041 3041 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:20.999232 containerd[2101]: time="2024-10-08T19:58:20.997624395Z" level=info msg="StartContainer for \"0698bc137091b0afaad3b85d33566597a4ff29471a7f9409138cf7fb5748ab6e\" returns successfully" Oct 8 19:58:21.142866 containerd[2101]: time="2024-10-08T19:58:21.141059492Z" level=info msg="StartContainer for \"36a0262f451969b0a9221628c119307df2978cbb5f94959ba023868d3c271bad\" returns successfully" Oct 8 19:58:21.257491 kernel: BTRFS warning: duplicate device 
/dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3208) Oct 8 19:58:21.359350 containerd[2101]: time="2024-10-08T19:58:21.358879520Z" level=info msg="StartContainer for \"ea96fbff0a411cb10d0b2a078571d477ab729a2043dfdab7e11262a4413ef1f6\" returns successfully" Oct 8 19:58:21.808819 kubelet[3041]: W1008 19:58:21.808457 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:21.808819 kubelet[3041]: E1008 19:58:21.808501 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:21.915158 kubelet[3041]: E1008 19:58:21.915115 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-47?timeout=10s\": dial tcp 172.31.20.47:6443: connect: connection refused" interval="3.2s" Oct 8 19:58:21.941799 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3208) Oct 8 19:58:22.060002 kubelet[3041]: I1008 19:58:22.059487 3041 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-47" Oct 8 19:58:22.066204 kubelet[3041]: E1008 19:58:22.064969 3041 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.47:6443/api/v1/nodes\": dial tcp 172.31.20.47:6443: connect: connection refused" node="ip-172-31-20-47" Oct 8 19:58:22.194271 kubelet[3041]: W1008 19:58:22.194023 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get 
"https://172.31.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:22.196631 kubelet[3041]: E1008 19:58:22.196520 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:22.241505 kubelet[3041]: W1008 19:58:22.240778 3041 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-47&limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:22.241505 kubelet[3041]: E1008 19:58:22.240829 3041 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-47&limit=500&resourceVersion=0": dial tcp 172.31.20.47:6443: connect: connection refused Oct 8 19:58:25.078742 kubelet[3041]: E1008 19:58:25.078707 3041 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-47.17fc928ec2f9113d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-47,UID:ip-172-31-20-47,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-47,},FirstTimestamp:2024-10-08 19:58:18.857541949 +0000 UTC m=+0.674216514,LastTimestamp:2024-10-08 19:58:18.857541949 +0000 UTC m=+0.674216514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-47,}" Oct 8 19:58:25.134749 kubelet[3041]: E1008 19:58:25.134679 3041 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-47\" not found" node="ip-172-31-20-47" Oct 8 19:58:25.291814 kubelet[3041]: I1008 19:58:25.279337 3041 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-47" Oct 8 19:58:25.304669 kubelet[3041]: I1008 19:58:25.304544 3041 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-47" Oct 8 19:58:25.325558 kubelet[3041]: E1008 19:58:25.325513 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:25.425866 kubelet[3041]: E1008 19:58:25.425836 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:25.526450 kubelet[3041]: E1008 19:58:25.526411 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:25.627025 kubelet[3041]: E1008 19:58:25.626990 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:25.728837 kubelet[3041]: E1008 19:58:25.728318 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:25.830276 kubelet[3041]: E1008 19:58:25.830237 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:25.931599 kubelet[3041]: E1008 19:58:25.930627 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:26.031565 kubelet[3041]: E1008 19:58:26.031453 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"ip-172-31-20-47\" not found" Oct 8 19:58:26.132331 kubelet[3041]: E1008 19:58:26.132293 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:26.232638 kubelet[3041]: E1008 19:58:26.232594 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:26.334119 kubelet[3041]: E1008 19:58:26.333629 3041 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-47\" not found" Oct 8 19:58:26.847536 kubelet[3041]: I1008 19:58:26.845930 3041 apiserver.go:52] "Watching apiserver" Oct 8 19:58:26.904126 kubelet[3041]: I1008 19:58:26.904085 3041 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:58:28.392445 systemd[1]: Reloading requested from client PID 3582 ('systemctl') (unit session-7.scope)... Oct 8 19:58:28.392468 systemd[1]: Reloading... Oct 8 19:58:28.567521 zram_generator::config[3623]: No configuration found. Oct 8 19:58:28.839498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:58:29.016408 systemd[1]: Reloading finished in 623 ms. Oct 8 19:58:29.087699 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:29.088034 kubelet[3041]: I1008 19:58:29.087719 3041 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:58:29.114038 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:58:29.115863 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:58:29.156887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:58:30.258813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:58:30.283049 (kubelet)[3690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:58:30.404246 kubelet[3690]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:58:30.404246 kubelet[3690]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:58:30.404246 kubelet[3690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:58:30.404246 kubelet[3690]: I1008 19:58:30.404216 3690 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:58:30.415757 kubelet[3690]: I1008 19:58:30.415719 3690 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:58:30.415757 kubelet[3690]: I1008 19:58:30.415747 3690 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:58:30.416120 kubelet[3690]: I1008 19:58:30.416096 3690 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:58:30.421532 kubelet[3690]: I1008 19:58:30.421493 3690 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 8 19:58:30.426448 kubelet[3690]: I1008 19:58:30.426409 3690 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:58:30.452401 sudo[3703]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 8 19:58:30.453228 sudo[3703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 8 19:58:30.460382 kubelet[3690]: I1008 19:58:30.456255 3690 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:58:30.460382 kubelet[3690]: I1008 19:58:30.457139 3690 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:58:30.460382 kubelet[3690]: I1008 19:58:30.457422 3690 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod
":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:58:30.460382 kubelet[3690]: I1008 19:58:30.457456 3690 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:58:30.460382 kubelet[3690]: I1008 19:58:30.457471 3690 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:58:30.460382 kubelet[3690]: I1008 19:58:30.457525 3690 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:58:30.460806 kubelet[3690]: I1008 19:58:30.457666 3690 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:58:30.460806 kubelet[3690]: I1008 19:58:30.457690 3690 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:58:30.460806 kubelet[3690]: I1008 19:58:30.458816 3690 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:58:30.464648 kubelet[3690]: I1008 19:58:30.461117 3690 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:58:30.464648 kubelet[3690]: I1008 19:58:30.462872 3690 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:58:30.464648 kubelet[3690]: I1008 19:58:30.463433 3690 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:58:30.464648 kubelet[3690]: I1008 19:58:30.464594 3690 server.go:1256] "Started kubelet" Oct 8 19:58:30.478385 kubelet[3690]: I1008 19:58:30.475196 3690 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:58:30.478385 kubelet[3690]: I1008 19:58:30.475580 3690 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:58:30.484872 kubelet[3690]: I1008 19:58:30.479562 3690 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:58:30.484872 kubelet[3690]: I1008 19:58:30.479963 3690 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:58:30.484872 kubelet[3690]: I1008 19:58:30.481793 3690 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:58:30.500392 kubelet[3690]: I1008 19:58:30.499723 3690 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:58:30.500392 kubelet[3690]: I1008 19:58:30.500294 3690 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 19:58:30.500580 kubelet[3690]: I1008 19:58:30.500481 3690 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:58:30.527416 kubelet[3690]: I1008 19:58:30.526820 3690 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:58:30.527416 kubelet[3690]: I1008 19:58:30.526940 3690 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:58:30.540214 kubelet[3690]: E1008 19:58:30.539848 3690 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:58:30.541273 kubelet[3690]: I1008 19:58:30.540159 3690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:58:30.544228 kubelet[3690]: I1008 19:58:30.542887 3690 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:58:30.548099 kubelet[3690]: I1008 19:58:30.547196 3690 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:58:30.548099 kubelet[3690]: I1008 19:58:30.547255 3690 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:58:30.548099 kubelet[3690]: I1008 19:58:30.547281 3690 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:58:30.548099 kubelet[3690]: E1008 19:58:30.547339 3690 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:58:30.616545 kubelet[3690]: I1008 19:58:30.616141 3690 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-47" Oct 8 19:58:30.636849 kubelet[3690]: I1008 19:58:30.636522 3690 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-47" Oct 8 19:58:30.637342 kubelet[3690]: I1008 19:58:30.637310 3690 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-47" Oct 8 19:58:30.649892 kubelet[3690]: E1008 19:58:30.649856 3690 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:58:30.717489 kubelet[3690]: I1008 19:58:30.717074 3690 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:58:30.717489 kubelet[3690]: I1008 19:58:30.717098 3690 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:58:30.717489 kubelet[3690]: I1008 19:58:30.717116 3690 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:58:30.717489 kubelet[3690]: I1008 19:58:30.717288 3690 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:58:30.717489 kubelet[3690]: I1008 19:58:30.717315 3690 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:58:30.717489 kubelet[3690]: I1008 19:58:30.717324 3690 policy_none.go:49] "None policy: Start" Oct 8 19:58:30.719060 kubelet[3690]: I1008 19:58:30.719023 3690 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:58:30.719437 kubelet[3690]: I1008 19:58:30.719249 
3690 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:58:30.720056 kubelet[3690]: I1008 19:58:30.719916 3690 state_mem.go:75] "Updated machine memory state" Oct 8 19:58:30.729755 kubelet[3690]: I1008 19:58:30.729723 3690 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:58:30.732548 kubelet[3690]: I1008 19:58:30.732339 3690 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:58:30.854949 kubelet[3690]: I1008 19:58:30.853254 3690 topology_manager.go:215] "Topology Admit Handler" podUID="e4b239e331c060d5479a03d4278ebe73" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-47" Oct 8 19:58:30.854949 kubelet[3690]: I1008 19:58:30.853384 3690 topology_manager.go:215] "Topology Admit Handler" podUID="818ec0933c983f53cae68dad25f94161" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-47" Oct 8 19:58:30.854949 kubelet[3690]: I1008 19:58:30.853437 3690 topology_manager.go:215] "Topology Admit Handler" podUID="7974061e92dc0dc484c02dc467a38c78" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-47" Oct 8 19:58:30.900283 kubelet[3690]: E1008 19:58:30.899519 3690 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-20-47\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-47" Oct 8 19:58:30.905871 kubelet[3690]: I1008 19:58:30.905835 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4b239e331c060d5479a03d4278ebe73-ca-certs\") pod \"kube-apiserver-ip-172-31-20-47\" (UID: \"e4b239e331c060d5479a03d4278ebe73\") " pod="kube-system/kube-apiserver-ip-172-31-20-47" Oct 8 19:58:30.906142 kubelet[3690]: I1008 19:58:30.905898 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/e4b239e331c060d5479a03d4278ebe73-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-47\" (UID: \"e4b239e331c060d5479a03d4278ebe73\") " pod="kube-system/kube-apiserver-ip-172-31-20-47" Oct 8 19:58:30.906142 kubelet[3690]: I1008 19:58:30.905930 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47" Oct 8 19:58:30.906142 kubelet[3690]: I1008 19:58:30.906090 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47" Oct 8 19:58:30.906142 kubelet[3690]: I1008 19:58:30.906128 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7974061e92dc0dc484c02dc467a38c78-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-47\" (UID: \"7974061e92dc0dc484c02dc467a38c78\") " pod="kube-system/kube-scheduler-ip-172-31-20-47" Oct 8 19:58:30.906985 kubelet[3690]: I1008 19:58:30.906339 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4b239e331c060d5479a03d4278ebe73-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-47\" (UID: \"e4b239e331c060d5479a03d4278ebe73\") " pod="kube-system/kube-apiserver-ip-172-31-20-47" Oct 8 19:58:30.906985 kubelet[3690]: I1008 19:58:30.906448 3690 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47" Oct 8 19:58:30.906985 kubelet[3690]: I1008 19:58:30.906841 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47" Oct 8 19:58:30.906985 kubelet[3690]: I1008 19:58:30.906897 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/818ec0933c983f53cae68dad25f94161-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-47\" (UID: \"818ec0933c983f53cae68dad25f94161\") " pod="kube-system/kube-controller-manager-ip-172-31-20-47" Oct 8 19:58:31.463112 kubelet[3690]: I1008 19:58:31.461789 3690 apiserver.go:52] "Watching apiserver" Oct 8 19:58:31.483905 sudo[3703]: pam_unix(sudo:session): session closed for user root Oct 8 19:58:31.501442 kubelet[3690]: I1008 19:58:31.501392 3690 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:58:31.722102 kubelet[3690]: I1008 19:58:31.719873 3690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-47" podStartSLOduration=1.7193258139999998 podStartE2EDuration="1.719325814s" podCreationTimestamp="2024-10-08 19:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:31.686913964 +0000 UTC m=+1.376594190" 
watchObservedRunningTime="2024-10-08 19:58:31.719325814 +0000 UTC m=+1.409006028" Oct 8 19:58:31.775561 kubelet[3690]: I1008 19:58:31.774789 3690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-47" podStartSLOduration=1.7747388979999998 podStartE2EDuration="1.774738898s" podCreationTimestamp="2024-10-08 19:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:31.738274235 +0000 UTC m=+1.427954458" watchObservedRunningTime="2024-10-08 19:58:31.774738898 +0000 UTC m=+1.464419122" Oct 8 19:58:33.407662 sudo[2447]: pam_unix(sudo:session): session closed for user root Oct 8 19:58:33.430754 sshd[2443]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:33.437025 systemd-logind[2065]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:58:33.437544 systemd[1]: sshd@6-172.31.20.47:22-139.178.68.195:59944.service: Deactivated successfully. Oct 8 19:58:33.445612 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:58:33.447768 systemd-logind[2065]: Removed session 7. Oct 8 19:58:41.783528 kubelet[3690]: I1008 19:58:41.783494 3690 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:58:41.786085 kubelet[3690]: I1008 19:58:41.785415 3690 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:58:41.786333 containerd[2101]: time="2024-10-08T19:58:41.783981821Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 8 19:58:42.481227 kubelet[3690]: I1008 19:58:42.476606 3690 topology_manager.go:215] "Topology Admit Handler" podUID="eaee80d0-dda8-4432-8c75-e461411d8ba1" podNamespace="kube-system" podName="kube-proxy-x8lrj" Oct 8 19:58:42.485813 kubelet[3690]: I1008 19:58:42.485634 3690 topology_manager.go:215] "Topology Admit Handler" podUID="a3d4cc3b-eee5-4a16-83b6-e1a826b12006" podNamespace="kube-system" podName="cilium-bzh44" Oct 8 19:58:42.509037 kubelet[3690]: I1008 19:58:42.508977 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eaee80d0-dda8-4432-8c75-e461411d8ba1-kube-proxy\") pod \"kube-proxy-x8lrj\" (UID: \"eaee80d0-dda8-4432-8c75-e461411d8ba1\") " pod="kube-system/kube-proxy-x8lrj" Oct 8 19:58:42.509037 kubelet[3690]: I1008 19:58:42.509026 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eaee80d0-dda8-4432-8c75-e461411d8ba1-lib-modules\") pod \"kube-proxy-x8lrj\" (UID: \"eaee80d0-dda8-4432-8c75-e461411d8ba1\") " pod="kube-system/kube-proxy-x8lrj" Oct 8 19:58:42.510423 kubelet[3690]: I1008 19:58:42.509058 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eaee80d0-dda8-4432-8c75-e461411d8ba1-xtables-lock\") pod \"kube-proxy-x8lrj\" (UID: \"eaee80d0-dda8-4432-8c75-e461411d8ba1\") " pod="kube-system/kube-proxy-x8lrj" Oct 8 19:58:42.510423 kubelet[3690]: I1008 19:58:42.509086 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54t8f\" (UniqueName: \"kubernetes.io/projected/eaee80d0-dda8-4432-8c75-e461411d8ba1-kube-api-access-54t8f\") pod \"kube-proxy-x8lrj\" (UID: \"eaee80d0-dda8-4432-8c75-e461411d8ba1\") " pod="kube-system/kube-proxy-x8lrj" Oct 8 19:58:42.609980 kubelet[3690]: I1008 
19:58:42.609932 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-run\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610134 kubelet[3690]: I1008 19:58:42.609998 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-etc-cni-netd\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610134 kubelet[3690]: I1008 19:58:42.610024 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-cgroup\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610134 kubelet[3690]: I1008 19:58:42.610049 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cni-path\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610134 kubelet[3690]: I1008 19:58:42.610075 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-xtables-lock\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610134 kubelet[3690]: I1008 19:58:42.610102 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmm2q\" (UniqueName: 
\"kubernetes.io/projected/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-kube-api-access-xmm2q\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610383 kubelet[3690]: I1008 19:58:42.610139 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-clustermesh-secrets\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610383 kubelet[3690]: I1008 19:58:42.610170 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-hubble-tls\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610383 kubelet[3690]: I1008 19:58:42.610205 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-hostproc\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610383 kubelet[3690]: I1008 19:58:42.610236 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-config-path\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610383 kubelet[3690]: I1008 19:58:42.610269 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-host-proc-sys-net\") pod \"cilium-bzh44\" (UID: 
\"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610383 kubelet[3690]: I1008 19:58:42.610332 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-bpf-maps\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610793 kubelet[3690]: I1008 19:58:42.610378 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-lib-modules\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.610793 kubelet[3690]: I1008 19:58:42.610413 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-host-proc-sys-kernel\") pod \"cilium-bzh44\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " pod="kube-system/cilium-bzh44" Oct 8 19:58:42.652000 kubelet[3690]: E1008 19:58:42.645214 3690 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 19:58:42.652000 kubelet[3690]: E1008 19:58:42.645276 3690 projected.go:200] Error preparing data for projected volume kube-api-access-54t8f for pod kube-system/kube-proxy-x8lrj: configmap "kube-root-ca.crt" not found Oct 8 19:58:42.652000 kubelet[3690]: E1008 19:58:42.649977 3690 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaee80d0-dda8-4432-8c75-e461411d8ba1-kube-api-access-54t8f podName:eaee80d0-dda8-4432-8c75-e461411d8ba1 nodeName:}" failed. No retries permitted until 2024-10-08 19:58:43.14993122 +0000 UTC m=+12.839611441 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-54t8f" (UniqueName: "kubernetes.io/projected/eaee80d0-dda8-4432-8c75-e461411d8ba1-kube-api-access-54t8f") pod "kube-proxy-x8lrj" (UID: "eaee80d0-dda8-4432-8c75-e461411d8ba1") : configmap "kube-root-ca.crt" not found Oct 8 19:58:42.804026 containerd[2101]: time="2024-10-08T19:58:42.803895497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bzh44,Uid:a3d4cc3b-eee5-4a16-83b6-e1a826b12006,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:42.877979 containerd[2101]: time="2024-10-08T19:58:42.877057295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:42.880324 containerd[2101]: time="2024-10-08T19:58:42.879787409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:42.880324 containerd[2101]: time="2024-10-08T19:58:42.879826799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:42.880324 containerd[2101]: time="2024-10-08T19:58:42.880073323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:42.892867 kubelet[3690]: I1008 19:58:42.886764 3690 topology_manager.go:215] "Topology Admit Handler" podUID="a784a2a1-ca12-4350-98ec-e9034e5f0ab6" podNamespace="kube-system" podName="cilium-operator-5cc964979-h8ktd" Oct 8 19:58:42.918129 kubelet[3690]: I1008 19:58:42.912927 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a784a2a1-ca12-4350-98ec-e9034e5f0ab6-cilium-config-path\") pod \"cilium-operator-5cc964979-h8ktd\" (UID: \"a784a2a1-ca12-4350-98ec-e9034e5f0ab6\") " pod="kube-system/cilium-operator-5cc964979-h8ktd" Oct 8 19:58:42.920392 kubelet[3690]: I1008 19:58:42.918467 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxlp\" (UniqueName: \"kubernetes.io/projected/a784a2a1-ca12-4350-98ec-e9034e5f0ab6-kube-api-access-fpxlp\") pod \"cilium-operator-5cc964979-h8ktd\" (UID: \"a784a2a1-ca12-4350-98ec-e9034e5f0ab6\") " pod="kube-system/cilium-operator-5cc964979-h8ktd" Oct 8 19:58:42.994168 containerd[2101]: time="2024-10-08T19:58:42.994109957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bzh44,Uid:a3d4cc3b-eee5-4a16-83b6-e1a826b12006,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\"" Oct 8 19:58:43.009416 containerd[2101]: time="2024-10-08T19:58:43.007953926Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 8 19:58:43.215544 containerd[2101]: time="2024-10-08T19:58:43.215459892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-h8ktd,Uid:a784a2a1-ca12-4350-98ec-e9034e5f0ab6,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:43.297933 containerd[2101]: time="2024-10-08T19:58:43.297311911Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:43.297933 containerd[2101]: time="2024-10-08T19:58:43.297836033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:43.297933 containerd[2101]: time="2024-10-08T19:58:43.297861221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:43.298764 containerd[2101]: time="2024-10-08T19:58:43.298294234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:43.396321 containerd[2101]: time="2024-10-08T19:58:43.396216922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-h8ktd,Uid:a784a2a1-ca12-4350-98ec-e9034e5f0ab6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b\"" Oct 8 19:58:43.454838 containerd[2101]: time="2024-10-08T19:58:43.454792904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8lrj,Uid:eaee80d0-dda8-4432-8c75-e461411d8ba1,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:43.493016 containerd[2101]: time="2024-10-08T19:58:43.492330079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:43.493016 containerd[2101]: time="2024-10-08T19:58:43.492416400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:43.493468 containerd[2101]: time="2024-10-08T19:58:43.492432783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:43.493468 containerd[2101]: time="2024-10-08T19:58:43.492541994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:43.558937 containerd[2101]: time="2024-10-08T19:58:43.558897925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8lrj,Uid:eaee80d0-dda8-4432-8c75-e461411d8ba1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4949492d43a019076cbd91b0ee5f9432d07c10e035c872d987bc18ed9abce5e3\"" Oct 8 19:58:43.568223 containerd[2101]: time="2024-10-08T19:58:43.567530825Z" level=info msg="CreateContainer within sandbox \"4949492d43a019076cbd91b0ee5f9432d07c10e035c872d987bc18ed9abce5e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:58:43.592636 containerd[2101]: time="2024-10-08T19:58:43.592589695Z" level=info msg="CreateContainer within sandbox \"4949492d43a019076cbd91b0ee5f9432d07c10e035c872d987bc18ed9abce5e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"afe1e119ad6edb6289a948038bbdabcaeb20b55491a65e7e11ba95e2b15eaf08\"" Oct 8 19:58:43.593865 containerd[2101]: time="2024-10-08T19:58:43.593671713Z" level=info msg="StartContainer for \"afe1e119ad6edb6289a948038bbdabcaeb20b55491a65e7e11ba95e2b15eaf08\"" Oct 8 19:58:43.743720 containerd[2101]: time="2024-10-08T19:58:43.743511622Z" level=info msg="StartContainer for \"afe1e119ad6edb6289a948038bbdabcaeb20b55491a65e7e11ba95e2b15eaf08\" returns successfully" Oct 8 19:58:45.077678 kubelet[3690]: I1008 19:58:45.077633 3690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-x8lrj" podStartSLOduration=3.077570178 podStartE2EDuration="3.077570178s" podCreationTimestamp="2024-10-08 19:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 
19:58:45.076100402 +0000 UTC m=+14.765780625" watchObservedRunningTime="2024-10-08 19:58:45.077570178 +0000 UTC m=+14.767250402" Oct 8 19:58:46.194463 systemd-journald[1562]: Under memory pressure, flushing caches. Oct 8 19:58:46.193755 systemd-resolved[1972]: Under memory pressure, flushing caches. Oct 8 19:58:46.193827 systemd-resolved[1972]: Flushed all caches. Oct 8 19:58:51.287908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860615434.mount: Deactivated successfully. Oct 8 19:58:55.496797 containerd[2101]: time="2024-10-08T19:58:55.496733242Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:55.506142 containerd[2101]: time="2024-10-08T19:58:55.506070669Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735283" Oct 8 19:58:55.557270 containerd[2101]: time="2024-10-08T19:58:55.557220462Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:55.560840 containerd[2101]: time="2024-10-08T19:58:55.559797590Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.551791438s" Oct 8 19:58:55.560840 containerd[2101]: time="2024-10-08T19:58:55.559891529Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 8 19:58:55.598707 containerd[2101]: time="2024-10-08T19:58:55.598416888Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 8 19:58:55.599770 containerd[2101]: time="2024-10-08T19:58:55.599715762Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:58:55.712280 containerd[2101]: time="2024-10-08T19:58:55.712234270Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\"" Oct 8 19:58:55.714296 containerd[2101]: time="2024-10-08T19:58:55.712986880Z" level=info msg="StartContainer for \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\"" Oct 8 19:58:55.999874 containerd[2101]: time="2024-10-08T19:58:55.999669483Z" level=info msg="StartContainer for \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\" returns successfully" Oct 8 19:58:56.703726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee-rootfs.mount: Deactivated successfully. 
Oct 8 19:58:57.754006 containerd[2101]: time="2024-10-08T19:58:57.698386764Z" level=info msg="shim disconnected" id=fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee namespace=k8s.io Oct 8 19:58:57.754006 containerd[2101]: time="2024-10-08T19:58:57.747380781Z" level=warning msg="cleaning up after shim disconnected" id=fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee namespace=k8s.io Oct 8 19:58:57.754006 containerd[2101]: time="2024-10-08T19:58:57.747400136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:58:58.035497 containerd[2101]: time="2024-10-08T19:58:58.031892851Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 19:58:58.103902 systemd-journald[1562]: Under memory pressure, flushing caches. Oct 8 19:58:58.100421 systemd-resolved[1972]: Under memory pressure, flushing caches. Oct 8 19:58:58.100474 systemd-resolved[1972]: Flushed all caches. Oct 8 19:58:58.281503 containerd[2101]: time="2024-10-08T19:58:58.280590806Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\"" Oct 8 19:58:58.285943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897411371.mount: Deactivated successfully. Oct 8 19:58:58.290480 containerd[2101]: time="2024-10-08T19:58:58.289400711Z" level=info msg="StartContainer for \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\"" Oct 8 19:58:58.398326 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:58:58.399263 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:58:58.399342 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Oct 8 19:58:58.404767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:58:58.467724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:58:58.548628 containerd[2101]: time="2024-10-08T19:58:58.547683383Z" level=info msg="StartContainer for \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\" returns successfully" Oct 8 19:58:58.588651 containerd[2101]: time="2024-10-08T19:58:58.588571297Z" level=info msg="shim disconnected" id=97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b namespace=k8s.io Oct 8 19:58:58.588651 containerd[2101]: time="2024-10-08T19:58:58.588635212Z" level=warning msg="cleaning up after shim disconnected" id=97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b namespace=k8s.io Oct 8 19:58:58.588651 containerd[2101]: time="2024-10-08T19:58:58.588649132Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:58:58.606312 containerd[2101]: time="2024-10-08T19:58:58.606204315Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:58:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 19:58:58.973601 containerd[2101]: time="2024-10-08T19:58:58.973562186Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 19:58:59.018061 containerd[2101]: time="2024-10-08T19:58:59.017414179Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\"" Oct 8 19:58:59.020221 containerd[2101]: time="2024-10-08T19:58:59.019887810Z" level=info msg="StartContainer for 
\"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\"" Oct 8 19:58:59.234052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b-rootfs.mount: Deactivated successfully. Oct 8 19:58:59.266000 containerd[2101]: time="2024-10-08T19:58:59.265951325Z" level=info msg="StartContainer for \"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\" returns successfully" Oct 8 19:58:59.324927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd-rootfs.mount: Deactivated successfully. Oct 8 19:59:00.114458 containerd[2101]: time="2024-10-08T19:59:00.114254438Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:00.139486 containerd[2101]: time="2024-10-08T19:59:00.138632348Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907201" Oct 8 19:59:00.151674 containerd[2101]: time="2024-10-08T19:59:00.151549490Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:59:00.173410 containerd[2101]: time="2024-10-08T19:59:00.165061346Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.566591823s" Oct 8 19:59:00.173410 containerd[2101]: 
time="2024-10-08T19:59:00.165109780Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 8 19:59:00.202034 containerd[2101]: time="2024-10-08T19:59:00.201978099Z" level=info msg="CreateContainer within sandbox \"c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 8 19:59:00.254912 containerd[2101]: time="2024-10-08T19:59:00.253732255Z" level=info msg="CreateContainer within sandbox \"c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\"" Oct 8 19:59:00.256403 containerd[2101]: time="2024-10-08T19:59:00.255099875Z" level=info msg="StartContainer for \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\"" Oct 8 19:59:00.258505 containerd[2101]: time="2024-10-08T19:59:00.258440937Z" level=info msg="shim disconnected" id=9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd namespace=k8s.io Oct 8 19:59:00.258505 containerd[2101]: time="2024-10-08T19:59:00.258500067Z" level=warning msg="cleaning up after shim disconnected" id=9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd namespace=k8s.io Oct 8 19:59:00.258505 containerd[2101]: time="2024-10-08T19:59:00.258510625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:00.296695 containerd[2101]: time="2024-10-08T19:59:00.296638534Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:59:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 19:59:00.381574 containerd[2101]: 
time="2024-10-08T19:59:00.381113755Z" level=info msg="StartContainer for \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\" returns successfully" Oct 8 19:59:00.997584 containerd[2101]: time="2024-10-08T19:59:00.997114090Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 19:59:01.035906 containerd[2101]: time="2024-10-08T19:59:01.035828678Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\"" Oct 8 19:59:01.037458 containerd[2101]: time="2024-10-08T19:59:01.037227737Z" level=info msg="StartContainer for \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\"" Oct 8 19:59:01.452491 containerd[2101]: time="2024-10-08T19:59:01.445650089Z" level=info msg="StartContainer for \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\" returns successfully" Oct 8 19:59:01.533820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b-rootfs.mount: Deactivated successfully. 
Oct 8 19:59:01.543350 containerd[2101]: time="2024-10-08T19:59:01.543258048Z" level=info msg="shim disconnected" id=63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b namespace=k8s.io Oct 8 19:59:01.543350 containerd[2101]: time="2024-10-08T19:59:01.543337859Z" level=warning msg="cleaning up after shim disconnected" id=63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b namespace=k8s.io Oct 8 19:59:01.543350 containerd[2101]: time="2024-10-08T19:59:01.543351121Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:02.026544 containerd[2101]: time="2024-10-08T19:59:02.026500225Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 19:59:02.079391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568148676.mount: Deactivated successfully. Oct 8 19:59:02.103424 kubelet[3690]: I1008 19:59:02.103377 3690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-h8ktd" podStartSLOduration=3.33238415 podStartE2EDuration="20.103299215s" podCreationTimestamp="2024-10-08 19:58:42 +0000 UTC" firstStartedPulling="2024-10-08 19:58:43.399682474 +0000 UTC m=+13.089362688" lastFinishedPulling="2024-10-08 19:59:00.170597543 +0000 UTC m=+29.860277753" observedRunningTime="2024-10-08 19:59:01.330767512 +0000 UTC m=+31.020447735" watchObservedRunningTime="2024-10-08 19:59:02.103299215 +0000 UTC m=+31.792979438" Oct 8 19:59:02.107483 containerd[2101]: time="2024-10-08T19:59:02.106400647Z" level=info msg="CreateContainer within sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\"" Oct 8 19:59:02.115222 containerd[2101]: time="2024-10-08T19:59:02.115175779Z" level=info 
msg="StartContainer for \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\"" Oct 8 19:59:02.133374 systemd-journald[1562]: Under memory pressure, flushing caches. Oct 8 19:59:02.129767 systemd-resolved[1972]: Under memory pressure, flushing caches. Oct 8 19:59:02.129807 systemd-resolved[1972]: Flushed all caches. Oct 8 19:59:02.298830 containerd[2101]: time="2024-10-08T19:59:02.298698921Z" level=info msg="StartContainer for \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\" returns successfully" Oct 8 19:59:02.747349 kubelet[3690]: I1008 19:59:02.746161 3690 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:59:02.808961 kubelet[3690]: I1008 19:59:02.808711 3690 topology_manager.go:215] "Topology Admit Handler" podUID="ea89ff4e-1720-48c3-a8b3-4fa759c496ea" podNamespace="kube-system" podName="coredns-76f75df574-hw5sj" Oct 8 19:59:03.004051 kubelet[3690]: I1008 19:59:02.999548 3690 topology_manager.go:215] "Topology Admit Handler" podUID="a12b593d-4fe5-44ef-af05-dcafe26942ce" podNamespace="kube-system" podName="coredns-76f75df574-2tv7r" Oct 8 19:59:03.079448 kubelet[3690]: I1008 19:59:03.077737 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea89ff4e-1720-48c3-a8b3-4fa759c496ea-config-volume\") pod \"coredns-76f75df574-hw5sj\" (UID: \"ea89ff4e-1720-48c3-a8b3-4fa759c496ea\") " pod="kube-system/coredns-76f75df574-hw5sj" Oct 8 19:59:03.079448 kubelet[3690]: I1008 19:59:03.077794 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a12b593d-4fe5-44ef-af05-dcafe26942ce-config-volume\") pod \"coredns-76f75df574-2tv7r\" (UID: \"a12b593d-4fe5-44ef-af05-dcafe26942ce\") " pod="kube-system/coredns-76f75df574-2tv7r" Oct 8 19:59:03.079448 kubelet[3690]: I1008 19:59:03.077832 3690 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthts\" (UniqueName: \"kubernetes.io/projected/ea89ff4e-1720-48c3-a8b3-4fa759c496ea-kube-api-access-qthts\") pod \"coredns-76f75df574-hw5sj\" (UID: \"ea89ff4e-1720-48c3-a8b3-4fa759c496ea\") " pod="kube-system/coredns-76f75df574-hw5sj" Oct 8 19:59:03.079448 kubelet[3690]: I1008 19:59:03.077875 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twtvf\" (UniqueName: \"kubernetes.io/projected/a12b593d-4fe5-44ef-af05-dcafe26942ce-kube-api-access-twtvf\") pod \"coredns-76f75df574-2tv7r\" (UID: \"a12b593d-4fe5-44ef-af05-dcafe26942ce\") " pod="kube-system/coredns-76f75df574-2tv7r" Oct 8 19:59:03.395337 containerd[2101]: time="2024-10-08T19:59:03.395242944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2tv7r,Uid:a12b593d-4fe5-44ef-af05-dcafe26942ce,Namespace:kube-system,Attempt:0,}" Oct 8 19:59:03.416495 containerd[2101]: time="2024-10-08T19:59:03.415895729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hw5sj,Uid:ea89ff4e-1720-48c3-a8b3-4fa759c496ea,Namespace:kube-system,Attempt:0,}" Oct 8 19:59:04.178899 systemd-resolved[1972]: Under memory pressure, flushing caches. Oct 8 19:59:04.178926 systemd-resolved[1972]: Flushed all caches. Oct 8 19:59:04.180551 systemd-journald[1562]: Under memory pressure, flushing caches. Oct 8 19:59:05.720531 systemd-networkd[1650]: cilium_host: Link UP Oct 8 19:59:05.726155 (udev-worker)[4468]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:59:05.726821 (udev-worker)[4466]: Network interface NamePolicy= disabled on kernel command line. 
Oct 8 19:59:05.728054 systemd-networkd[1650]: cilium_net: Link UP Oct 8 19:59:05.730190 systemd-networkd[1650]: cilium_net: Gained carrier Oct 8 19:59:05.731555 systemd-networkd[1650]: cilium_host: Gained carrier Oct 8 19:59:05.984494 systemd-networkd[1650]: cilium_vxlan: Link UP Oct 8 19:59:05.984505 systemd-networkd[1650]: cilium_vxlan: Gained carrier Oct 8 19:59:06.293486 systemd-networkd[1650]: cilium_host: Gained IPv6LL Oct 8 19:59:06.481756 systemd-networkd[1650]: cilium_net: Gained IPv6LL Oct 8 19:59:06.737424 kernel: NET: Registered PF_ALG protocol family Oct 8 19:59:07.889604 systemd-networkd[1650]: cilium_vxlan: Gained IPv6LL Oct 8 19:59:08.045792 (udev-worker)[4514]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:59:08.052684 systemd-networkd[1650]: lxc_health: Link UP Oct 8 19:59:08.061823 systemd-networkd[1650]: lxc_health: Gained carrier Oct 8 19:59:08.670257 systemd-networkd[1650]: lxc23fce8b73fe4: Link UP Oct 8 19:59:08.678399 kernel: eth0: renamed from tmp7ca82 Oct 8 19:59:08.712579 kernel: eth0: renamed from tmp0dd62 Oct 8 19:59:08.701475 systemd-networkd[1650]: lxc23fce8b73fe4: Gained carrier Oct 8 19:59:08.701779 systemd-networkd[1650]: lxcd883a0cf1186: Link UP Oct 8 19:59:08.726803 (udev-worker)[4518]: Network interface NamePolicy= disabled on kernel command line. 
Oct 8 19:59:08.728601 systemd-networkd[1650]: lxcd883a0cf1186: Gained carrier Oct 8 19:59:08.955861 kubelet[3690]: I1008 19:59:08.952952 3690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bzh44" podStartSLOduration=14.361506827 podStartE2EDuration="26.95011523s" podCreationTimestamp="2024-10-08 19:58:42 +0000 UTC" firstStartedPulling="2024-10-08 19:58:43.003269818 +0000 UTC m=+12.692950020" lastFinishedPulling="2024-10-08 19:58:55.591878197 +0000 UTC m=+25.281558423" observedRunningTime="2024-10-08 19:59:03.242932721 +0000 UTC m=+32.932612945" watchObservedRunningTime="2024-10-08 19:59:08.95011523 +0000 UTC m=+38.639795492" Oct 8 19:59:09.937771 systemd-networkd[1650]: lxc_health: Gained IPv6LL Oct 8 19:59:10.453824 systemd-networkd[1650]: lxcd883a0cf1186: Gained IPv6LL Oct 8 19:59:10.517754 systemd-networkd[1650]: lxc23fce8b73fe4: Gained IPv6LL Oct 8 19:59:13.330951 ntpd[2041]: Listen normally on 6 cilium_host 192.168.0.142:123 Oct 8 19:59:13.332229 ntpd[2041]: 8 Oct 19:59:13 ntpd[2041]: Listen normally on 6 cilium_host 192.168.0.142:123 Oct 8 19:59:13.332229 ntpd[2041]: 8 Oct 19:59:13 ntpd[2041]: Listen normally on 7 cilium_net [fe80::d8e6:5fff:fe66:1025%4]:123 Oct 8 19:59:13.332229 ntpd[2041]: 8 Oct 19:59:13 ntpd[2041]: Listen normally on 8 cilium_host [fe80::6023:efff:fedc:248c%5]:123 Oct 8 19:59:13.332229 ntpd[2041]: 8 Oct 19:59:13 ntpd[2041]: Listen normally on 9 cilium_vxlan [fe80::40f5:53ff:feb0:66f4%6]:123 Oct 8 19:59:13.332229 ntpd[2041]: 8 Oct 19:59:13 ntpd[2041]: Listen normally on 10 lxc_health [fe80::142c:41ff:fefb:b77%8]:123 Oct 8 19:59:13.332229 ntpd[2041]: 8 Oct 19:59:13 ntpd[2041]: Listen normally on 11 lxc23fce8b73fe4 [fe80::2cf2:a3ff:fe0f:c1a6%10]:123 Oct 8 19:59:13.332229 ntpd[2041]: 8 Oct 19:59:13 ntpd[2041]: Listen normally on 12 lxcd883a0cf1186 [fe80::5024:16ff:fec4:79e3%12]:123 Oct 8 19:59:13.331108 ntpd[2041]: Listen normally on 7 cilium_net [fe80::d8e6:5fff:fe66:1025%4]:123 Oct 8 
19:59:13.331278 ntpd[2041]: Listen normally on 8 cilium_host [fe80::6023:efff:fedc:248c%5]:123 Oct 8 19:59:13.331330 ntpd[2041]: Listen normally on 9 cilium_vxlan [fe80::40f5:53ff:feb0:66f4%6]:123 Oct 8 19:59:13.331397 ntpd[2041]: Listen normally on 10 lxc_health [fe80::142c:41ff:fefb:b77%8]:123 Oct 8 19:59:13.331435 ntpd[2041]: Listen normally on 11 lxc23fce8b73fe4 [fe80::2cf2:a3ff:fe0f:c1a6%10]:123 Oct 8 19:59:13.331473 ntpd[2041]: Listen normally on 12 lxcd883a0cf1186 [fe80::5024:16ff:fec4:79e3%12]:123 Oct 8 19:59:16.307387 containerd[2101]: time="2024-10-08T19:59:16.307133891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:59:16.310005 containerd[2101]: time="2024-10-08T19:59:16.307215315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:59:16.310005 containerd[2101]: time="2024-10-08T19:59:16.307327071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:16.310005 containerd[2101]: time="2024-10-08T19:59:16.307507790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:16.315487 containerd[2101]: time="2024-10-08T19:59:16.311447881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:59:16.315487 containerd[2101]: time="2024-10-08T19:59:16.311527445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:59:16.315487 containerd[2101]: time="2024-10-08T19:59:16.311545899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:16.315487 containerd[2101]: time="2024-10-08T19:59:16.311681867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:16.392163 systemd[1]: run-containerd-runc-k8s.io-7ca8262c5851ace35a203e4c58b18d23bf5ad9ad04eb489891ca6834a663e7f8-runc.VlOtzt.mount: Deactivated successfully. Oct 8 19:59:16.532088 containerd[2101]: time="2024-10-08T19:59:16.531963903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2tv7r,Uid:a12b593d-4fe5-44ef-af05-dcafe26942ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ca8262c5851ace35a203e4c58b18d23bf5ad9ad04eb489891ca6834a663e7f8\"" Oct 8 19:59:16.535193 containerd[2101]: time="2024-10-08T19:59:16.535155707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hw5sj,Uid:ea89ff4e-1720-48c3-a8b3-4fa759c496ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dd629d9f4fd47e1d97ee0c0cc7d613d1b0e5d3d4527a9d59d805a8abfff7bea\"" Oct 8 19:59:16.548809 containerd[2101]: time="2024-10-08T19:59:16.548623698Z" level=info msg="CreateContainer within sandbox \"0dd629d9f4fd47e1d97ee0c0cc7d613d1b0e5d3d4527a9d59d805a8abfff7bea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:59:16.549490 containerd[2101]: time="2024-10-08T19:59:16.549461417Z" level=info msg="CreateContainer within sandbox \"7ca8262c5851ace35a203e4c58b18d23bf5ad9ad04eb489891ca6834a663e7f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:59:16.592807 containerd[2101]: time="2024-10-08T19:59:16.592583963Z" level=info msg="CreateContainer within sandbox \"0dd629d9f4fd47e1d97ee0c0cc7d613d1b0e5d3d4527a9d59d805a8abfff7bea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ed89dab5ef803ac8b5e8473e6d657e238d31569d564186b89d36dc4ae8ad203\"" Oct 8 19:59:16.595649 containerd[2101]: 
time="2024-10-08T19:59:16.593523869Z" level=info msg="StartContainer for \"0ed89dab5ef803ac8b5e8473e6d657e238d31569d564186b89d36dc4ae8ad203\"" Oct 8 19:59:16.602639 containerd[2101]: time="2024-10-08T19:59:16.602590630Z" level=info msg="CreateContainer within sandbox \"7ca8262c5851ace35a203e4c58b18d23bf5ad9ad04eb489891ca6834a663e7f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5aebabe20a4c74314c11296a12ac0c82a253fb32407c10f5c1ea431c363bc202\"" Oct 8 19:59:16.605762 containerd[2101]: time="2024-10-08T19:59:16.605722213Z" level=info msg="StartContainer for \"5aebabe20a4c74314c11296a12ac0c82a253fb32407c10f5c1ea431c363bc202\"" Oct 8 19:59:16.702022 containerd[2101]: time="2024-10-08T19:59:16.701975073Z" level=info msg="StartContainer for \"5aebabe20a4c74314c11296a12ac0c82a253fb32407c10f5c1ea431c363bc202\" returns successfully" Oct 8 19:59:16.702022 containerd[2101]: time="2024-10-08T19:59:16.701975094Z" level=info msg="StartContainer for \"0ed89dab5ef803ac8b5e8473e6d657e238d31569d564186b89d36dc4ae8ad203\" returns successfully" Oct 8 19:59:17.152741 kubelet[3690]: I1008 19:59:17.152036 3690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hw5sj" podStartSLOduration=35.151986924 podStartE2EDuration="35.151986924s" podCreationTimestamp="2024-10-08 19:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:59:17.151057004 +0000 UTC m=+46.840737241" watchObservedRunningTime="2024-10-08 19:59:17.151986924 +0000 UTC m=+46.841667148" Oct 8 19:59:17.323942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523618811.mount: Deactivated successfully. 
Oct 8 19:59:18.211838 kubelet[3690]: I1008 19:59:18.211656 3690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2tv7r" podStartSLOduration=36.211561128 podStartE2EDuration="36.211561128s" podCreationTimestamp="2024-10-08 19:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:59:17.174746791 +0000 UTC m=+46.864427014" watchObservedRunningTime="2024-10-08 19:59:18.211561128 +0000 UTC m=+47.901241353" Oct 8 19:59:18.867830 systemd[1]: Started sshd@7-172.31.20.47:22-139.178.68.195:51656.service - OpenSSH per-connection server daemon (139.178.68.195:51656). Oct 8 19:59:19.085402 sshd[5044]: Accepted publickey for core from 139.178.68.195 port 51656 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:19.086573 sshd[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:19.111321 systemd-logind[2065]: New session 8 of user core. Oct 8 19:59:19.115805 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:59:20.148991 sshd[5044]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:20.156476 systemd[1]: sshd@7-172.31.20.47:22-139.178.68.195:51656.service: Deactivated successfully. Oct 8 19:59:20.163473 systemd-logind[2065]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:59:20.164370 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:59:20.167653 systemd-logind[2065]: Removed session 8. Oct 8 19:59:25.179855 systemd[1]: Started sshd@8-172.31.20.47:22-139.178.68.195:47100.service - OpenSSH per-connection server daemon (139.178.68.195:47100). 
Oct 8 19:59:25.346326 sshd[5064]: Accepted publickey for core from 139.178.68.195 port 47100 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:25.348370 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:25.354237 systemd-logind[2065]: New session 9 of user core. Oct 8 19:59:25.358862 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:59:25.584549 sshd[5064]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:25.589918 systemd[1]: sshd@8-172.31.20.47:22-139.178.68.195:47100.service: Deactivated successfully. Oct 8 19:59:25.598828 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:59:25.600350 systemd-logind[2065]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:59:25.601542 systemd-logind[2065]: Removed session 9. Oct 8 19:59:30.626715 systemd[1]: Started sshd@9-172.31.20.47:22-139.178.68.195:42352.service - OpenSSH per-connection server daemon (139.178.68.195:42352). Oct 8 19:59:30.808734 sshd[5081]: Accepted publickey for core from 139.178.68.195 port 42352 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:30.809568 sshd[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:30.815143 systemd-logind[2065]: New session 10 of user core. Oct 8 19:59:30.821194 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:59:31.145508 sshd[5081]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:31.153202 systemd[1]: sshd@9-172.31.20.47:22-139.178.68.195:42352.service: Deactivated successfully. Oct 8 19:59:31.158909 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:59:31.160084 systemd-logind[2065]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:59:31.162336 systemd-logind[2065]: Removed session 10. 
Oct 8 19:59:36.177909 systemd[1]: Started sshd@10-172.31.20.47:22-139.178.68.195:42364.service - OpenSSH per-connection server daemon (139.178.68.195:42364). Oct 8 19:59:36.379727 sshd[5096]: Accepted publickey for core from 139.178.68.195 port 42364 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:36.381398 sshd[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:36.386918 systemd-logind[2065]: New session 11 of user core. Oct 8 19:59:36.394325 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:59:36.658114 sshd[5096]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:36.667975 systemd[1]: sshd@10-172.31.20.47:22-139.178.68.195:42364.service: Deactivated successfully. Oct 8 19:59:36.679404 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:59:36.681746 systemd-logind[2065]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:59:36.684845 systemd-logind[2065]: Removed session 11. Oct 8 19:59:41.688734 systemd[1]: Started sshd@11-172.31.20.47:22-139.178.68.195:34346.service - OpenSSH per-connection server daemon (139.178.68.195:34346). Oct 8 19:59:41.858236 sshd[5111]: Accepted publickey for core from 139.178.68.195 port 34346 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:41.858925 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:41.865121 systemd-logind[2065]: New session 12 of user core. Oct 8 19:59:41.869915 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:59:42.165609 sshd[5111]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:42.170764 systemd-logind[2065]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:59:42.171913 systemd[1]: sshd@11-172.31.20.47:22-139.178.68.195:34346.service: Deactivated successfully. Oct 8 19:59:42.177193 systemd[1]: session-12.scope: Deactivated successfully. 
Oct 8 19:59:42.179336 systemd-logind[2065]: Removed session 12. Oct 8 19:59:42.195215 systemd[1]: Started sshd@12-172.31.20.47:22-139.178.68.195:34356.service - OpenSSH per-connection server daemon (139.178.68.195:34356). Oct 8 19:59:42.384157 sshd[5125]: Accepted publickey for core from 139.178.68.195 port 34356 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:42.386093 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:42.394214 systemd-logind[2065]: New session 13 of user core. Oct 8 19:59:42.402874 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:59:42.846623 sshd[5125]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:42.857154 systemd[1]: sshd@12-172.31.20.47:22-139.178.68.195:34356.service: Deactivated successfully. Oct 8 19:59:42.874096 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:59:42.890542 systemd-logind[2065]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:59:42.907089 systemd[1]: Started sshd@13-172.31.20.47:22-139.178.68.195:34360.service - OpenSSH per-connection server daemon (139.178.68.195:34360). Oct 8 19:59:42.916601 systemd-logind[2065]: Removed session 13. Oct 8 19:59:43.101755 sshd[5137]: Accepted publickey for core from 139.178.68.195 port 34360 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:43.102049 sshd[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:43.109776 systemd-logind[2065]: New session 14 of user core. Oct 8 19:59:43.118830 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:59:43.373838 sshd[5137]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:43.381540 systemd-logind[2065]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:59:43.382995 systemd[1]: sshd@13-172.31.20.47:22-139.178.68.195:34360.service: Deactivated successfully. 
Oct 8 19:59:43.389837 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:59:43.394131 systemd-logind[2065]: Removed session 14. Oct 8 19:59:48.405380 systemd[1]: Started sshd@14-172.31.20.47:22-139.178.68.195:34374.service - OpenSSH per-connection server daemon (139.178.68.195:34374). Oct 8 19:59:48.596395 sshd[5153]: Accepted publickey for core from 139.178.68.195 port 34374 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:48.597685 sshd[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:48.603040 systemd-logind[2065]: New session 15 of user core. Oct 8 19:59:48.609174 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:59:48.850928 sshd[5153]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:48.872644 systemd[1]: sshd@14-172.31.20.47:22-139.178.68.195:34374.service: Deactivated successfully. Oct 8 19:59:48.885698 systemd-logind[2065]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:59:48.887051 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:59:48.889469 systemd-logind[2065]: Removed session 15. Oct 8 19:59:53.885947 systemd[1]: Started sshd@15-172.31.20.47:22-139.178.68.195:41780.service - OpenSSH per-connection server daemon (139.178.68.195:41780). Oct 8 19:59:54.076635 sshd[5166]: Accepted publickey for core from 139.178.68.195 port 41780 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:54.078923 sshd[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:54.096762 systemd-logind[2065]: New session 16 of user core. Oct 8 19:59:54.105904 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:59:54.385406 sshd[5166]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:54.389097 systemd[1]: sshd@15-172.31.20.47:22-139.178.68.195:41780.service: Deactivated successfully. 
Oct 8 19:59:54.396124 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:59:54.397735 systemd-logind[2065]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:59:54.398784 systemd-logind[2065]: Removed session 16. Oct 8 19:59:59.428047 systemd[1]: Started sshd@16-172.31.20.47:22-139.178.68.195:41792.service - OpenSSH per-connection server daemon (139.178.68.195:41792). Oct 8 19:59:59.621553 sshd[5180]: Accepted publickey for core from 139.178.68.195 port 41792 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 19:59:59.623617 sshd[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:59.630391 systemd-logind[2065]: New session 17 of user core. Oct 8 19:59:59.635998 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:59:59.844149 sshd[5180]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:59.850773 systemd-logind[2065]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:59:59.851168 systemd[1]: sshd@16-172.31.20.47:22-139.178.68.195:41792.service: Deactivated successfully. Oct 8 19:59:59.856237 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:59:59.858327 systemd-logind[2065]: Removed session 17. Oct 8 19:59:59.875030 systemd[1]: Started sshd@17-172.31.20.47:22-139.178.68.195:41794.service - OpenSSH per-connection server daemon (139.178.68.195:41794). Oct 8 20:00:00.052282 sshd[5193]: Accepted publickey for core from 139.178.68.195 port 41794 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:00.062297 sshd[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:00.160801 systemd-logind[2065]: New session 18 of user core. Oct 8 20:00:00.168386 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 20:00:00.912685 sshd[5193]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:00.919105 systemd-logind[2065]: Session 18 logged out. 
Waiting for processes to exit. Oct 8 20:00:00.923741 systemd[1]: sshd@17-172.31.20.47:22-139.178.68.195:41794.service: Deactivated successfully. Oct 8 20:00:00.930943 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:00:00.947390 systemd-logind[2065]: Removed session 18. Oct 8 20:00:00.958229 systemd[1]: Started sshd@18-172.31.20.47:22-139.178.68.195:33836.service - OpenSSH per-connection server daemon (139.178.68.195:33836). Oct 8 20:00:01.172472 sshd[5205]: Accepted publickey for core from 139.178.68.195 port 33836 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:01.178780 sshd[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:01.203963 systemd-logind[2065]: New session 19 of user core. Oct 8 20:00:01.220964 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:00:04.145623 systemd-resolved[1972]: Under memory pressure, flushing caches. Oct 8 20:00:04.148902 systemd-journald[1562]: Under memory pressure, flushing caches. Oct 8 20:00:04.145654 systemd-resolved[1972]: Flushed all caches. Oct 8 20:00:05.290054 sshd[5205]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:05.328112 systemd[1]: Started sshd@19-172.31.20.47:22-139.178.68.195:33838.service - OpenSSH per-connection server daemon (139.178.68.195:33838). Oct 8 20:00:05.328795 systemd[1]: sshd@18-172.31.20.47:22-139.178.68.195:33836.service: Deactivated successfully. Oct 8 20:00:05.337202 systemd-logind[2065]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:00:05.344306 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:00:05.348454 systemd-logind[2065]: Removed session 19. 
Oct 8 20:00:05.541893 sshd[5221]: Accepted publickey for core from 139.178.68.195 port 33838 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:05.542539 sshd[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:05.549912 systemd-logind[2065]: New session 20 of user core. Oct 8 20:00:05.556163 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 20:00:06.053519 sshd[5221]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:06.060764 systemd-logind[2065]: Session 20 logged out. Waiting for processes to exit. Oct 8 20:00:06.065187 systemd[1]: sshd@19-172.31.20.47:22-139.178.68.195:33838.service: Deactivated successfully. Oct 8 20:00:06.072636 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:00:06.079534 systemd-logind[2065]: Removed session 20. Oct 8 20:00:06.089884 systemd[1]: Started sshd@20-172.31.20.47:22-139.178.68.195:33844.service - OpenSSH per-connection server daemon (139.178.68.195:33844). Oct 8 20:00:06.269936 sshd[5237]: Accepted publickey for core from 139.178.68.195 port 33844 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:06.270835 sshd[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:06.284672 systemd-logind[2065]: New session 21 of user core. Oct 8 20:00:06.293048 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 20:00:06.577811 sshd[5237]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:06.587559 systemd[1]: sshd@20-172.31.20.47:22-139.178.68.195:33844.service: Deactivated successfully. Oct 8 20:00:06.589703 systemd-logind[2065]: Session 21 logged out. Waiting for processes to exit. Oct 8 20:00:06.596926 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 20:00:06.598446 systemd-logind[2065]: Removed session 21. 
Oct 8 20:00:11.600884 systemd[1]: Started sshd@21-172.31.20.47:22-139.178.68.195:35946.service - OpenSSH per-connection server daemon (139.178.68.195:35946). Oct 8 20:00:11.776220 sshd[5251]: Accepted publickey for core from 139.178.68.195 port 35946 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:11.779459 sshd[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:11.784608 systemd-logind[2065]: New session 22 of user core. Oct 8 20:00:11.789802 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 8 20:00:12.042253 sshd[5251]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:12.056490 systemd[1]: sshd@21-172.31.20.47:22-139.178.68.195:35946.service: Deactivated successfully. Oct 8 20:00:12.056829 systemd-logind[2065]: Session 22 logged out. Waiting for processes to exit. Oct 8 20:00:12.069032 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 20:00:12.070704 systemd-logind[2065]: Removed session 22. Oct 8 20:00:14.460464 update_engine[2068]: I20241008 20:00:14.460180 2068 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 8 20:00:14.460464 update_engine[2068]: I20241008 20:00:14.460381 2068 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 8 20:00:14.468693 update_engine[2068]: I20241008 20:00:14.468645 2068 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 8 20:00:14.469244 update_engine[2068]: I20241008 20:00:14.469204 2068 omaha_request_params.cc:62] Current group set to beta Oct 8 20:00:14.469538 update_engine[2068]: I20241008 20:00:14.469401 2068 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 8 20:00:14.469538 update_engine[2068]: I20241008 20:00:14.469419 2068 update_attempter.cc:643] Scheduling an action processor start. 
Oct 8 20:00:14.469538 update_engine[2068]: I20241008 20:00:14.469445 2068 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 20:00:14.469538 update_engine[2068]: I20241008 20:00:14.469493 2068 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 8 20:00:14.469721 update_engine[2068]: I20241008 20:00:14.469571 2068 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 20:00:14.469721 update_engine[2068]: I20241008 20:00:14.469581 2068 omaha_request_action.cc:272] Request: Oct 8 20:00:14.469721 update_engine[2068]: Oct 8 20:00:14.469721 update_engine[2068]: Oct 8 20:00:14.469721 update_engine[2068]: Oct 8 20:00:14.469721 update_engine[2068]: Oct 8 20:00:14.469721 update_engine[2068]: Oct 8 20:00:14.469721 update_engine[2068]: Oct 8 20:00:14.469721 update_engine[2068]: Oct 8 20:00:14.469721 update_engine[2068]: Oct 8 20:00:14.469721 update_engine[2068]: I20241008 20:00:14.469591 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:00:14.501603 update_engine[2068]: I20241008 20:00:14.497745 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:00:14.501603 update_engine[2068]: I20241008 20:00:14.498117 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:00:14.503716 update_engine[2068]: E20241008 20:00:14.503652 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:00:14.503847 update_engine[2068]: I20241008 20:00:14.503778 2068 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 8 20:00:14.504180 locksmithd[2130]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 8 20:00:17.072884 systemd[1]: Started sshd@22-172.31.20.47:22-139.178.68.195:35948.service - OpenSSH per-connection server daemon (139.178.68.195:35948). 
Oct 8 20:00:17.253268 sshd[5267]: Accepted publickey for core from 139.178.68.195 port 35948 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:17.256519 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:17.272690 systemd-logind[2065]: New session 23 of user core. Oct 8 20:00:17.282328 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 20:00:17.523352 sshd[5267]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:17.528643 systemd[1]: sshd@22-172.31.20.47:22-139.178.68.195:35948.service: Deactivated successfully. Oct 8 20:00:17.536265 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 20:00:17.536616 systemd-logind[2065]: Session 23 logged out. Waiting for processes to exit. Oct 8 20:00:17.539715 systemd-logind[2065]: Removed session 23. Oct 8 20:00:22.551209 systemd[1]: Started sshd@23-172.31.20.47:22-139.178.68.195:48808.service - OpenSSH per-connection server daemon (139.178.68.195:48808). Oct 8 20:00:22.717101 sshd[5284]: Accepted publickey for core from 139.178.68.195 port 48808 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:22.718915 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:22.725435 systemd-logind[2065]: New session 24 of user core. Oct 8 20:00:22.735908 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 8 20:00:22.992387 sshd[5284]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:22.997147 systemd[1]: sshd@23-172.31.20.47:22-139.178.68.195:48808.service: Deactivated successfully. Oct 8 20:00:23.003640 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 20:00:23.005320 systemd-logind[2065]: Session 24 logged out. Waiting for processes to exit. Oct 8 20:00:23.006513 systemd-logind[2065]: Removed session 24. 
Oct 8 20:00:24.386471 update_engine[2068]: I20241008 20:00:24.386394 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:00:24.386960 update_engine[2068]: I20241008 20:00:24.386685 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:00:24.387007 update_engine[2068]: I20241008 20:00:24.386958 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:00:24.387403 update_engine[2068]: E20241008 20:00:24.387339 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:00:24.387490 update_engine[2068]: I20241008 20:00:24.387440 2068 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 8 20:00:28.025159 systemd[1]: Started sshd@24-172.31.20.47:22-139.178.68.195:48818.service - OpenSSH per-connection server daemon (139.178.68.195:48818). Oct 8 20:00:28.205446 sshd[5298]: Accepted publickey for core from 139.178.68.195 port 48818 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:28.208903 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:28.245027 systemd-logind[2065]: New session 25 of user core. Oct 8 20:00:28.249795 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 8 20:00:28.449471 sshd[5298]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:28.453912 systemd[1]: sshd@24-172.31.20.47:22-139.178.68.195:48818.service: Deactivated successfully. Oct 8 20:00:28.463413 systemd[1]: session-25.scope: Deactivated successfully. Oct 8 20:00:28.466497 systemd-logind[2065]: Session 25 logged out. Waiting for processes to exit. Oct 8 20:00:28.468013 systemd-logind[2065]: Removed session 25. Oct 8 20:00:33.477350 systemd[1]: Started sshd@25-172.31.20.47:22-139.178.68.195:56320.service - OpenSSH per-connection server daemon (139.178.68.195:56320). 
Oct 8 20:00:33.648826 sshd[5315]: Accepted publickey for core from 139.178.68.195 port 56320 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:33.650835 sshd[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:33.656786 systemd-logind[2065]: New session 26 of user core. Oct 8 20:00:33.661682 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 8 20:00:33.868427 sshd[5315]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:33.873329 systemd[1]: sshd@25-172.31.20.47:22-139.178.68.195:56320.service: Deactivated successfully. Oct 8 20:00:33.879669 systemd[1]: session-26.scope: Deactivated successfully. Oct 8 20:00:33.882765 systemd-logind[2065]: Session 26 logged out. Waiting for processes to exit. Oct 8 20:00:33.885257 systemd-logind[2065]: Removed session 26. Oct 8 20:00:33.911156 systemd[1]: Started sshd@26-172.31.20.47:22-139.178.68.195:56324.service - OpenSSH per-connection server daemon (139.178.68.195:56324). Oct 8 20:00:34.089417 sshd[5328]: Accepted publickey for core from 139.178.68.195 port 56324 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:34.091156 sshd[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:34.107438 systemd-logind[2065]: New session 27 of user core. Oct 8 20:00:34.117936 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 8 20:00:34.385275 update_engine[2068]: I20241008 20:00:34.385205 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:00:34.387603 update_engine[2068]: I20241008 20:00:34.387563 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:00:34.387968 update_engine[2068]: I20241008 20:00:34.387917 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 8 20:00:34.390221 update_engine[2068]: E20241008 20:00:34.390035 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:00:34.390221 update_engine[2068]: I20241008 20:00:34.390185 2068 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 8 20:00:36.347473 containerd[2101]: time="2024-10-08T20:00:36.345133673Z" level=info msg="StopContainer for \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\" with timeout 30 (s)" Oct 8 20:00:36.352784 containerd[2101]: time="2024-10-08T20:00:36.351600072Z" level=info msg="Stop container \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\" with signal terminated" Oct 8 20:00:36.481945 containerd[2101]: time="2024-10-08T20:00:36.481747063Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:00:36.494595 containerd[2101]: time="2024-10-08T20:00:36.494521010Z" level=info msg="StopContainer for \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\" with timeout 2 (s)" Oct 8 20:00:36.495167 containerd[2101]: time="2024-10-08T20:00:36.495115109Z" level=info msg="Stop container \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\" with signal terminated" Oct 8 20:00:36.512313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838-rootfs.mount: Deactivated successfully. 
Oct 8 20:00:36.514802 systemd-networkd[1650]: lxc_health: Link DOWN Oct 8 20:00:36.514955 systemd-networkd[1650]: lxc_health: Lost carrier Oct 8 20:00:36.533805 containerd[2101]: time="2024-10-08T20:00:36.533598743Z" level=info msg="shim disconnected" id=141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838 namespace=k8s.io Oct 8 20:00:36.533805 containerd[2101]: time="2024-10-08T20:00:36.533756505Z" level=warning msg="cleaning up after shim disconnected" id=141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838 namespace=k8s.io Oct 8 20:00:36.533805 containerd[2101]: time="2024-10-08T20:00:36.533786382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:00:36.597277 containerd[2101]: time="2024-10-08T20:00:36.596564629Z" level=info msg="StopContainer for \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\" returns successfully" Oct 8 20:00:36.597744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68-rootfs.mount: Deactivated successfully. Oct 8 20:00:36.600269 containerd[2101]: time="2024-10-08T20:00:36.600231112Z" level=info msg="StopPodSandbox for \"c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b\"" Oct 8 20:00:36.600466 containerd[2101]: time="2024-10-08T20:00:36.600289593Z" level=info msg="Container to stop \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:00:36.607042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b-shm.mount: Deactivated successfully. 
Oct 8 20:00:36.616304 containerd[2101]: time="2024-10-08T20:00:36.616223966Z" level=info msg="shim disconnected" id=bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68 namespace=k8s.io Oct 8 20:00:36.616546 containerd[2101]: time="2024-10-08T20:00:36.616515680Z" level=warning msg="cleaning up after shim disconnected" id=bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68 namespace=k8s.io Oct 8 20:00:36.616889 containerd[2101]: time="2024-10-08T20:00:36.616859974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:00:36.655565 containerd[2101]: time="2024-10-08T20:00:36.655479776Z" level=warning msg="cleanup warnings time=\"2024-10-08T20:00:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 20:00:36.661245 containerd[2101]: time="2024-10-08T20:00:36.661200499Z" level=info msg="StopContainer for \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\" returns successfully" Oct 8 20:00:36.662461 containerd[2101]: time="2024-10-08T20:00:36.662426138Z" level=info msg="StopPodSandbox for \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\"" Oct 8 20:00:36.662578 containerd[2101]: time="2024-10-08T20:00:36.662480511Z" level=info msg="Container to stop \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:00:36.662578 containerd[2101]: time="2024-10-08T20:00:36.662497861Z" level=info msg="Container to stop \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:00:36.662578 containerd[2101]: time="2024-10-08T20:00:36.662510534Z" level=info msg="Container to stop \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Oct 8 20:00:36.662578 containerd[2101]: time="2024-10-08T20:00:36.662524835Z" level=info msg="Container to stop \"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:00:36.662578 containerd[2101]: time="2024-10-08T20:00:36.662538449Z" level=info msg="Container to stop \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:00:36.699790 containerd[2101]: time="2024-10-08T20:00:36.699713342Z" level=info msg="shim disconnected" id=c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b namespace=k8s.io Oct 8 20:00:36.699790 containerd[2101]: time="2024-10-08T20:00:36.699784532Z" level=warning msg="cleaning up after shim disconnected" id=c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b namespace=k8s.io Oct 8 20:00:36.699790 containerd[2101]: time="2024-10-08T20:00:36.699796409Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:00:36.719598 containerd[2101]: time="2024-10-08T20:00:36.719529728Z" level=info msg="shim disconnected" id=fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26 namespace=k8s.io Oct 8 20:00:36.721501 containerd[2101]: time="2024-10-08T20:00:36.721456327Z" level=warning msg="cleaning up after shim disconnected" id=fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26 namespace=k8s.io Oct 8 20:00:36.721768 containerd[2101]: time="2024-10-08T20:00:36.721743384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:00:36.733021 containerd[2101]: time="2024-10-08T20:00:36.732985391Z" level=info msg="TearDown network for sandbox \"c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b\" successfully" Oct 8 20:00:36.733979 containerd[2101]: time="2024-10-08T20:00:36.733943362Z" level=info msg="StopPodSandbox for 
\"c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b\" returns successfully" Oct 8 20:00:36.751623 containerd[2101]: time="2024-10-08T20:00:36.751584524Z" level=info msg="TearDown network for sandbox \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" successfully" Oct 8 20:00:36.751779 containerd[2101]: time="2024-10-08T20:00:36.751763022Z" level=info msg="StopPodSandbox for \"fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26\" returns successfully" Oct 8 20:00:36.841932 kubelet[3690]: I1008 20:00:36.841884 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-hostproc\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.843715 kubelet[3690]: I1008 20:00:36.842891 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-config-path\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.843715 kubelet[3690]: I1008 20:00:36.842940 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-host-proc-sys-kernel\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.843715 kubelet[3690]: I1008 20:00:36.842974 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-etc-cni-netd\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.843715 kubelet[3690]: I1008 20:00:36.843000 3690 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-cgroup\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.843715 kubelet[3690]: I1008 20:00:36.839536 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-hostproc" (OuterVolumeSpecName: "hostproc") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.843715 kubelet[3690]: I1008 20:00:36.843025 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cni-path\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.844142 kubelet[3690]: I1008 20:00:36.843047 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-xtables-lock\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.844142 kubelet[3690]: I1008 20:00:36.843072 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.844142 kubelet[3690]: I1008 20:00:36.843078 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpxlp\" (UniqueName: \"kubernetes.io/projected/a784a2a1-ca12-4350-98ec-e9034e5f0ab6-kube-api-access-fpxlp\") pod \"a784a2a1-ca12-4350-98ec-e9034e5f0ab6\" (UID: \"a784a2a1-ca12-4350-98ec-e9034e5f0ab6\") " Oct 8 20:00:36.844142 kubelet[3690]: I1008 20:00:36.843122 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmm2q\" (UniqueName: \"kubernetes.io/projected/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-kube-api-access-xmm2q\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.844142 kubelet[3690]: I1008 20:00:36.843195 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a784a2a1-ca12-4350-98ec-e9034e5f0ab6-cilium-config-path\") pod \"a784a2a1-ca12-4350-98ec-e9034e5f0ab6\" (UID: \"a784a2a1-ca12-4350-98ec-e9034e5f0ab6\") " Oct 8 20:00:36.844142 kubelet[3690]: I1008 20:00:36.843232 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-hubble-tls\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.844512 kubelet[3690]: I1008 20:00:36.843476 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-clustermesh-secrets\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.844512 kubelet[3690]: I1008 20:00:36.843514 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-host-proc-sys-net\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.844512 kubelet[3690]: I1008 20:00:36.843542 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-bpf-maps\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.844512 kubelet[3690]: I1008 20:00:36.843567 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-lib-modules\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.844512 kubelet[3690]: I1008 20:00:36.843591 3690 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-run\") pod \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\" (UID: \"a3d4cc3b-eee5-4a16-83b6-e1a826b12006\") " Oct 8 20:00:36.844512 kubelet[3690]: I1008 20:00:36.843646 3690 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-hostproc\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.844512 kubelet[3690]: I1008 20:00:36.843664 3690 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-etc-cni-netd\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.844902 kubelet[3690]: I1008 20:00:36.843693 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-run" 
(OuterVolumeSpecName: "cilium-run") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.847680 kubelet[3690]: I1008 20:00:36.847081 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:00:36.847680 kubelet[3690]: I1008 20:00:36.847149 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.850214 kubelet[3690]: I1008 20:00:36.849968 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.850214 kubelet[3690]: I1008 20:00:36.850188 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cni-path" (OuterVolumeSpecName: "cni-path") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.850350 kubelet[3690]: I1008 20:00:36.850221 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.857387 kubelet[3690]: I1008 20:00:36.856311 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a784a2a1-ca12-4350-98ec-e9034e5f0ab6-kube-api-access-fpxlp" (OuterVolumeSpecName: "kube-api-access-fpxlp") pod "a784a2a1-ca12-4350-98ec-e9034e5f0ab6" (UID: "a784a2a1-ca12-4350-98ec-e9034e5f0ab6"). InnerVolumeSpecName "kube-api-access-fpxlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:00:36.857387 kubelet[3690]: I1008 20:00:36.856512 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-kube-api-access-xmm2q" (OuterVolumeSpecName: "kube-api-access-xmm2q") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "kube-api-access-xmm2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:00:36.857387 kubelet[3690]: I1008 20:00:36.856556 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.860176 kubelet[3690]: I1008 20:00:36.860138 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:00:36.860446 kubelet[3690]: I1008 20:00:36.860423 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a784a2a1-ca12-4350-98ec-e9034e5f0ab6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a784a2a1-ca12-4350-98ec-e9034e5f0ab6" (UID: "a784a2a1-ca12-4350-98ec-e9034e5f0ab6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:00:36.860741 kubelet[3690]: I1008 20:00:36.860597 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.860859 kubelet[3690]: I1008 20:00:36.860844 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:00:36.863037 kubelet[3690]: I1008 20:00:36.863001 3690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a3d4cc3b-eee5-4a16-83b6-e1a826b12006" (UID: "a3d4cc3b-eee5-4a16-83b6-e1a826b12006"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 20:00:36.944195 kubelet[3690]: I1008 20:00:36.943899 3690 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a784a2a1-ca12-4350-98ec-e9034e5f0ab6-cilium-config-path\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944195 kubelet[3690]: I1008 20:00:36.943940 3690 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xmm2q\" (UniqueName: \"kubernetes.io/projected/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-kube-api-access-xmm2q\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944195 kubelet[3690]: I1008 20:00:36.943957 3690 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-hubble-tls\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944195 kubelet[3690]: I1008 20:00:36.943973 3690 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-clustermesh-secrets\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944195 kubelet[3690]: I1008 20:00:36.943989 3690 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-host-proc-sys-net\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944195 kubelet[3690]: I1008 20:00:36.944004 3690 reconciler_common.go:300] 
"Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-run\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944195 kubelet[3690]: I1008 20:00:36.944019 3690 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-bpf-maps\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944195 kubelet[3690]: I1008 20:00:36.944035 3690 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-lib-modules\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944680 kubelet[3690]: I1008 20:00:36.944053 3690 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-config-path\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944680 kubelet[3690]: I1008 20:00:36.944073 3690 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-host-proc-sys-kernel\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944680 kubelet[3690]: I1008 20:00:36.944089 3690 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cilium-cgroup\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944680 kubelet[3690]: I1008 20:00:36.944106 3690 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-cni-path\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944680 kubelet[3690]: I1008 20:00:36.944122 3690 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a3d4cc3b-eee5-4a16-83b6-e1a826b12006-xtables-lock\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:36.944680 kubelet[3690]: I1008 20:00:36.944140 3690 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fpxlp\" (UniqueName: \"kubernetes.io/projected/a784a2a1-ca12-4350-98ec-e9034e5f0ab6-kube-api-access-fpxlp\") on node \"ip-172-31-20-47\" DevicePath \"\"" Oct 8 20:00:37.446842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c337c4d289c0407dc9eee366db52dedb08064590c3e082704aad4b900023be4b-rootfs.mount: Deactivated successfully. Oct 8 20:00:37.447051 systemd[1]: var-lib-kubelet-pods-a784a2a1\x2dca12\x2d4350\x2d98ec\x2de9034e5f0ab6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfpxlp.mount: Deactivated successfully. Oct 8 20:00:37.447198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26-rootfs.mount: Deactivated successfully. Oct 8 20:00:37.447786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe375b6d9d83e9fdbdc0c67ec21db3fab517493709da1a5a073ecbab3d68ac26-shm.mount: Deactivated successfully. Oct 8 20:00:37.448102 systemd[1]: var-lib-kubelet-pods-a3d4cc3b\x2deee5\x2d4a16\x2d83b6\x2de1a826b12006-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxmm2q.mount: Deactivated successfully. Oct 8 20:00:37.448249 systemd[1]: var-lib-kubelet-pods-a3d4cc3b\x2deee5\x2d4a16\x2d83b6\x2de1a826b12006-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 8 20:00:37.448412 systemd[1]: var-lib-kubelet-pods-a3d4cc3b\x2deee5\x2d4a16\x2d83b6\x2de1a826b12006-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 8 20:00:37.504437 kubelet[3690]: I1008 20:00:37.504315 3690 scope.go:117] "RemoveContainer" containerID="141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838" Oct 8 20:00:37.510870 containerd[2101]: time="2024-10-08T20:00:37.510731769Z" level=info msg="RemoveContainer for \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\"" Oct 8 20:00:37.522535 containerd[2101]: time="2024-10-08T20:00:37.522197255Z" level=info msg="RemoveContainer for \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\" returns successfully" Oct 8 20:00:37.524653 kubelet[3690]: I1008 20:00:37.524291 3690 scope.go:117] "RemoveContainer" containerID="141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838" Oct 8 20:00:37.552241 containerd[2101]: time="2024-10-08T20:00:37.525938136Z" level=error msg="ContainerStatus for \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\": not found" Oct 8 20:00:37.557891 kubelet[3690]: E1008 20:00:37.557843 3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\": not found" containerID="141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838" Oct 8 20:00:37.561070 kubelet[3690]: I1008 20:00:37.561022 3690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838"} err="failed to get container status \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\": rpc error: code = NotFound desc = an error occurred when try to find container \"141aa5c8a8a60a5a7d2c3396a3df7712156584f9bd9122c2c37a736f6479a838\": not found" Oct 8 20:00:37.561226 
kubelet[3690]: I1008 20:00:37.561080 3690 scope.go:117] "RemoveContainer" containerID="bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68" Oct 8 20:00:37.565063 containerd[2101]: time="2024-10-08T20:00:37.565027916Z" level=info msg="RemoveContainer for \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\"" Oct 8 20:00:37.574541 containerd[2101]: time="2024-10-08T20:00:37.574495248Z" level=info msg="RemoveContainer for \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\" returns successfully" Oct 8 20:00:37.574902 kubelet[3690]: I1008 20:00:37.574872 3690 scope.go:117] "RemoveContainer" containerID="63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b" Oct 8 20:00:37.576536 containerd[2101]: time="2024-10-08T20:00:37.576496172Z" level=info msg="RemoveContainer for \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\"" Oct 8 20:00:37.581345 containerd[2101]: time="2024-10-08T20:00:37.581296946Z" level=info msg="RemoveContainer for \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\" returns successfully" Oct 8 20:00:37.581839 kubelet[3690]: I1008 20:00:37.581816 3690 scope.go:117] "RemoveContainer" containerID="9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd" Oct 8 20:00:37.583140 containerd[2101]: time="2024-10-08T20:00:37.583109685Z" level=info msg="RemoveContainer for \"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\"" Oct 8 20:00:37.587239 containerd[2101]: time="2024-10-08T20:00:37.587197345Z" level=info msg="RemoveContainer for \"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\" returns successfully" Oct 8 20:00:37.587500 kubelet[3690]: I1008 20:00:37.587466 3690 scope.go:117] "RemoveContainer" containerID="97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b" Oct 8 20:00:37.589314 containerd[2101]: time="2024-10-08T20:00:37.589283105Z" level=info msg="RemoveContainer for 
\"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\"" Oct 8 20:00:37.593593 containerd[2101]: time="2024-10-08T20:00:37.593559973Z" level=info msg="RemoveContainer for \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\" returns successfully" Oct 8 20:00:37.593834 kubelet[3690]: I1008 20:00:37.593810 3690 scope.go:117] "RemoveContainer" containerID="fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee" Oct 8 20:00:37.595133 containerd[2101]: time="2024-10-08T20:00:37.595100026Z" level=info msg="RemoveContainer for \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\"" Oct 8 20:00:37.603274 containerd[2101]: time="2024-10-08T20:00:37.603165669Z" level=info msg="RemoveContainer for \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\" returns successfully" Oct 8 20:00:37.603977 kubelet[3690]: I1008 20:00:37.603946 3690 scope.go:117] "RemoveContainer" containerID="bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68" Oct 8 20:00:37.604741 containerd[2101]: time="2024-10-08T20:00:37.604688584Z" level=error msg="ContainerStatus for \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\": not found" Oct 8 20:00:37.604880 kubelet[3690]: E1008 20:00:37.604857 3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\": not found" containerID="bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68" Oct 8 20:00:37.605032 kubelet[3690]: I1008 20:00:37.604965 3690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68"} err="failed to 
get container status \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdd2cb2aaa8963698680933385cc80cf202754ed1dfa06c92bb6d96cba61de68\": not found" Oct 8 20:00:37.605032 kubelet[3690]: I1008 20:00:37.604983 3690 scope.go:117] "RemoveContainer" containerID="63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b" Oct 8 20:00:37.605310 containerd[2101]: time="2024-10-08T20:00:37.605275148Z" level=error msg="ContainerStatus for \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\": not found" Oct 8 20:00:37.605494 kubelet[3690]: E1008 20:00:37.605464 3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\": not found" containerID="63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b" Oct 8 20:00:37.605628 kubelet[3690]: I1008 20:00:37.605500 3690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b"} err="failed to get container status \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\": rpc error: code = NotFound desc = an error occurred when try to find container \"63300921b388bc1efb9885aa49b9c67914066a0761d9af0d95958efa7d83001b\": not found" Oct 8 20:00:37.605628 kubelet[3690]: I1008 20:00:37.605514 3690 scope.go:117] "RemoveContainer" containerID="9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd" Oct 8 20:00:37.605959 containerd[2101]: time="2024-10-08T20:00:37.605923367Z" level=error msg="ContainerStatus for 
\"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\": not found" Oct 8 20:00:37.606068 kubelet[3690]: E1008 20:00:37.606055 3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\": not found" containerID="9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd" Oct 8 20:00:37.606144 kubelet[3690]: I1008 20:00:37.606086 3690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd"} err="failed to get container status \"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"9452c35f650ed2522f708e8bf4e4185fc24129308cd7f115386f57a81b7947cd\": not found" Oct 8 20:00:37.606144 kubelet[3690]: I1008 20:00:37.606099 3690 scope.go:117] "RemoveContainer" containerID="97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b" Oct 8 20:00:37.606303 containerd[2101]: time="2024-10-08T20:00:37.606271051Z" level=error msg="ContainerStatus for \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\": not found" Oct 8 20:00:37.606486 kubelet[3690]: E1008 20:00:37.606437 3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\": not found" 
containerID="97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b" Oct 8 20:00:37.606486 kubelet[3690]: I1008 20:00:37.606481 3690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b"} err="failed to get container status \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\": rpc error: code = NotFound desc = an error occurred when try to find container \"97f45c0a399b2d066b71f9389e4683e202e52d4ed624cfacd4640c079e84a09b\": not found" Oct 8 20:00:37.606784 kubelet[3690]: I1008 20:00:37.606495 3690 scope.go:117] "RemoveContainer" containerID="fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee" Oct 8 20:00:37.606918 containerd[2101]: time="2024-10-08T20:00:37.606885385Z" level=error msg="ContainerStatus for \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\": not found" Oct 8 20:00:37.607026 kubelet[3690]: E1008 20:00:37.607005 3690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\": not found" containerID="fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee" Oct 8 20:00:37.607100 kubelet[3690]: I1008 20:00:37.607039 3690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee"} err="failed to get container status \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa65f1ad5d9b3f9cd83341bed794c4e44b18b997a2ae90014d4a7063c6164aee\": not found" Oct 8 
20:00:38.243016 sshd[5328]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:38.247021 systemd[1]: sshd@26-172.31.20.47:22-139.178.68.195:56324.service: Deactivated successfully. Oct 8 20:00:38.254627 systemd-logind[2065]: Session 27 logged out. Waiting for processes to exit. Oct 8 20:00:38.254837 systemd[1]: session-27.scope: Deactivated successfully. Oct 8 20:00:38.259491 systemd-logind[2065]: Removed session 27. Oct 8 20:00:38.272455 systemd[1]: Started sshd@27-172.31.20.47:22-139.178.68.195:56332.service - OpenSSH per-connection server daemon (139.178.68.195:56332). Oct 8 20:00:38.458841 sshd[5497]: Accepted publickey for core from 139.178.68.195 port 56332 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:38.461499 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:38.468115 systemd-logind[2065]: New session 28 of user core. Oct 8 20:00:38.472830 systemd[1]: Started session-28.scope - Session 28 of User core. 
Oct 8 20:00:38.553316 kubelet[3690]: I1008 20:00:38.553207 3690 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a3d4cc3b-eee5-4a16-83b6-e1a826b12006" path="/var/lib/kubelet/pods/a3d4cc3b-eee5-4a16-83b6-e1a826b12006/volumes" Oct 8 20:00:38.554727 kubelet[3690]: I1008 20:00:38.554349 3690 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a784a2a1-ca12-4350-98ec-e9034e5f0ab6" path="/var/lib/kubelet/pods/a784a2a1-ca12-4350-98ec-e9034e5f0ab6/volumes" Oct 8 20:00:39.330883 ntpd[2041]: Deleting interface #10 lxc_health, fe80::142c:41ff:fefb:b77%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Oct 8 20:00:39.331682 ntpd[2041]: 8 Oct 20:00:39 ntpd[2041]: Deleting interface #10 lxc_health, fe80::142c:41ff:fefb:b77%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Oct 8 20:00:39.511325 sshd[5497]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:39.533629 kubelet[3690]: I1008 20:00:39.519501 3690 topology_manager.go:215] "Topology Admit Handler" podUID="92262a0c-a81a-4cb9-830d-54624032550b" podNamespace="kube-system" podName="cilium-87ng6" Oct 8 20:00:39.533629 kubelet[3690]: E1008 20:00:39.530625 3690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3d4cc3b-eee5-4a16-83b6-e1a826b12006" containerName="mount-cgroup" Oct 8 20:00:39.533629 kubelet[3690]: E1008 20:00:39.530735 3690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a784a2a1-ca12-4350-98ec-e9034e5f0ab6" containerName="cilium-operator" Oct 8 20:00:39.533629 kubelet[3690]: E1008 20:00:39.532713 3690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3d4cc3b-eee5-4a16-83b6-e1a826b12006" containerName="clean-cilium-state" Oct 8 20:00:39.533629 kubelet[3690]: E1008 20:00:39.532855 3690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3d4cc3b-eee5-4a16-83b6-e1a826b12006" containerName="cilium-agent" Oct 8 20:00:39.533629 kubelet[3690]: E1008 
20:00:39.532872 3690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3d4cc3b-eee5-4a16-83b6-e1a826b12006" containerName="apply-sysctl-overwrites" Oct 8 20:00:39.533629 kubelet[3690]: E1008 20:00:39.532886 3690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3d4cc3b-eee5-4a16-83b6-e1a826b12006" containerName="mount-bpf-fs" Oct 8 20:00:39.533629 kubelet[3690]: I1008 20:00:39.532939 3690 memory_manager.go:354] "RemoveStaleState removing state" podUID="a784a2a1-ca12-4350-98ec-e9034e5f0ab6" containerName="cilium-operator" Oct 8 20:00:39.533629 kubelet[3690]: I1008 20:00:39.532950 3690 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d4cc3b-eee5-4a16-83b6-e1a826b12006" containerName="cilium-agent" Oct 8 20:00:39.564920 systemd[1]: sshd@27-172.31.20.47:22-139.178.68.195:56332.service: Deactivated successfully. Oct 8 20:00:39.586183 systemd[1]: session-28.scope: Deactivated successfully. Oct 8 20:00:39.599853 systemd-logind[2065]: Session 28 logged out. Waiting for processes to exit. Oct 8 20:00:39.613754 systemd[1]: Started sshd@28-172.31.20.47:22-139.178.68.195:56336.service - OpenSSH per-connection server daemon (139.178.68.195:56336). Oct 8 20:00:39.618232 systemd-logind[2065]: Removed session 28. 
Oct 8 20:00:39.683464 kubelet[3690]: I1008 20:00:39.682998 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92262a0c-a81a-4cb9-830d-54624032550b-clustermesh-secrets\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.687843 kubelet[3690]: I1008 20:00:39.685885 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92262a0c-a81a-4cb9-830d-54624032550b-cilium-config-path\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.687843 kubelet[3690]: I1008 20:00:39.686187 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-cilium-run\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.687843 kubelet[3690]: I1008 20:00:39.687478 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-cilium-cgroup\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.687843 kubelet[3690]: I1008 20:00:39.687569 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-xtables-lock\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.687843 kubelet[3690]: I1008 20:00:39.687769 3690 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-host-proc-sys-kernel\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.689760 kubelet[3690]: I1008 20:00:39.689393 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-etc-cni-netd\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.693769 kubelet[3690]: I1008 20:00:39.690720 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92262a0c-a81a-4cb9-830d-54624032550b-cilium-ipsec-secrets\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.693769 kubelet[3690]: I1008 20:00:39.691298 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-host-proc-sys-net\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.693769 kubelet[3690]: I1008 20:00:39.691541 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92262a0c-a81a-4cb9-830d-54624032550b-hubble-tls\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.694533 kubelet[3690]: I1008 20:00:39.694077 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvglc\" (UniqueName: 
\"kubernetes.io/projected/92262a0c-a81a-4cb9-830d-54624032550b-kube-api-access-xvglc\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.694533 kubelet[3690]: I1008 20:00:39.694137 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-hostproc\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.694533 kubelet[3690]: I1008 20:00:39.694175 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-cni-path\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.694533 kubelet[3690]: I1008 20:00:39.694203 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-bpf-maps\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.694533 kubelet[3690]: I1008 20:00:39.694233 3690 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92262a0c-a81a-4cb9-830d-54624032550b-lib-modules\") pod \"cilium-87ng6\" (UID: \"92262a0c-a81a-4cb9-830d-54624032550b\") " pod="kube-system/cilium-87ng6" Oct 8 20:00:39.906719 containerd[2101]: time="2024-10-08T20:00:39.906672332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-87ng6,Uid:92262a0c-a81a-4cb9-830d-54624032550b,Namespace:kube-system,Attempt:0,}" Oct 8 20:00:39.941551 sshd[5511]: Accepted publickey for core from 139.178.68.195 port 56336 ssh2: RSA 
SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:39.945109 sshd[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:39.955734 systemd-logind[2065]: New session 29 of user core. Oct 8 20:00:39.961709 containerd[2101]: time="2024-10-08T20:00:39.956319461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:00:39.961709 containerd[2101]: time="2024-10-08T20:00:39.960055821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:00:39.961709 containerd[2101]: time="2024-10-08T20:00:39.960398758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:39.962904 containerd[2101]: time="2024-10-08T20:00:39.962134136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:00:39.967781 systemd[1]: Started session-29.scope - Session 29 of User core. Oct 8 20:00:40.069281 containerd[2101]: time="2024-10-08T20:00:40.068897623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-87ng6,Uid:92262a0c-a81a-4cb9-830d-54624032550b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\"" Oct 8 20:00:40.078301 containerd[2101]: time="2024-10-08T20:00:40.078255082Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 20:00:40.103245 sshd[5511]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:40.113963 systemd[1]: sshd@28-172.31.20.47:22-139.178.68.195:56336.service: Deactivated successfully. Oct 8 20:00:40.127501 systemd-logind[2065]: Session 29 logged out. 
Waiting for processes to exit. Oct 8 20:00:40.133213 systemd[1]: session-29.scope: Deactivated successfully. Oct 8 20:00:40.151997 systemd[1]: Started sshd@29-172.31.20.47:22-139.178.68.195:56338.service - OpenSSH per-connection server daemon (139.178.68.195:56338). Oct 8 20:00:40.158653 systemd-logind[2065]: Removed session 29. Oct 8 20:00:40.176569 containerd[2101]: time="2024-10-08T20:00:40.176517296Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb526dffc1ea058b0985c55c37e547455ab2cae5847e2174c565f0711b06386d\"" Oct 8 20:00:40.177421 containerd[2101]: time="2024-10-08T20:00:40.177197676Z" level=info msg="StartContainer for \"bb526dffc1ea058b0985c55c37e547455ab2cae5847e2174c565f0711b06386d\"" Oct 8 20:00:40.260341 containerd[2101]: time="2024-10-08T20:00:40.260296785Z" level=info msg="StartContainer for \"bb526dffc1ea058b0985c55c37e547455ab2cae5847e2174c565f0711b06386d\" returns successfully" Oct 8 20:00:40.365180 sshd[5566]: Accepted publickey for core from 139.178.68.195 port 56338 ssh2: RSA SHA256:a/9Iv00m6qg7PJXBlKjQoacVZ/jXpsGF+O4wYGPyBFI Oct 8 20:00:40.367086 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:40.374227 systemd-logind[2065]: New session 30 of user core. Oct 8 20:00:40.378519 systemd[1]: Started session-30.scope - Session 30 of User core. 
Oct 8 20:00:40.398676 containerd[2101]: time="2024-10-08T20:00:40.398545356Z" level=info msg="shim disconnected" id=bb526dffc1ea058b0985c55c37e547455ab2cae5847e2174c565f0711b06386d namespace=k8s.io Oct 8 20:00:40.398676 containerd[2101]: time="2024-10-08T20:00:40.398662061Z" level=warning msg="cleaning up after shim disconnected" id=bb526dffc1ea058b0985c55c37e547455ab2cae5847e2174c565f0711b06386d namespace=k8s.io Oct 8 20:00:40.398676 containerd[2101]: time="2024-10-08T20:00:40.398674362Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:00:40.525408 containerd[2101]: time="2024-10-08T20:00:40.523862550Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 20:00:40.581520 containerd[2101]: time="2024-10-08T20:00:40.581352696Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"00c188e6d33993bf26e2cd34a1c09aa7dfd089f827ba21becbbe2f7250ab2d12\"" Oct 8 20:00:40.586350 containerd[2101]: time="2024-10-08T20:00:40.584503943Z" level=info msg="StartContainer for \"00c188e6d33993bf26e2cd34a1c09aa7dfd089f827ba21becbbe2f7250ab2d12\"" Oct 8 20:00:40.704220 containerd[2101]: time="2024-10-08T20:00:40.703594173Z" level=info msg="StartContainer for \"00c188e6d33993bf26e2cd34a1c09aa7dfd089f827ba21becbbe2f7250ab2d12\" returns successfully" Oct 8 20:00:40.913795 kubelet[3690]: E1008 20:00:40.855857 3690 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 20:00:40.921563 containerd[2101]: time="2024-10-08T20:00:40.920943862Z" level=info msg="shim disconnected" id=00c188e6d33993bf26e2cd34a1c09aa7dfd089f827ba21becbbe2f7250ab2d12 
namespace=k8s.io Oct 8 20:00:40.921563 containerd[2101]: time="2024-10-08T20:00:40.921070059Z" level=warning msg="cleaning up after shim disconnected" id=00c188e6d33993bf26e2cd34a1c09aa7dfd089f827ba21becbbe2f7250ab2d12 namespace=k8s.io Oct 8 20:00:40.921563 containerd[2101]: time="2024-10-08T20:00:40.921084487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:00:41.538373 containerd[2101]: time="2024-10-08T20:00:41.538178292Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 20:00:41.591505 containerd[2101]: time="2024-10-08T20:00:41.589947025Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ec20d864e48623ac5aa5b3187559061b97b08d8450d3a824d8ca4bfc78bd957\"" Oct 8 20:00:41.598501 containerd[2101]: time="2024-10-08T20:00:41.596792502Z" level=info msg="StartContainer for \"1ec20d864e48623ac5aa5b3187559061b97b08d8450d3a824d8ca4bfc78bd957\"" Oct 8 20:00:41.807699 containerd[2101]: time="2024-10-08T20:00:41.807396736Z" level=info msg="StartContainer for \"1ec20d864e48623ac5aa5b3187559061b97b08d8450d3a824d8ca4bfc78bd957\" returns successfully" Oct 8 20:00:41.900982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ec20d864e48623ac5aa5b3187559061b97b08d8450d3a824d8ca4bfc78bd957-rootfs.mount: Deactivated successfully. 
Oct 8 20:00:41.937317 containerd[2101]: time="2024-10-08T20:00:41.936737515Z" level=info msg="shim disconnected" id=1ec20d864e48623ac5aa5b3187559061b97b08d8450d3a824d8ca4bfc78bd957 namespace=k8s.io Oct 8 20:00:41.937317 containerd[2101]: time="2024-10-08T20:00:41.937208135Z" level=warning msg="cleaning up after shim disconnected" id=1ec20d864e48623ac5aa5b3187559061b97b08d8450d3a824d8ca4bfc78bd957 namespace=k8s.io Oct 8 20:00:41.937317 containerd[2101]: time="2024-10-08T20:00:41.937231089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:00:42.537094 containerd[2101]: time="2024-10-08T20:00:42.536905589Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 20:00:42.578200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882552485.mount: Deactivated successfully. Oct 8 20:00:42.614069 containerd[2101]: time="2024-10-08T20:00:42.613815794Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48dbbd9a765f278c18829c81c25ea7233120e7a7c08d7cb889c0a99ec7e60632\"" Oct 8 20:00:42.616573 containerd[2101]: time="2024-10-08T20:00:42.615425778Z" level=info msg="StartContainer for \"48dbbd9a765f278c18829c81c25ea7233120e7a7c08d7cb889c0a99ec7e60632\"" Oct 8 20:00:42.720973 containerd[2101]: time="2024-10-08T20:00:42.720867610Z" level=info msg="StartContainer for \"48dbbd9a765f278c18829c81c25ea7233120e7a7c08d7cb889c0a99ec7e60632\" returns successfully" Oct 8 20:00:42.755148 containerd[2101]: time="2024-10-08T20:00:42.755059752Z" level=info msg="shim disconnected" id=48dbbd9a765f278c18829c81c25ea7233120e7a7c08d7cb889c0a99ec7e60632 namespace=k8s.io Oct 8 20:00:42.755148 containerd[2101]: time="2024-10-08T20:00:42.755137034Z" level=warning msg="cleaning up after shim 
disconnected" id=48dbbd9a765f278c18829c81c25ea7233120e7a7c08d7cb889c0a99ec7e60632 namespace=k8s.io
Oct 8 20:00:42.755148 containerd[2101]: time="2024-10-08T20:00:42.755150968Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:00:42.894092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48dbbd9a765f278c18829c81c25ea7233120e7a7c08d7cb889c0a99ec7e60632-rootfs.mount: Deactivated successfully.
Oct 8 20:00:43.546251 containerd[2101]: time="2024-10-08T20:00:43.543751051Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 8 20:00:43.580099 containerd[2101]: time="2024-10-08T20:00:43.580038970Z" level=info msg="CreateContainer within sandbox \"2426d7f6f2b245a58883dfbb033c81caabc0c91c71e81de452ee6449d37d69ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a2089c4d102e0f0236e8b94c364a14a0197afad7bb581234035292463c74fdfb\""
Oct 8 20:00:43.583347 containerd[2101]: time="2024-10-08T20:00:43.583296554Z" level=info msg="StartContainer for \"a2089c4d102e0f0236e8b94c364a14a0197afad7bb581234035292463c74fdfb\""
Oct 8 20:00:43.720584 kubelet[3690]: I1008 20:00:43.720550 3690 setters.go:568] "Node became not ready" node="ip-172-31-20-47" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-08T20:00:43Z","lastTransitionTime":"2024-10-08T20:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Oct 8 20:00:43.733311 containerd[2101]: time="2024-10-08T20:00:43.733262041Z" level=info msg="StartContainer for \"a2089c4d102e0f0236e8b94c364a14a0197afad7bb581234035292463c74fdfb\" returns successfully"
Oct 8 20:00:43.896864 systemd[1]: run-containerd-runc-k8s.io-a2089c4d102e0f0236e8b94c364a14a0197afad7bb581234035292463c74fdfb-runc.HrUo3P.mount: Deactivated successfully.
Oct 8 20:00:44.389486 update_engine[2068]: I20241008 20:00:44.388435 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 20:00:44.389486 update_engine[2068]: I20241008 20:00:44.388803 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 20:00:44.392394 update_engine[2068]: I20241008 20:00:44.392076 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 20:00:44.392394 update_engine[2068]: E20241008 20:00:44.392606 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 20:00:44.392394 update_engine[2068]: I20241008 20:00:44.392859 2068 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Oct 8 20:00:44.392394 update_engine[2068]: I20241008 20:00:44.392879 2068 omaha_request_action.cc:617] Omaha request response:
Oct 8 20:00:44.392394 update_engine[2068]: E20241008 20:00:44.392982 2068 omaha_request_action.cc:636] Omaha request network transfer failed.
Oct 8 20:00:44.398128 update_engine[2068]: I20241008 20:00:44.397768 2068 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Oct 8 20:00:44.398128 update_engine[2068]: I20241008 20:00:44.397802 2068 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 20:00:44.398128 update_engine[2068]: I20241008 20:00:44.397812 2068 update_attempter.cc:306] Processing Done.
Oct 8 20:00:44.398128 update_engine[2068]: E20241008 20:00:44.397831 2068 update_attempter.cc:619] Update failed.
Oct 8 20:00:44.406907 update_engine[2068]: I20241008 20:00:44.402566 2068 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Oct 8 20:00:44.406907 update_engine[2068]: I20241008 20:00:44.402611 2068 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Oct 8 20:00:44.406907 update_engine[2068]: I20241008 20:00:44.402730 2068 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Oct 8 20:00:44.406907 update_engine[2068]: I20241008 20:00:44.402832 2068 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Oct 8 20:00:44.406907 update_engine[2068]: I20241008 20:00:44.403126 2068 omaha_request_action.cc:271] Posting an Omaha request to disabled
Oct 8 20:00:44.406907 update_engine[2068]: I20241008 20:00:44.403172 2068 omaha_request_action.cc:272] Request:
Oct 8 20:00:44.406907 update_engine[2068]:
Oct 8 20:00:44.406907 update_engine[2068]:
Oct 8 20:00:44.406907 update_engine[2068]:
Oct 8 20:00:44.406907 update_engine[2068]:
Oct 8 20:00:44.406907 update_engine[2068]:
Oct 8 20:00:44.406907 update_engine[2068]:
Oct 8 20:00:44.406907 update_engine[2068]: I20241008 20:00:44.403220 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 20:00:44.406907 update_engine[2068]: I20241008 20:00:44.403698 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 20:00:44.406907 update_engine[2068]: I20241008 20:00:44.404287 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 20:00:44.407489 update_engine[2068]: E20241008 20:00:44.406949 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 20:00:44.407489 update_engine[2068]: I20241008 20:00:44.407011 2068 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Oct 8 20:00:44.407489 update_engine[2068]: I20241008 20:00:44.407022 2068 omaha_request_action.cc:617] Omaha request response:
Oct 8 20:00:44.407489 update_engine[2068]: I20241008 20:00:44.407033 2068 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 20:00:44.407489 update_engine[2068]: I20241008 20:00:44.407042 2068 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 20:00:44.407489 update_engine[2068]: I20241008 20:00:44.407048 2068 update_attempter.cc:306] Processing Done.
Oct 8 20:00:44.407489 update_engine[2068]: I20241008 20:00:44.407058 2068 update_attempter.cc:310] Error event sent.
Oct 8 20:00:44.407489 update_engine[2068]: I20241008 20:00:44.407069 2068 update_check_scheduler.cc:74] Next update check in 49m49s
Oct 8 20:00:44.410704 locksmithd[2130]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Oct 8 20:00:44.410704 locksmithd[2130]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Oct 8 20:00:44.621973 kubelet[3690]: I1008 20:00:44.617028 3690 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-87ng6" podStartSLOduration=5.616973747 podStartE2EDuration="5.616973747s" podCreationTimestamp="2024-10-08 20:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:00:44.616276171 +0000 UTC m=+134.305956395" watchObservedRunningTime="2024-10-08 20:00:44.616973747 +0000 UTC m=+134.306653969"
Oct 8 20:00:44.633391 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Oct 8 20:00:45.245274 systemd[1]: run-containerd-runc-k8s.io-a2089c4d102e0f0236e8b94c364a14a0197afad7bb581234035292463c74fdfb-runc.DhfF3M.mount: Deactivated successfully.
Oct 8 20:00:48.682716 systemd-networkd[1650]: lxc_health: Link UP
Oct 8 20:00:48.694944 systemd-networkd[1650]: lxc_health: Gained carrier
Oct 8 20:00:48.714985 (udev-worker)[6404]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 20:00:49.841945 systemd-networkd[1650]: lxc_health: Gained IPv6LL
Oct 8 20:00:52.331105 ntpd[2041]: Listen normally on 13 lxc_health [fe80::9cd0:d7ff:fe8c:61a2%14]:123
Oct 8 20:00:52.331953 ntpd[2041]: 8 Oct 20:00:52 ntpd[2041]: Listen normally on 13 lxc_health [fe80::9cd0:d7ff:fe8c:61a2%14]:123
Oct 8 20:00:52.865072 systemd[1]: run-containerd-runc-k8s.io-a2089c4d102e0f0236e8b94c364a14a0197afad7bb581234035292463c74fdfb-runc.I7tGeo.mount: Deactivated successfully.
Oct 8 20:00:55.328479 sshd[5566]: pam_unix(sshd:session): session closed for user core
Oct 8 20:00:55.337511 systemd[1]: sshd@29-172.31.20.47:22-139.178.68.195:56338.service: Deactivated successfully.
Oct 8 20:00:55.346803 systemd-logind[2065]: Session 30 logged out. Waiting for processes to exit.
Oct 8 20:00:55.352267 systemd[1]: session-30.scope: Deactivated successfully.
Oct 8 20:00:55.353770 systemd-logind[2065]: Removed session 30.
Oct 8 20:01:09.924731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0698bc137091b0afaad3b85d33566597a4ff29471a7f9409138cf7fb5748ab6e-rootfs.mount: Deactivated successfully.
Oct 8 20:01:09.952157 containerd[2101]: time="2024-10-08T20:01:09.952086564Z" level=info msg="shim disconnected" id=0698bc137091b0afaad3b85d33566597a4ff29471a7f9409138cf7fb5748ab6e namespace=k8s.io
Oct 8 20:01:09.952157 containerd[2101]: time="2024-10-08T20:01:09.952149710Z" level=warning msg="cleaning up after shim disconnected" id=0698bc137091b0afaad3b85d33566597a4ff29471a7f9409138cf7fb5748ab6e namespace=k8s.io
Oct 8 20:01:09.952157 containerd[2101]: time="2024-10-08T20:01:09.952163535Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:01:09.976033 containerd[2101]: time="2024-10-08T20:01:09.975970643Z" level=warning msg="cleanup warnings time=\"2024-10-08T20:01:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 8 20:01:10.660892 kubelet[3690]: I1008 20:01:10.660852 3690 scope.go:117] "RemoveContainer" containerID="0698bc137091b0afaad3b85d33566597a4ff29471a7f9409138cf7fb5748ab6e"
Oct 8 20:01:10.665511 containerd[2101]: time="2024-10-08T20:01:10.665468988Z" level=info msg="CreateContainer within sandbox \"179b33629f0ab79071e1cea59daef629d47509b5d19c2e4cf51c4a87c21ae341\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 8 20:01:10.696862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523863380.mount: Deactivated successfully.
Oct 8 20:01:10.704959 containerd[2101]: time="2024-10-08T20:01:10.704909903Z" level=info msg="CreateContainer within sandbox \"179b33629f0ab79071e1cea59daef629d47509b5d19c2e4cf51c4a87c21ae341\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"efa670a164dda92d12fc96eafcd7dd909ad526968a428d9ef4fc080bceaaaf0b\""
Oct 8 20:01:10.705715 containerd[2101]: time="2024-10-08T20:01:10.705682999Z" level=info msg="StartContainer for \"efa670a164dda92d12fc96eafcd7dd909ad526968a428d9ef4fc080bceaaaf0b\""
Oct 8 20:01:10.884218 containerd[2101]: time="2024-10-08T20:01:10.884172764Z" level=info msg="StartContainer for \"efa670a164dda92d12fc96eafcd7dd909ad526968a428d9ef4fc080bceaaaf0b\" returns successfully"
Oct 8 20:01:14.127388 kubelet[3690]: E1008 20:01:14.126373 3690 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-47?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 8 20:01:15.538241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36a0262f451969b0a9221628c119307df2978cbb5f94959ba023868d3c271bad-rootfs.mount: Deactivated successfully.
Oct 8 20:01:15.593448 containerd[2101]: time="2024-10-08T20:01:15.593253312Z" level=info msg="shim disconnected" id=36a0262f451969b0a9221628c119307df2978cbb5f94959ba023868d3c271bad namespace=k8s.io
Oct 8 20:01:15.593448 containerd[2101]: time="2024-10-08T20:01:15.593328369Z" level=warning msg="cleaning up after shim disconnected" id=36a0262f451969b0a9221628c119307df2978cbb5f94959ba023868d3c271bad namespace=k8s.io
Oct 8 20:01:15.594125 containerd[2101]: time="2024-10-08T20:01:15.593461439Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:01:15.678558 kubelet[3690]: I1008 20:01:15.678519 3690 scope.go:117] "RemoveContainer" containerID="36a0262f451969b0a9221628c119307df2978cbb5f94959ba023868d3c271bad"
Oct 8 20:01:15.681805 containerd[2101]: time="2024-10-08T20:01:15.681761240Z" level=info msg="CreateContainer within sandbox \"d1b192a9e90635d23dc92700e6a94b56af8ccf2dfbaaadba40747c8fc83f6b7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Oct 8 20:01:15.707494 containerd[2101]: time="2024-10-08T20:01:15.707448326Z" level=info msg="CreateContainer within sandbox \"d1b192a9e90635d23dc92700e6a94b56af8ccf2dfbaaadba40747c8fc83f6b7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e1c60e9ec5308b180c473526876f8a5796ca7a49e382287a4dbd7e35d65513d1\""
Oct 8 20:01:15.708167 containerd[2101]: time="2024-10-08T20:01:15.708136394Z" level=info msg="StartContainer for \"e1c60e9ec5308b180c473526876f8a5796ca7a49e382287a4dbd7e35d65513d1\""
Oct 8 20:01:15.812306 containerd[2101]: time="2024-10-08T20:01:15.812081004Z" level=info msg="StartContainer for \"e1c60e9ec5308b180c473526876f8a5796ca7a49e382287a4dbd7e35d65513d1\" returns successfully"
Oct 8 20:01:16.541181 systemd[1]: run-containerd-runc-k8s.io-e1c60e9ec5308b180c473526876f8a5796ca7a49e382287a4dbd7e35d65513d1-runc.YTDvbC.mount: Deactivated successfully.
Oct 8 20:01:24.127904 kubelet[3690]: E1008 20:01:24.127752 3690 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-47?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"