Jan 13 21:31:53.077680 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:31:53.077837 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:31:53.077888 kernel: BIOS-provided physical RAM map:
Jan 13 21:31:53.077900 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:31:53.077910 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:31:53.077920 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:31:53.077971 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 21:31:53.077984 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 21:31:53.077995 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 21:31:53.078005 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:31:53.078015 kernel: NX (Execute Disable) protection: active
Jan 13 21:31:53.078074 kernel: APIC: Static calls initialized
Jan 13 21:31:53.078087 kernel: SMBIOS 2.7 present.
Jan 13 21:31:53.078099 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 21:31:53.078149 kernel: Hypervisor detected: KVM
Jan 13 21:31:53.078163 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:31:53.078174 kernel: kvm-clock: using sched offset of 5993146687 cycles
Jan 13 21:31:53.078185 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:31:53.078232 kernel: tsc: Detected 2500.004 MHz processor
Jan 13 21:31:53.078246 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:31:53.078259 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:31:53.078274 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 21:31:53.078320 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:31:53.078334 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:31:53.078347 kernel: Using GB pages for direct mapping
Jan 13 21:31:53.078392 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:31:53.078410 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 21:31:53.078423 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 21:31:53.078435 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 21:31:53.078478 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 21:31:53.078500 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 21:31:53.078514 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 21:31:53.078528 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 21:31:53.078574 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 21:31:53.078589 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 21:31:53.078603 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 21:31:53.078693 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 21:31:53.078746 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 21:31:53.078760 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 21:31:53.078779 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 21:31:53.078831 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 21:31:53.078848 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 21:31:53.078862 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 21:31:53.078908 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 21:31:53.078929 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 21:31:53.078943 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 21:31:53.078988 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 21:31:53.079006 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 21:31:53.079022 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:31:53.079094 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:31:53.079112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 21:31:53.079158 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 21:31:53.079177 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 21:31:53.079198 kernel: Zone ranges:
Jan 13 21:31:53.079244 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:31:53.079361 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 21:31:53.079375 kernel: Normal empty
Jan 13 21:31:53.079388 kernel: Movable zone start for each node
Jan 13 21:31:53.079400 kernel: Early memory node ranges
Jan 13 21:31:53.079446 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:31:53.079699 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 21:31:53.079820 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 21:31:53.079834 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:31:53.079852 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:31:53.079864 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 21:31:53.079877 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 21:31:53.079889 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:31:53.079901 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 21:31:53.079914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:31:53.079927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:31:53.080102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:31:53.080147 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:31:53.080167 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:31:53.080181 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:31:53.080194 kernel: TSC deadline timer available
Jan 13 21:31:53.080207 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:31:53.080220 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:31:53.080232 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 21:31:53.080245 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:31:53.080259 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:31:53.080273 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:31:53.080289 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:31:53.080344 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:31:53.080422 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:31:53.083077 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:31:53.083092 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:31:53.083109 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:31:53.083125 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:31:53.083141 kernel: random: crng init done
Jan 13 21:31:53.083162 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:31:53.083178 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:31:53.083240 kernel: Fallback order for Node 0: 0
Jan 13 21:31:53.083258 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 21:31:53.083274 kernel: Policy zone: DMA32
Jan 13 21:31:53.083290 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:31:53.083307 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 13 21:31:53.083322 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:31:53.083338 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:31:53.083358 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:31:53.083373 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:31:53.083389 kernel: Dynamic Preempt: voluntary
Jan 13 21:31:53.083404 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:31:53.083421 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:31:53.083437 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:31:53.083453 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:31:53.083664 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:31:53.083709 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:31:53.083730 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:31:53.083747 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:31:53.083764 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:31:53.083779 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:31:53.083794 kernel: Console: colour VGA+ 80x25
Jan 13 21:31:53.083809 kernel: printk: console [ttyS0] enabled
Jan 13 21:31:53.083825 kernel: ACPI: Core revision 20230628
Jan 13 21:31:53.083842 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 21:31:53.083858 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:31:53.083877 kernel: x2apic enabled
Jan 13 21:31:53.083893 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:31:53.083922 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Jan 13 21:31:53.083942 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Jan 13 21:31:53.083959 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 21:31:53.083975 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 21:31:53.083992 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:31:53.084008 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:31:53.084024 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:31:53.084058 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:31:53.084072 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 21:31:53.084086 kernel: RETBleed: Vulnerable
Jan 13 21:31:53.084104 kernel: Speculative Store Bypass: Vulnerable
Jan 13 21:31:53.084118 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:31:53.084132 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:31:53.084145 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 21:31:53.084159 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:31:53.084173 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:31:53.084229 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:31:53.084250 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 21:31:53.084265 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 21:31:53.084280 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 21:31:53.084295 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 21:31:53.084309 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 21:31:53.084325 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 21:31:53.084340 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:31:53.084356 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 21:31:53.084371 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 21:31:53.084386 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 21:31:53.084498 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 21:31:53.084520 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 21:31:53.084536 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 21:31:53.084558 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 21:31:53.084579 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:31:53.084593 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:31:53.084608 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:31:53.084622 kernel: landlock: Up and running.
Jan 13 21:31:53.084635 kernel: SELinux: Initializing.
Jan 13 21:31:53.084648 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:31:53.084661 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:31:53.084675 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 21:31:53.084694 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:31:53.084708 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:31:53.084722 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:31:53.084738 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 21:31:53.084755 kernel: signal: max sigframe size: 3632
Jan 13 21:31:53.084770 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:31:53.084784 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:31:53.084797 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:31:53.084811 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:31:53.084829 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:31:53.084844 kernel: .... node #0, CPUs: #1
Jan 13 21:31:53.084862 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 21:31:53.084876 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 21:31:53.084892 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:31:53.084946 kernel: smpboot: Max logical packages: 1
Jan 13 21:31:53.084962 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Jan 13 21:31:53.085015 kernel: devtmpfs: initialized
Jan 13 21:31:53.085127 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:31:53.085244 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:31:53.085261 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:31:53.085305 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:31:53.085322 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:31:53.085335 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:31:53.085350 kernel: audit: type=2000 audit(1736803911.864:1): state=initialized audit_enabled=0 res=1
Jan 13 21:31:53.085439 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:31:53.085485 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:31:53.085507 kernel: cpuidle: using governor menu
Jan 13 21:31:53.085774 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:31:53.085794 kernel: dca service started, version 1.12.1
Jan 13 21:31:53.085810 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:31:53.085825 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:31:53.085841 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:31:53.085855 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:31:53.085870 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:31:53.085884 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:31:53.085903 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:31:53.085919 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:31:53.085935 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:31:53.085952 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:31:53.085969 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 21:31:53.085986 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:31:53.086002 kernel: ACPI: Interpreter enabled
Jan 13 21:31:53.086018 kernel: ACPI: PM: (supports S0 S5)
Jan 13 21:31:53.086054 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:31:53.086069 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:31:53.086090 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:31:53.086106 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 21:31:53.086123 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:31:53.086661 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:31:53.086819 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:31:53.087006 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:31:53.087042 kernel: acpiphp: Slot [3] registered
Jan 13 21:31:53.087600 kernel: acpiphp: Slot [4] registered
Jan 13 21:31:53.087619 kernel: acpiphp: Slot [5] registered
Jan 13 21:31:53.087632 kernel: acpiphp: Slot [6] registered
Jan 13 21:31:53.087646 kernel: acpiphp: Slot [7] registered
Jan 13 21:31:53.087659 kernel: acpiphp: Slot [8] registered
Jan 13 21:31:53.087672 kernel: acpiphp: Slot [9] registered
Jan 13 21:31:53.087688 kernel: acpiphp: Slot [10] registered
Jan 13 21:31:53.087704 kernel: acpiphp: Slot [11] registered
Jan 13 21:31:53.087720 kernel: acpiphp: Slot [12] registered
Jan 13 21:31:53.087742 kernel: acpiphp: Slot [13] registered
Jan 13 21:31:53.087758 kernel: acpiphp: Slot [14] registered
Jan 13 21:31:53.087774 kernel: acpiphp: Slot [15] registered
Jan 13 21:31:53.087790 kernel: acpiphp: Slot [16] registered
Jan 13 21:31:53.087807 kernel: acpiphp: Slot [17] registered
Jan 13 21:31:53.087824 kernel: acpiphp: Slot [18] registered
Jan 13 21:31:53.087841 kernel: acpiphp: Slot [19] registered
Jan 13 21:31:53.087858 kernel: acpiphp: Slot [20] registered
Jan 13 21:31:53.087875 kernel: acpiphp: Slot [21] registered
Jan 13 21:31:53.087892 kernel: acpiphp: Slot [22] registered
Jan 13 21:31:53.087913 kernel: acpiphp: Slot [23] registered
Jan 13 21:31:53.087929 kernel: acpiphp: Slot [24] registered
Jan 13 21:31:53.087997 kernel: acpiphp: Slot [25] registered
Jan 13 21:31:53.088061 kernel: acpiphp: Slot [26] registered
Jan 13 21:31:53.088080 kernel: acpiphp: Slot [27] registered
Jan 13 21:31:53.088095 kernel: acpiphp: Slot [28] registered
Jan 13 21:31:53.088113 kernel: acpiphp: Slot [29] registered
Jan 13 21:31:53.088129 kernel: acpiphp: Slot [30] registered
Jan 13 21:31:53.088146 kernel: acpiphp: Slot [31] registered
Jan 13 21:31:53.088168 kernel: PCI host bridge to bus 0000:00
Jan 13 21:31:53.088798 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:31:53.088951 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:31:53.089693 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:31:53.089933 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 21:31:53.090089 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:31:53.090462 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:31:53.090630 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 21:31:53.090783 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 21:31:53.091064 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 21:31:53.091281 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 21:31:53.091438 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 21:31:53.091668 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 21:31:53.091820 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 21:31:53.091975 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 21:31:53.092131 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 21:31:53.092505 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 21:31:53.092703 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 21:31:53.092903 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 21:31:53.093292 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 21:31:53.093958 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:31:53.094263 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 21:31:53.094401 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 21:31:53.094538 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 21:31:53.095950 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 21:31:53.095980 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:31:53.095997 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:31:53.096019 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:31:53.096068 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:31:53.096084 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:31:53.096101 kernel: iommu: Default domain type: Translated
Jan 13 21:31:53.096117 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:31:53.096134 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:31:53.096150 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:31:53.096166 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:31:53.096182 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 21:31:53.096336 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 21:31:53.096604 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 21:31:53.096786 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:31:53.096807 kernel: vgaarb: loaded
Jan 13 21:31:53.096824 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 21:31:53.096840 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 21:31:53.096856 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:31:53.096872 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:31:53.096889 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:31:53.096910 kernel: pnp: PnP ACPI init
Jan 13 21:31:53.096927 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:31:53.096943 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:31:53.096959 kernel: NET: Registered PF_INET protocol family
Jan 13 21:31:53.097047 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:31:53.097066 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 21:31:53.097083 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:31:53.097100 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:31:53.097121 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 21:31:53.097265 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 21:31:53.097285 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:31:53.097301 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:31:53.097318 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:31:53.097334 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:31:53.097529 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:31:53.097657 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:31:53.097779 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:31:53.097907 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 21:31:53.098133 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:31:53.098155 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:31:53.098172 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:31:53.098189 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Jan 13 21:31:53.098205 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:31:53.098222 kernel: Initialise system trusted keyrings
Jan 13 21:31:53.098238 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 21:31:53.098405 kernel: Key type asymmetric registered
Jan 13 21:31:53.098425 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:31:53.098441 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:31:53.098458 kernel: io scheduler mq-deadline registered
Jan 13 21:31:53.098474 kernel: io scheduler kyber registered
Jan 13 21:31:53.098490 kernel: io scheduler bfq registered
Jan 13 21:31:53.098506 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:31:53.098522 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:31:53.098538 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:31:53.098560 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:31:53.098576 kernel: i8042: Warning: Keylock active
Jan 13 21:31:53.098592 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:31:53.098609 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:31:53.098777 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 21:31:53.098952 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 21:31:53.099303 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:31:52 UTC (1736803912)
Jan 13 21:31:53.099579 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 21:31:53.099609 kernel: intel_pstate: CPU model not supported
Jan 13 21:31:53.099625 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:31:53.099642 kernel: Segment Routing with IPv6
Jan 13 21:31:53.099658 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:31:53.099674 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:31:53.099723 kernel: Key type dns_resolver registered
Jan 13 21:31:53.099739 kernel: IPI shorthand broadcast: enabled
Jan 13 21:31:53.099756 kernel: sched_clock: Marking stable (652003853, 229329284)->(966913792, -85580655)
Jan 13 21:31:53.099771 kernel: registered taskstats version 1
Jan 13 21:31:53.099817 kernel: Loading compiled-in X.509 certificates
Jan 13 21:31:53.099833 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:31:53.099962 kernel: Key type .fscrypt registered
Jan 13 21:31:53.099978 kernel: Key type fscrypt-provisioning registered
Jan 13 21:31:53.099995 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:31:53.100043 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:31:53.100061 kernel: ima: No architecture policies found
Jan 13 21:31:53.100078 kernel: clk: Disabling unused clocks
Jan 13 21:31:53.100117 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:31:53.100140 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:31:53.100156 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:31:53.100172 kernel: Run /init as init process
Jan 13 21:31:53.100297 kernel: with arguments:
Jan 13 21:31:53.100315 kernel: /init
Jan 13 21:31:53.100331 kernel: with environment:
Jan 13 21:31:53.100347 kernel: HOME=/
Jan 13 21:31:53.100362 kernel: TERM=linux
Jan 13 21:31:53.100378 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:31:53.100405 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:31:53.100494 systemd[1]: Detected virtualization amazon.
Jan 13 21:31:53.100518 systemd[1]: Detected architecture x86-64.
Jan 13 21:31:53.100536 systemd[1]: Running in initrd.
Jan 13 21:31:53.100553 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:31:53.100573 systemd[1]: Hostname set to <localhost>.
Jan 13 21:31:53.100591 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:31:53.100609 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:31:53.100626 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:31:53.100689 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:31:53.100709 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:31:53.100727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:31:53.100745 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:31:53.100767 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:31:53.100787 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:31:53.100805 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:31:53.100823 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:31:53.100841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:31:53.100859 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:31:53.100880 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:31:53.100898 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:31:53.100916 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:31:53.100934 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:31:53.101094 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:31:53.101118 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:31:53.101137 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:31:53.101218 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:31:53.101236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:31:53.101259 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:31:53.101278 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:31:53.101296 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:31:53.101314 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:31:53.101328 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:31:53.101348 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:31:53.101368 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:31:53.101463 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:31:53.101483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:31:53.101503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:31:53.101520 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:31:53.101538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:31:53.101555 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:31:53.101613 systemd-journald[178]: Collecting audit messages is disabled.
Jan 13 21:31:53.101653 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:31:53.101674 systemd-journald[178]: Journal started
Jan 13 21:31:53.101715 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2df83abf08a95992e85ad3bd33fc0c) is 4.8M, max 38.6M, 33.7M free.
Jan 13 21:31:53.107489 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:31:53.114379 systemd-modules-load[179]: Inserted module 'overlay'
Jan 13 21:31:53.243321 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:31:53.243358 kernel: Bridge firewalling registered
Jan 13 21:31:53.123237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:31:53.167153 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 13 21:31:53.264389 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:31:53.273962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:31:53.280135 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:31:53.294326 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:31:53.305544 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:31:53.326390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:31:53.327490 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:31:53.355364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:31:53.360869 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:31:53.364899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:31:53.371500 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:31:53.387518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:31:53.406706 dracut-cmdline[211]: dracut-dracut-053
Jan 13 21:31:53.416574 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:31:53.464478 systemd-resolved[214]: Positive Trust Anchors:
Jan 13 21:31:53.464498 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:31:53.464564 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:31:53.479890 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jan 13 21:31:53.482509 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:31:53.484084 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:31:53.533072 kernel: SCSI subsystem initialized
Jan 13 21:31:53.543069 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:31:53.554065 kernel: iscsi: registered transport (tcp)
Jan 13 21:31:53.576149 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:31:53.576231 kernel: QLogic iSCSI HBA Driver
Jan 13 21:31:53.615192 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:31:53.621264 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:31:53.648134 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:31:53.648217 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:31:53.648238 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:31:53.692065 kernel: raid6: avx512x4 gen() 16460 MB/s
Jan 13 21:31:53.709071 kernel: raid6: avx512x2 gen() 16728 MB/s
Jan 13 21:31:53.726067 kernel: raid6: avx512x1 gen() 17310 MB/s
Jan 13 21:31:53.743064 kernel: raid6: avx2x4 gen() 17282 MB/s
Jan 13 21:31:53.760065 kernel: raid6: avx2x2 gen() 16606 MB/s
Jan 13 21:31:53.777073 kernel: raid6: avx2x1 gen() 13182 MB/s
Jan 13 21:31:53.777162 kernel: raid6: using algorithm avx512x1 gen() 17310 MB/s
Jan 13 21:31:53.794064 kernel: raid6: .... xor() 19473 MB/s, rmw enabled
Jan 13 21:31:53.794140 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 21:31:53.816066 kernel: xor: automatically using best checksumming function avx
Jan 13 21:31:54.015058 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:31:54.026122 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:31:54.033248 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:31:54.047433 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 13 21:31:54.052711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:31:54.060239 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:31:54.083417 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Jan 13 21:31:54.114184 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:31:54.119267 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:31:54.185767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:31:54.195888 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:31:54.224710 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:31:54.228404 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:31:54.229990 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:31:54.230418 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:31:54.242316 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:31:54.270047 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:31:54.293310 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:31:54.311055 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:31:54.316170 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:31:54.316236 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 21:31:54.330057 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 21:31:54.330252 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 21:31:54.330404 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ef:4a:97:b9:a9
Jan 13 21:31:54.323681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:31:54.323866 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:31:54.325949 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:31:54.327128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:31:54.327330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:31:54.328606 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:31:54.342727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:31:54.345105 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:31:54.357123 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 21:31:54.357465 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 21:31:54.373108 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 21:31:54.378058 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:31:54.378122 kernel: GPT:9289727 != 16777215
Jan 13 21:31:54.378141 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:31:54.378159 kernel: GPT:9289727 != 16777215
Jan 13 21:31:54.378176 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:31:54.378193 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:31:54.472366 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 21:31:54.516546 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Jan 13 21:31:54.516584 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (447)
Jan 13 21:31:54.527105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:31:54.537308 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:31:54.585770 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 21:31:54.590511 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 21:31:54.592080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:31:54.608070 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 21:31:54.627124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 21:31:54.639329 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:31:54.669561 disk-uuid[631]: Primary Header is updated.
Jan 13 21:31:54.669561 disk-uuid[631]: Secondary Entries is updated.
Jan 13 21:31:54.669561 disk-uuid[631]: Secondary Header is updated.
Jan 13 21:31:54.682062 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:31:54.695063 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:31:55.703386 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:31:55.705986 disk-uuid[632]: The operation has completed successfully.
Jan 13 21:31:55.913612 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:31:55.913754 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:31:55.935701 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:31:55.944943 sh[890]: Success
Jan 13 21:31:55.988133 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:31:56.119868 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:31:56.140300 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:31:56.153242 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:31:56.172056 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:31:56.172129 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:31:56.173926 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:31:56.173983 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:31:56.174560 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:31:56.286062 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:31:56.301259 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:31:56.305672 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:31:56.317249 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:31:56.327315 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:31:56.362865 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:31:56.362935 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:31:56.362958 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:31:56.370102 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:31:56.394380 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:31:56.395569 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:31:56.421209 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:31:56.433579 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:31:56.514316 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:31:56.525508 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:31:56.564382 systemd-networkd[1083]: lo: Link UP
Jan 13 21:31:56.564394 systemd-networkd[1083]: lo: Gained carrier
Jan 13 21:31:56.567320 systemd-networkd[1083]: Enumeration completed
Jan 13 21:31:56.567775 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:31:56.567780 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:31:56.573277 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:31:56.574655 systemd[1]: Reached target network.target - Network.
Jan 13 21:31:56.583848 systemd-networkd[1083]: eth0: Link UP
Jan 13 21:31:56.583854 systemd-networkd[1083]: eth0: Gained carrier
Jan 13 21:31:56.583868 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:31:56.601365 systemd-networkd[1083]: eth0: DHCPv4 address 172.31.19.228/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:31:56.650964 ignition[1027]: Ignition 2.19.0
Jan 13 21:31:56.650978 ignition[1027]: Stage: fetch-offline
Jan 13 21:31:56.651259 ignition[1027]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:31:56.651272 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:31:56.651663 ignition[1027]: Ignition finished successfully
Jan 13 21:31:56.657618 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:31:56.664552 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:31:56.685345 ignition[1091]: Ignition 2.19.0
Jan 13 21:31:56.685451 ignition[1091]: Stage: fetch
Jan 13 21:31:56.686011 ignition[1091]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:31:56.686025 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:31:56.686157 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:31:56.706061 ignition[1091]: PUT result: OK
Jan 13 21:31:56.709733 ignition[1091]: parsed url from cmdline: ""
Jan 13 21:31:56.709744 ignition[1091]: no config URL provided
Jan 13 21:31:56.709756 ignition[1091]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:31:56.709772 ignition[1091]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:31:56.709796 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:31:56.710703 ignition[1091]: PUT result: OK
Jan 13 21:31:56.710758 ignition[1091]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 21:31:56.714754 ignition[1091]: GET result: OK
Jan 13 21:31:56.714865 ignition[1091]: parsing config with SHA512: 0d72e7e09572cf93d755bd1dcaac8fea3445d37c3c4a997fc52ca3cce54cdb7feb47503384e26c91c0283b84648eb4acaafb063442c14a389538a73e97ecffd8
Jan 13 21:31:56.722809 unknown[1091]: fetched base config from "system"
Jan 13 21:31:56.722823 unknown[1091]: fetched base config from "system"
Jan 13 21:31:56.722831 unknown[1091]: fetched user config from "aws"
Jan 13 21:31:56.725193 ignition[1091]: fetch: fetch complete
Jan 13 21:31:56.725201 ignition[1091]: fetch: fetch passed
Jan 13 21:31:56.726025 ignition[1091]: Ignition finished successfully
Jan 13 21:31:56.730888 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:31:56.740341 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:31:56.762195 ignition[1097]: Ignition 2.19.0
Jan 13 21:31:56.762213 ignition[1097]: Stage: kargs
Jan 13 21:31:56.763357 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:31:56.763431 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:31:56.763566 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:31:56.769860 ignition[1097]: PUT result: OK
Jan 13 21:31:56.775114 ignition[1097]: kargs: kargs passed
Jan 13 21:31:56.775194 ignition[1097]: Ignition finished successfully
Jan 13 21:31:56.777594 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:31:56.785423 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:31:56.837792 ignition[1103]: Ignition 2.19.0
Jan 13 21:31:56.837812 ignition[1103]: Stage: disks
Jan 13 21:31:56.838398 ignition[1103]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:31:56.838413 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:31:56.838679 ignition[1103]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:31:56.840901 ignition[1103]: PUT result: OK
Jan 13 21:31:56.847488 ignition[1103]: disks: disks passed
Jan 13 21:31:56.847551 ignition[1103]: Ignition finished successfully
Jan 13 21:31:56.849997 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:31:56.852871 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:31:56.855294 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:31:56.858249 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:31:56.860825 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:31:56.863122 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:31:56.870245 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:31:56.908965 systemd-fsck[1111]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:31:56.917565 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:31:56.927292 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:31:57.088069 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:31:57.089118 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:31:57.091269 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:31:57.098146 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:31:57.107275 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:31:57.112582 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:31:57.112660 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:31:57.112694 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:31:57.117802 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:31:57.126431 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:31:57.133077 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1130)
Jan 13 21:31:57.136727 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:31:57.136784 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:31:57.136812 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:31:57.141076 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:31:57.143614 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:31:57.405520 initrd-setup-root[1159]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:31:57.412049 initrd-setup-root[1166]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:31:57.432964 initrd-setup-root[1173]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:31:57.443441 initrd-setup-root[1180]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:31:57.762997 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:31:57.774442 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:31:57.801191 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:31:57.829069 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:31:57.830877 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:31:57.872879 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:31:57.875369 ignition[1247]: INFO : Ignition 2.19.0
Jan 13 21:31:57.876524 ignition[1247]: INFO : Stage: mount
Jan 13 21:31:57.877622 ignition[1247]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:31:57.877622 ignition[1247]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:31:57.882007 ignition[1247]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:31:57.883716 ignition[1247]: INFO : PUT result: OK
Jan 13 21:31:57.886621 ignition[1247]: INFO : mount: mount passed
Jan 13 21:31:57.887416 ignition[1247]: INFO : Ignition finished successfully
Jan 13 21:31:57.889696 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:31:57.897824 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:31:57.945455 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:31:57.988056 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1259)
Jan 13 21:31:57.988118 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:31:57.989950 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:31:57.989987 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:31:57.996070 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:31:57.996758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:31:58.046429 ignition[1276]: INFO : Ignition 2.19.0
Jan 13 21:31:58.046429 ignition[1276]: INFO : Stage: files
Jan 13 21:31:58.048744 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:31:58.048744 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:31:58.051302 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:31:58.052977 ignition[1276]: INFO : PUT result: OK
Jan 13 21:31:58.056579 ignition[1276]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:31:58.059580 ignition[1276]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:31:58.059580 ignition[1276]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:31:58.066111 ignition[1276]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:31:58.068430 ignition[1276]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:31:58.071221 unknown[1276]: wrote ssh authorized keys file for user: core
Jan 13 21:31:58.074357 ignition[1276]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:31:58.078983 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:31:58.081159 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:31:58.193776 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:31:58.276417 systemd-networkd[1083]: eth0: Gained IPv6LL
Jan 13 21:31:58.341963 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:31:58.341963 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:31:58.346808 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 21:31:58.808588 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:31:58.992414 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:31:58.996298 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:31:59.001592 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:31:59.001592 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:31:59.011079 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:31:59.011079 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:31:59.011079 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:31:59.028314 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles:
op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:31:59.028314 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:31:59.028314 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:31:59.028314 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:31:59.028314 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:31:59.028314 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:31:59.028314 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:31:59.028314 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 21:31:59.288364 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 21:31:59.908497 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:31:59.908497 ignition[1276]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 21:31:59.920793 ignition[1276]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:31:59.929804 ignition[1276]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:31:59.929804 ignition[1276]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 21:31:59.929804 ignition[1276]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:31:59.941954 ignition[1276]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:31:59.941954 ignition[1276]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:31:59.941954 ignition[1276]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:31:59.941954 ignition[1276]: INFO : files: files passed Jan 13 21:31:59.941954 ignition[1276]: INFO : Ignition finished successfully Jan 13 21:31:59.934519 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:31:59.964323 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:31:59.988166 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:31:59.998071 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:31:59.998199 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
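All of the files-stage operations above (fetching the helm and cilium archives, writing the home-directory manifests and update.conf, linking and downloading the kubernetes sysext image, and enabling prepare-helm.service) are driven by a declarative Ignition config supplied at instance launch. Below is a hypothetical, heavily abridged sketch of a config that would produce operations of this shape, rendered from Python; the paths and URLs are copied from the log, while the spec version, unit contents, and omitted fields are placeholders.

    import json

    # Hypothetical, heavily abridged Ignition (spec 3.x) config that would
    # produce operations of the shape logged above. Real configs also carry
    # verification hashes, modes, and owners; this is NOT the config this
    # machine actually booted with.
    config = {
        "ignition": {"version": "3.3.0"},                # placeholder 3.x version
        "storage": {"files": [{
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": {"source":
                "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
        }]},
        "systemd": {"units": [{
            "name": "prepare-helm.service",
            "enabled": True,        # corresponds to "setting preset to enabled"
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n...",  # placeholder
        }]},
    }
    print(json.dumps(config, indent=2))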
Jan 13 21:32:00.028625 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:32:00.028625 initrd-setup-root-after-ignition[1304]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:32:00.036243 initrd-setup-root-after-ignition[1308]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:32:00.048915 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:32:00.049403 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:32:00.075730 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:32:00.180835 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:32:00.180948 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:32:00.182521 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:32:00.186044 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:32:00.188847 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:32:00.199674 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:32:00.230107 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:32:00.236647 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:32:00.266810 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:32:00.269895 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:32:00.270413 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:32:00.271235 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:32:00.272065 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:32:00.273785 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:32:00.273973 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:32:00.274386 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:32:00.274602 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:32:00.274877 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:32:00.275101 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:32:00.275296 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:32:00.275618 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:32:00.275921 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:32:00.276278 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:32:00.276419 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:32:00.276573 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:32:00.277600 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:32:00.277810 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:32:00.277980 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 13 21:32:00.295113 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:32:00.296704 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:32:00.296872 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:32:00.299867 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:32:00.300263 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:32:00.302985 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:32:00.303146 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:32:00.335110 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:32:00.353337 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:32:00.356097 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:32:00.356347 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:32:00.359370 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:32:00.359520 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:32:00.371352 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:32:00.371487 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:32:00.386616 ignition[1328]: INFO : Ignition 2.19.0 Jan 13 21:32:00.387904 ignition[1328]: INFO : Stage: umount Jan 13 21:32:00.387904 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:32:00.387904 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:32:00.387904 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:32:00.392421 ignition[1328]: INFO : PUT result: OK Jan 13 21:32:00.397907 ignition[1328]: INFO : umount: umount passed Jan 13 21:32:00.401323 ignition[1328]: INFO : Ignition finished successfully Jan 13 21:32:00.399099 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:32:00.402341 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:32:00.402527 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:32:00.404976 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:32:00.405306 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:32:00.409237 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:32:00.410362 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:32:00.412607 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:32:00.412682 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:32:00.413765 systemd[1]: Stopped target network.target - Network. Jan 13 21:32:00.416688 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:32:00.416778 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:32:00.419111 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:32:00.422663 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:32:00.430242 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:32:00.439955 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 13 21:32:00.449981 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:32:00.450402 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:32:00.450470 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:32:00.453793 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:32:00.453856 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:32:00.456439 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:32:00.457514 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:32:00.458717 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:32:00.458904 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:32:00.462309 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:32:00.469698 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:32:00.470204 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:32:00.470318 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:32:00.473620 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:32:00.473732 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:32:00.474262 systemd-networkd[1083]: eth0: DHCPv6 lease lost Jan 13 21:32:00.476881 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:32:00.477139 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:32:00.479853 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:32:00.479922 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:32:00.493902 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:32:00.496190 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:32:00.496294 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:32:00.503279 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:32:00.507772 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:32:00.507924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:32:00.519449 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:32:00.519671 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:32:00.523773 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:32:00.523888 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:32:00.525416 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:32:00.525470 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:32:00.527558 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:32:00.527607 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:32:00.530144 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:32:00.530199 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:32:00.532333 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:32:00.532389 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:32:00.542308 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 13 21:32:00.542391 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:32:00.551943 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:32:00.553205 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:32:00.553385 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:32:00.554634 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:32:00.554688 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:32:00.556445 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:32:00.556499 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:32:00.558926 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:32:00.558979 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:32:00.560362 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:32:00.560406 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:32:00.561760 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:32:00.561804 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:32:00.563056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:32:00.563094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:00.588460 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:32:00.588628 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:32:00.591011 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:32:00.601294 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:32:00.612587 systemd[1]: Switching root. Jan 13 21:32:00.662088 systemd-journald[178]: Journal stopped Jan 13 21:32:04.178158 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 13 21:32:04.178274 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:32:04.178307 kernel: SELinux: policy capability open_perms=1 Jan 13 21:32:04.178331 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:32:04.178360 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:32:04.178381 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:32:04.178402 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:32:04.178426 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:32:04.178456 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:32:04.178479 kernel: audit: type=1403 audit(1736803921.960:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:32:04.178509 systemd[1]: Successfully loaded SELinux policy in 155.177ms. Jan 13 21:32:04.178549 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 67.513ms. 
Jan 13 21:32:04.178584 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:32:04.178611 systemd[1]: Detected virtualization amazon. Jan 13 21:32:04.178636 systemd[1]: Detected architecture x86-64. Jan 13 21:32:04.178659 systemd[1]: Detected first boot. Jan 13 21:32:04.178685 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:32:04.178710 zram_generator::config[1372]: No configuration found. Jan 13 21:32:04.178738 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:32:04.178763 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:32:04.178790 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:32:04.178818 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:32:04.178843 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:32:04.178872 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:32:04.178895 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:32:04.178920 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:32:04.178946 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:32:04.180198 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:32:04.180224 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:32:04.180250 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:32:04.180271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:32:04.180293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:32:04.180315 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:32:04.180333 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:32:04.180356 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:32:04.180378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:32:04.180398 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:32:04.180424 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:32:04.180447 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:32:04.180468 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:32:04.180489 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:32:04.180510 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:32:04.180538 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:32:04.180558 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:32:04.180576 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:32:04.180595 systemd[1]: Reached target swap.target - Swaps. 
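The long "+PAM +AUDIT ..." banner lists the compile-time options of this systemd 255 build. A throwaway Python sketch splits it into enabled and disabled feature sets; note "-ACL", which is consistent with the "ACLs are not supported, ignoring" tmpfiles messages later in the log.

    # Flag portion of the feature banner, copied from the log line above:
    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
              "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
              "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
              "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
              "-XKBCOMMON +UTMP -SYSVINIT")
    enabled = sorted(t[1:] for t in banner.split() if t.startswith("+"))
    disabled = sorted(t[1:] for t in banner.split() if t.startswith("-"))
    print("enabled: ", enabled)
    print("disabled:", disabled)   # includes ACL, hence the later
                                   # "ACLs are not supported" messages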
Jan 13 21:32:04.180617 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:32:04.180636 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:32:04.180654 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:32:04.180673 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:32:04.180694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:32:04.180714 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:32:04.180732 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:32:04.180750 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:32:04.180769 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:32:04.180794 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:04.180814 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:32:04.180832 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:32:04.180850 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:32:04.180869 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:32:04.180980 systemd[1]: Reached target machines.target - Containers. Jan 13 21:32:04.181000 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:32:04.181279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:04.181307 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:32:04.181326 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:32:04.181344 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:32:04.181363 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:32:04.181383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:32:04.181402 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:32:04.181421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:32:04.181442 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:32:04.181462 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:32:04.181487 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:32:04.181507 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:32:04.181526 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:32:04.181553 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:32:04.181571 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:32:04.181591 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:32:04.181614 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 13 21:32:04.181633 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:32:04.181654 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:32:04.181680 systemd[1]: Stopped verity-setup.service. Jan 13 21:32:04.181700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:04.181823 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:32:04.181851 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:32:04.181871 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:32:04.181890 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:32:04.181910 kernel: loop: module loaded Jan 13 21:32:04.181938 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:32:04.181961 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:32:04.181985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:32:04.182009 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:32:04.182044 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:32:04.189423 kernel: fuse: init (API version 7.39) Jan 13 21:32:04.189467 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:32:04.189495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:32:04.189522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:32:04.189548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:32:04.189572 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:32:04.189597 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:32:04.189625 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:32:04.189651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:32:04.189676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:32:04.189703 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:32:04.189770 systemd-journald[1451]: Collecting audit messages is disabled. Jan 13 21:32:04.189818 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:32:04.189842 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:32:04.189870 systemd-journald[1451]: Journal started Jan 13 21:32:04.189915 systemd-journald[1451]: Runtime Journal (/run/log/journal/ec2df83abf08a95992e85ad3bd33fc0c) is 4.8M, max 38.6M, 33.7M free. Jan 13 21:32:04.194474 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:32:03.543187 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:32:03.595462 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 21:32:03.595880 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:32:04.206073 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:32:04.216321 systemd[1]: Started systemd-journald.service - Journal Service. 
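The journald line above reports the runtime journal at 4.8M used against a 38.6M cap with 33.7M free; the three rounded figures are self-consistent, as a one-line check shows.

    # Figures from the "Runtime Journal" line, in MiB, as rounded by journald:
    used, cap, free = 4.8, 38.6, 33.7
    print(used + free)   # 38.5, i.e. the 38.6M cap up to 0.1M rounding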
Jan 13 21:32:04.225627 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:32:04.231784 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:32:04.234546 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:32:04.236973 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:32:04.304350 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:32:04.319598 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:32:04.321230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:32:04.321404 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:32:04.326457 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:32:04.363990 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:32:04.377120 kernel: ACPI: bus type drm_connector registered Jan 13 21:32:04.369272 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:32:04.370624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:04.378910 systemd-tmpfiles[1475]: ACLs are not supported, ignoring. Jan 13 21:32:04.378943 systemd-tmpfiles[1475]: ACLs are not supported, ignoring. Jan 13 21:32:04.382271 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:32:04.402282 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:32:04.406638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:32:04.409133 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:32:04.432713 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:32:04.446598 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:32:04.447118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:32:04.450302 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:32:04.452684 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:32:04.456108 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:32:04.475300 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:32:04.479991 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:32:04.482550 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:32:04.499510 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:32:04.513869 systemd-journald[1451]: Time spent on flushing to /var/log/journal/ec2df83abf08a95992e85ad3bd33fc0c is 107.710ms for 972 entries. Jan 13 21:32:04.513869 systemd-journald[1451]: System Journal (/var/log/journal/ec2df83abf08a95992e85ad3bd33fc0c) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:32:04.660680 systemd-journald[1451]: Received client request to flush runtime journal. 
Jan 13 21:32:04.660775 kernel: loop0: detected capacity change from 0 to 211296 Jan 13 21:32:04.660818 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:32:04.660850 kernel: loop1: detected capacity change from 0 to 140768 Jan 13 21:32:04.521250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:32:04.533554 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:32:04.580672 udevadm[1513]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:32:04.642791 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:32:04.655289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:32:04.672167 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:32:04.680390 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:32:04.699997 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:32:04.755133 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Jan 13 21:32:04.755562 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Jan 13 21:32:04.765206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:32:04.828069 kernel: loop2: detected capacity change from 0 to 142488 Jan 13 21:32:04.908239 kernel: loop3: detected capacity change from 0 to 61336 Jan 13 21:32:05.028372 kernel: loop4: detected capacity change from 0 to 211296 Jan 13 21:32:05.059076 kernel: loop5: detected capacity change from 0 to 140768 Jan 13 21:32:05.086074 kernel: loop6: detected capacity change from 0 to 142488 Jan 13 21:32:05.130071 kernel: loop7: detected capacity change from 0 to 61336 Jan 13 21:32:05.142324 (sd-merge)[1526]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 21:32:05.143375 (sd-merge)[1526]: Merged extensions into '/usr'. Jan 13 21:32:05.151013 systemd[1]: Reloading requested from client PID 1503 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:32:05.151185 systemd[1]: Reloading... Jan 13 21:32:05.263109 zram_generator::config[1552]: No configuration found. Jan 13 21:32:05.579961 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:05.725337 systemd[1]: Reloading finished in 573 ms. Jan 13 21:32:05.756411 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:32:05.769315 systemd[1]: Starting ensure-sysext.service... Jan 13 21:32:05.772219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:32:05.792105 systemd[1]: Reloading requested from client PID 1600 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:32:05.792140 systemd[1]: Reloading... Jan 13 21:32:05.873592 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:32:05.876295 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:32:05.882028 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jan 13 21:32:05.884760 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jan 13 21:32:05.886711 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jan 13 21:32:05.903719 systemd-tmpfiles[1601]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:32:05.905224 systemd-tmpfiles[1601]: Skipping /boot Jan 13 21:32:05.941067 zram_generator::config[1631]: No configuration found. Jan 13 21:32:05.943676 systemd-tmpfiles[1601]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:32:05.943867 systemd-tmpfiles[1601]: Skipping /boot Jan 13 21:32:05.994733 ldconfig[1499]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:32:06.158292 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:06.228111 systemd[1]: Reloading finished in 435 ms. Jan 13 21:32:06.246139 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:32:06.247840 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:32:06.254643 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:32:06.270464 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:32:06.287291 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:32:06.298259 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:32:06.309299 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:32:06.322989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:32:06.334618 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:32:06.349930 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:06.350658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:06.356366 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:32:06.360471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:32:06.370984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:32:06.374066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:06.374755 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:06.390413 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:32:06.393699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:06.393970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:06.395257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 21:32:06.395452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:06.410736 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:06.411689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:06.423492 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:32:06.424819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:06.425275 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:32:06.426673 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:06.427074 systemd-udevd[1691]: Using default interface naming scheme 'v255'. Jan 13 21:32:06.429798 systemd[1]: Finished ensure-sysext.service. Jan 13 21:32:06.456884 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:32:06.483657 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:32:06.484526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:32:06.489353 augenrules[1709]: No rules Jan 13 21:32:06.497850 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:32:06.499579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:32:06.499784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:32:06.503214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:32:06.503796 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:32:06.506380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:32:06.509661 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:32:06.512658 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:32:06.516714 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:32:06.535634 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:32:06.537553 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:32:06.556232 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:32:06.571238 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:32:06.572931 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:32:06.594152 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:32:06.620900 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:32:06.636103 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:32:06.743788 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 13 21:32:06.745915 (udev-worker)[1732]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:32:06.803128 systemd-networkd[1727]: lo: Link UP Jan 13 21:32:06.803139 systemd-networkd[1727]: lo: Gained carrier Jan 13 21:32:06.806989 systemd-networkd[1727]: Enumeration completed Jan 13 21:32:06.807148 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:32:06.812592 systemd-networkd[1727]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:06.816384 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:32:06.820604 systemd-networkd[1727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:32:06.832107 systemd-networkd[1727]: eth0: Link UP Jan 13 21:32:06.833430 systemd-networkd[1727]: eth0: Gained carrier Jan 13 21:32:06.833691 systemd-networkd[1727]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:06.844689 systemd-networkd[1727]: eth0: DHCPv4 address 172.31.19.228/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:32:06.846502 systemd-resolved[1690]: Positive Trust Anchors: Jan 13 21:32:06.846523 systemd-resolved[1690]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:32:06.846671 systemd-resolved[1690]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:32:06.879000 systemd-resolved[1690]: Defaulting to hostname 'linux'. Jan 13 21:32:06.883551 systemd-networkd[1727]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:06.885187 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:32:06.886492 systemd[1]: Reached target network.target - Network. Jan 13 21:32:06.887455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:32:06.907057 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:32:06.917058 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:32:06.922074 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 13 21:32:06.936057 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1723) Jan 13 21:32:06.937143 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 21:32:06.948056 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 13 21:32:06.986919 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 13 21:32:07.173744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:32:07.187480 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:32:07.231571 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
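The DHCPv4 line grants eth0 the address 172.31.19.228/20 with gateway 172.31.16.1, acquired from the gateway itself. A quick check with Python's ipaddress module confirms the gateway sits inside the same /20 and is its first usable host, the usual AWS VPC convention:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.19.228/20")  # from the DHCPv4 line
    gateway = ipaddress.ip_address("172.31.16.1")
    print(iface.network)                           # 172.31.16.0/20
    print(gateway in iface.network)                # True
    print(gateway == next(iface.network.hosts()))  # True: first usable host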
Jan 13 21:32:07.234166 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:32:07.257578 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:32:07.266750 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:32:07.289817 lvm[1843]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:32:07.298854 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:32:07.325781 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:32:07.327978 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:32:07.335306 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:32:07.345058 lvm[1849]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:32:07.371365 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:32:07.468138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:07.469687 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:32:07.472026 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:32:07.473906 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:32:07.475370 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:32:07.476755 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:32:07.479462 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:32:07.480854 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:32:07.480971 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:32:07.481986 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:32:07.483747 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:32:07.486769 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:32:07.491969 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:32:07.493785 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:32:07.495119 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:32:07.496375 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:32:07.498139 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:32:07.498171 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:32:07.507182 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:32:07.510405 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:32:07.515849 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:32:07.534671 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:32:07.545710 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 13 21:32:07.546955 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:32:07.552209 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:32:07.564062 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:32:07.570406 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:32:07.574201 jq[1858]: false Jan 13 21:32:07.583186 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:32:07.588383 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:32:07.592202 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:32:07.614401 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:32:07.617276 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:32:07.618367 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:32:07.630295 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:32:07.644770 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:32:07.657929 extend-filesystems[1859]: Found loop4 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found loop5 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found loop6 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found loop7 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found nvme0n1 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found nvme0n1p1 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found nvme0n1p2 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found nvme0n1p3 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found usr Jan 13 21:32:07.657929 extend-filesystems[1859]: Found nvme0n1p4 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found nvme0n1p6 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found nvme0n1p7 Jan 13 21:32:07.657929 extend-filesystems[1859]: Found nvme0n1p9 Jan 13 21:32:07.657929 extend-filesystems[1859]: Checking size of /dev/nvme0n1p9 Jan 13 21:32:07.797345 coreos-metadata[1856]: Jan 13 21:32:07.768 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:32:07.797345 coreos-metadata[1856]: Jan 13 21:32:07.782 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 21:32:07.797345 coreos-metadata[1856]: Jan 13 21:32:07.789 INFO Fetch successful Jan 13 21:32:07.797345 coreos-metadata[1856]: Jan 13 21:32:07.789 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 21:32:07.775829 dbus-daemon[1857]: [system] SELinux support is enabled Jan 13 21:32:07.660952 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
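The coreos-metadata entries walk the versioned 2021-01-03 metadata tree using a token obtained as before, and treat a 404 (as seen for the ipv6 leaf on this instance) as an expected miss rather than an error. A small Python sketch of the same walk; the paths are the ones in the log, while the token helper mirrors the earlier PUT and its TTL is an arbitrary choice.

    import urllib.request
    from urllib.error import HTTPError

    IMDS = "http://169.254.169.254"

    def imds_token(ttl="300"):   # TTL is our choice, not from the log
        req = urllib.request.Request(
            IMDS + "/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": ttl})
        return urllib.request.urlopen(req, timeout=2).read().decode()

    token = imds_token()
    for leaf in ("instance-id", "instance-type", "local-ipv4",
                 "public-ipv4", "ipv6"):
        req = urllib.request.Request(
            IMDS + "/2021-01-03/meta-data/" + leaf,
            headers={"X-aws-ec2-metadata-token": token})
        try:
            print(leaf, urllib.request.urlopen(req, timeout=2).read().decode())
        except HTTPError as e:
            print(leaf, "fetch failed with", e.code)   # e.g. 404 for ipv6 here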
Jan 13 21:32:07.812707 coreos-metadata[1856]: Jan 13 21:32:07.799 INFO Fetch successful Jan 13 21:32:07.812707 coreos-metadata[1856]: Jan 13 21:32:07.799 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 21:32:07.812707 coreos-metadata[1856]: Jan 13 21:32:07.812 INFO Fetch successful Jan 13 21:32:07.812707 coreos-metadata[1856]: Jan 13 21:32:07.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 21:32:07.786093 dbus-daemon[1857]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1727 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:32:07.661285 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:32:07.814679 jq[1875]: true Jan 13 21:32:07.711318 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:32:07.837749 tar[1879]: linux-amd64/helm Jan 13 21:32:07.869620 coreos-metadata[1856]: Jan 13 21:32:07.835 INFO Fetch successful Jan 13 21:32:07.869620 coreos-metadata[1856]: Jan 13 21:32:07.835 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 21:32:07.869620 coreos-metadata[1856]: Jan 13 21:32:07.869 INFO Fetch failed with 404: resource not found Jan 13 21:32:07.869620 coreos-metadata[1856]: Jan 13 21:32:07.869 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 21:32:07.820391 dbus-daemon[1857]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:32:07.765907 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:32:07.869959 extend-filesystems[1859]: Resized partition /dev/nvme0n1p9 Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: ---------------------------------------------------- Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: corporation. Support and training for ntp-4 are Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: available at https://www.nwtime.org/support Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: ---------------------------------------------------- Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: proto: precision = 0.066 usec (-24) Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: basedate set to 2025-01-01 Jan 13 21:32:07.880205 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: gps base set to 2025-01-05 (week 2348) Jan 13 21:32:07.865323 ntpd[1861]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:32:07.766721 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 21:32:07.887511 extend-filesystems[1900]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:32:07.911480 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 21:32:07.911519 coreos-metadata[1856]: Jan 13 21:32:07.880 INFO Fetch successful Jan 13 21:32:07.911519 coreos-metadata[1856]: Jan 13 21:32:07.880 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 21:32:07.911519 coreos-metadata[1856]: Jan 13 21:32:07.891 INFO Fetch successful Jan 13 21:32:07.911519 coreos-metadata[1856]: Jan 13 21:32:07.891 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 21:32:07.911519 coreos-metadata[1856]: Jan 13 21:32:07.894 INFO Fetch successful Jan 13 21:32:07.911519 coreos-metadata[1856]: Jan 13 21:32:07.894 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 21:32:07.911519 coreos-metadata[1856]: Jan 13 21:32:07.894 INFO Fetch successful Jan 13 21:32:07.911519 coreos-metadata[1856]: Jan 13 21:32:07.894 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 21:32:07.911519 coreos-metadata[1856]: Jan 13 21:32:07.896 INFO Fetch successful Jan 13 21:32:07.865351 ntpd[1861]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:32:07.776426 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:32:07.865362 ntpd[1861]: ---------------------------------------------------- Jan 13 21:32:07.809371 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:32:07.865372 ntpd[1861]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:32:07.809419 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:32:07.865382 ntpd[1861]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:32:07.811441 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:32:07.865393 ntpd[1861]: corporation. Support and training for ntp-4 are Jan 13 21:32:07.811468 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:32:07.865403 ntpd[1861]: available at https://www.nwtime.org/support Jan 13 21:32:07.871602 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
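The coreos-metadata fetches above follow the EC2 IMDSv2 flow: a PUT to the token endpoint, then GETs against the 2021-01-03 metadata tree with the token attached. A minimal stdlib-only sketch of that flow, under the assumption of running on an EC2 instance (the TTL value and the stand-alone script are illustrative, not part of this boot log):

```python
# Sketch of the IMDSv2 flow coreos-metadata logs above: PUT a session
# token, then GET metadata paths with the token header attached.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):  # TTL chosen for illustration
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    tok = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4"):
        print(path, "=", imds_get(path, tok))
```

Note the agent's 404 on the ipv6 path above is expected on instances without an IPv6 address; the other paths return 200, which is why each is logged "Fetch successful".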
Jan 13 21:32:07.865412 ntpd[1861]: ---------------------------------------------------- Jan 13 21:32:07.872650 ntpd[1861]: proto: precision = 0.066 usec (-24) Jan 13 21:32:07.876619 ntpd[1861]: basedate set to 2025-01-01 Jan 13 21:32:07.876641 ntpd[1861]: gps base set to 2025-01-05 (week 2348) Jan 13 21:32:07.949068 jq[1896]: true Jan 13 21:32:07.947244 systemd-logind[1869]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:32:07.949655 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:32:07.949655 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:32:07.949655 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:32:07.949655 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: Listen normally on 3 eth0 172.31.19.228:123 Jan 13 21:32:07.949655 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: Listen normally on 4 lo [::1]:123 Jan 13 21:32:07.949655 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: bind(21) AF_INET6 fe80::4ef:4aff:fe97:b9a9%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:32:07.949655 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: unable to create socket on eth0 (5) for fe80::4ef:4aff:fe97:b9a9%2#123 Jan 13 21:32:07.949655 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: failed to init interface for address fe80::4ef:4aff:fe97:b9a9%2 Jan 13 21:32:07.949655 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: Listening on routing socket on fd #21 for interface updates Jan 13 21:32:07.949966 update_engine[1872]: I20250113 21:32:07.906847 1872 main.cc:92] Flatcar Update Engine starting Jan 13 21:32:07.936882 ntpd[1861]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:32:07.947271 systemd-logind[1869]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 21:32:07.936955 ntpd[1861]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:32:07.947297 systemd-logind[1869]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:32:07.937414 ntpd[1861]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:32:07.952286 systemd-logind[1869]: New seat seat0. Jan 13 21:32:07.937463 ntpd[1861]: Listen normally on 3 eth0 172.31.19.228:123 Jan 13 21:32:07.953830 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:32:07.937508 ntpd[1861]: Listen normally on 4 lo [::1]:123 Jan 13 21:32:07.937559 ntpd[1861]: bind(21) AF_INET6 fe80::4ef:4aff:fe97:b9a9%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:32:07.937584 ntpd[1861]: unable to create socket on eth0 (5) for fe80::4ef:4aff:fe97:b9a9%2#123 Jan 13 21:32:07.937669 ntpd[1861]: failed to init interface for address fe80::4ef:4aff:fe97:b9a9%2 Jan 13 21:32:07.937755 ntpd[1861]: Listening on routing socket on fd #21 for interface updates Jan 13 21:32:07.961556 (ntainerd)[1897]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:32:07.962154 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:32:07.962538 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:32:07.974670 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:32:07.974670 ntpd[1861]: 13 Jan 21:32:07 ntpd[1861]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:32:07.974118 ntpd[1861]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:32:07.973554 systemd[1]: Started update-engine.service - Update Engine. 
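ntpd's "precision = 0.066 usec (-24)" line pairs the measured clock-reading precision with its rounded base-2 logarithm: an exponent of -24 corresponds to about 2^-24 seconds, or roughly 0.06 microseconds, which matches the measured 0.066 usec. A quick check of that arithmetic:

```python
# ntpd reports precision as a power-of-two exponent; -24 => 2**-24 s,
# about 0.060 usec, consistent with "precision = 0.066 usec (-24)".
exponent = -24
seconds = 2.0 ** exponent
print(f"2**{exponent} s = {seconds * 1e6:.3f} usec")  # ~0.060 usec
```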
Jan 13 21:32:07.974149 ntpd[1861]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:32:07.989803 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:32:08.003181 update_engine[1872]: I20250113 21:32:08.002871 1872 update_check_scheduler.cc:74] Next update check in 7m2s Jan 13 21:32:08.047950 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:32:08.050351 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:32:08.089108 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 21:32:08.174548 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:32:08.182713 extend-filesystems[1900]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 21:32:08.182713 extend-filesystems[1900]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:32:08.182713 extend-filesystems[1900]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 21:32:08.195143 extend-filesystems[1859]: Resized filesystem in /dev/nvme0n1p9 Jan 13 21:32:08.188921 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:32:08.189573 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:32:08.248075 bash[1938]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:32:08.246533 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:32:08.263929 systemd[1]: Starting sshkeys.service... Jan 13 21:32:08.297076 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1732) Jan 13 21:32:08.299313 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:32:08.309520 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:32:08.411551 dbus-daemon[1857]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:32:08.424657 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:32:08.433150 dbus-daemon[1857]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1899 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:32:08.445499 systemd[1]: Starting polkit.service - Authorization Manager... 
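The extend-filesystems sequence above is an online ext4 grow: the root partition is enlarged, then resize2fs resizes the mounted filesystem from 553472 to 1489915 4k blocks, with the kernel confirming the new size. A hedged sketch of the equivalent manual step (device path taken from the log; running this requires root on a system whose partition has actually been grown):

```python
# Sketch of the online grow extend-filesystems performs: resize2fs on a
# *mounted* ext4 filesystem grows it up to the enlarged partition size.
import subprocess

DEVICE = "/dev/nvme0n1p9"  # device name from the log above

def grow_online(device=DEVICE):
    # With no explicit size argument, resize2fs expands the filesystem
    # to fill the underlying partition; ext4 supports this while mounted.
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    grow_online()
```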
Jan 13 21:32:08.513668 polkitd[1983]: Started polkitd version 121 Jan 13 21:32:08.537753 locksmithd[1914]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:32:08.544811 polkitd[1983]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:32:08.546406 polkitd[1983]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:32:08.555963 polkitd[1983]: Finished loading, compiling and executing 2 rules Jan 13 21:32:08.557268 dbus-daemon[1857]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:32:08.560849 coreos-metadata[1957]: Jan 13 21:32:08.560 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:32:08.560849 coreos-metadata[1957]: Jan 13 21:32:08.560 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 21:32:08.560849 coreos-metadata[1957]: Jan 13 21:32:08.560 INFO Fetch successful Jan 13 21:32:08.560849 coreos-metadata[1957]: Jan 13 21:32:08.560 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:32:08.557476 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:32:08.563112 polkitd[1983]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:32:08.564213 coreos-metadata[1957]: Jan 13 21:32:08.561 INFO Fetch successful Jan 13 21:32:08.600404 unknown[1957]: wrote ssh authorized keys file for user: core Jan 13 21:32:08.606427 systemd-hostnamed[1899]: Hostname set to (transient) Jan 13 21:32:08.606635 systemd-resolved[1690]: System hostname changed to 'ip-172-31-19-228'. Jan 13 21:32:08.691700 update-ssh-keys[2022]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:32:08.682956 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:32:08.696537 systemd[1]: Finished sshkeys.service. Jan 13 21:32:08.836753 systemd-networkd[1727]: eth0: Gained IPv6LL Jan 13 21:32:08.865949 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:32:08.869427 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:32:08.896631 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 21:32:08.912327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:08.932595 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:32:09.036417 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:32:09.140764 sshd_keygen[1874]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:32:09.160717 amazon-ssm-agent[2059]: Initializing new seelog logger Jan 13 21:32:09.162116 amazon-ssm-agent[2059]: New Seelog Logger Creation Complete Jan 13 21:32:09.162116 amazon-ssm-agent[2059]: 2025/01/13 21:32:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:09.162116 amazon-ssm-agent[2059]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:09.162116 amazon-ssm-agent[2059]: 2025/01/13 21:32:09 processing appconfig overrides Jan 13 21:32:09.162682 amazon-ssm-agent[2059]: 2025/01/13 21:32:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:09.162761 amazon-ssm-agent[2059]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 13 21:32:09.162913 amazon-ssm-agent[2059]: 2025/01/13 21:32:09 processing appconfig overrides Jan 13 21:32:09.163375 amazon-ssm-agent[2059]: 2025/01/13 21:32:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:09.163447 amazon-ssm-agent[2059]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:09.163644 amazon-ssm-agent[2059]: 2025/01/13 21:32:09 processing appconfig overrides Jan 13 21:32:09.164165 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO Proxy environment variables: Jan 13 21:32:09.169362 amazon-ssm-agent[2059]: 2025/01/13 21:32:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:09.172064 amazon-ssm-agent[2059]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:09.172064 amazon-ssm-agent[2059]: 2025/01/13 21:32:09 processing appconfig overrides Jan 13 21:32:09.211991 containerd[1897]: time="2025-01-13T21:32:09.211888055Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:32:09.237898 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:32:09.250435 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:32:09.254690 systemd[1]: Started sshd@0-172.31.19.228:22-147.75.109.163:60592.service - OpenSSH per-connection server daemon (147.75.109.163:60592). Jan 13 21:32:09.264292 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO https_proxy: Jan 13 21:32:09.290283 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:32:09.291502 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:32:09.304569 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:32:09.343493 containerd[1897]: time="2025-01-13T21:32:09.343425555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:09.353904 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:32:09.357525 containerd[1897]: time="2025-01-13T21:32:09.357469454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:09.357640 containerd[1897]: time="2025-01-13T21:32:09.357526489Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:32:09.357640 containerd[1897]: time="2025-01-13T21:32:09.357550875Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:32:09.358349 containerd[1897]: time="2025-01-13T21:32:09.357745776Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:32:09.358349 containerd[1897]: time="2025-01-13T21:32:09.357773703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:09.358349 containerd[1897]: time="2025-01-13T21:32:09.357949177Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:09.358349 containerd[1897]: time="2025-01-13T21:32:09.357974831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:32:09.364392 containerd[1897]: time="2025-01-13T21:32:09.363533936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:09.364392 containerd[1897]: time="2025-01-13T21:32:09.363584952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:09.364392 containerd[1897]: time="2025-01-13T21:32:09.363611880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:09.364392 containerd[1897]: time="2025-01-13T21:32:09.363629237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:09.364392 containerd[1897]: time="2025-01-13T21:32:09.363776907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:09.364392 containerd[1897]: time="2025-01-13T21:32:09.364112661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:09.366250 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:32:09.374572 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:32:09.376408 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:32:09.378375 containerd[1897]: time="2025-01-13T21:32:09.378126737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:09.378375 containerd[1897]: time="2025-01-13T21:32:09.378274017Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:32:09.380290 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO http_proxy: Jan 13 21:32:09.381065 containerd[1897]: time="2025-01-13T21:32:09.380694296Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:32:09.381065 containerd[1897]: time="2025-01-13T21:32:09.380818466Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:32:09.404800 containerd[1897]: time="2025-01-13T21:32:09.404613430Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:32:09.404800 containerd[1897]: time="2025-01-13T21:32:09.404692392Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:32:09.404800 containerd[1897]: time="2025-01-13T21:32:09.404715369Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:32:09.404800 containerd[1897]: time="2025-01-13T21:32:09.404759594Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:32:09.404800 containerd[1897]: time="2025-01-13T21:32:09.404782439Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 13 21:32:09.405084 containerd[1897]: time="2025-01-13T21:32:09.404983520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407073197Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407310476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407336067Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407529283Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407605658Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407631229Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407652765Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407688400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407712888Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407732644Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407766279Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407786678Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407832663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.407934 containerd[1897]: time="2025-01-13T21:32:09.407856549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.407886071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.407921534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.407939654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.407958170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.407986816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408010080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408028554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408077133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408096692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408132690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408153591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408179287Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408229047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408249882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.408544 containerd[1897]: time="2025-01-13T21:32:09.408282724Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:32:09.409109 containerd[1897]: time="2025-01-13T21:32:09.408428908Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:32:09.409109 containerd[1897]: time="2025-01-13T21:32:09.408472638Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:32:09.409109 containerd[1897]: time="2025-01-13T21:32:09.408508438Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:32:09.409109 containerd[1897]: time="2025-01-13T21:32:09.408528614Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:32:09.409109 containerd[1897]: time="2025-01-13T21:32:09.408544106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:32:09.409109 containerd[1897]: time="2025-01-13T21:32:09.408586741Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:32:09.409109 containerd[1897]: time="2025-01-13T21:32:09.408603693Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:32:09.409109 containerd[1897]: time="2025-01-13T21:32:09.408620063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:32:09.409406 containerd[1897]: time="2025-01-13T21:32:09.409133698Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:32:09.409406 containerd[1897]: time="2025-01-13T21:32:09.409246320Z" level=info msg="Connect containerd service" Jan 13 21:32:09.409406 containerd[1897]: time="2025-01-13T21:32:09.409311280Z" level=info msg="using legacy CRI server" Jan 13 21:32:09.409406 containerd[1897]: time="2025-01-13T21:32:09.409323124Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:32:09.409727 containerd[1897]: time="2025-01-13T21:32:09.409497330Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:32:09.418289 containerd[1897]: time="2025-01-13T21:32:09.410639360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:32:09.418289 
containerd[1897]: time="2025-01-13T21:32:09.410939083Z" level=info msg="Start subscribing containerd event" Jan 13 21:32:09.418289 containerd[1897]: time="2025-01-13T21:32:09.411001318Z" level=info msg="Start recovering state" Jan 13 21:32:09.418289 containerd[1897]: time="2025-01-13T21:32:09.411096875Z" level=info msg="Start event monitor" Jan 13 21:32:09.418289 containerd[1897]: time="2025-01-13T21:32:09.411117682Z" level=info msg="Start snapshots syncer" Jan 13 21:32:09.418289 containerd[1897]: time="2025-01-13T21:32:09.411132084Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:32:09.418289 containerd[1897]: time="2025-01-13T21:32:09.411144933Z" level=info msg="Start streaming server" Jan 13 21:32:09.418289 containerd[1897]: time="2025-01-13T21:32:09.411523004Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:32:09.418289 containerd[1897]: time="2025-01-13T21:32:09.411586054Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:32:09.418289 containerd[1897]: time="2025-01-13T21:32:09.411914874Z" level=info msg="containerd successfully booted in 0.203222s" Jan 13 21:32:09.411782 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:32:09.478551 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO no_proxy: Jan 13 21:32:09.576828 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO Checking if agent identity type OnPrem can be assumed Jan 13 21:32:09.579667 sshd[2087]: Accepted publickey for core from 147.75.109.163 port 60592 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:09.587883 sshd[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:09.617204 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:32:09.627629 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:32:09.646494 systemd-logind[1869]: New session 1 of user core. Jan 13 21:32:09.672444 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:32:09.677202 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO Checking if agent identity type EC2 can be assumed Jan 13 21:32:09.686726 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:32:09.708192 (systemd)[2101]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:32:09.760370 tar[1879]: linux-amd64/LICENSE Jan 13 21:32:09.763595 tar[1879]: linux-amd64/README.md Jan 13 21:32:09.778134 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO Agent will take identity from EC2 Jan 13 21:32:09.794179 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:32:09.879381 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:32:09.982654 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:32:10.002666 systemd[2101]: Queued start job for default target default.target. Jan 13 21:32:10.008818 systemd[2101]: Created slice app.slice - User Application Slice. Jan 13 21:32:10.008866 systemd[2101]: Reached target paths.target - Paths. Jan 13 21:32:10.008886 systemd[2101]: Reached target timers.target - Timers. Jan 13 21:32:10.024232 systemd[2101]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:32:10.059450 systemd[2101]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jan 13 21:32:10.060705 systemd[2101]: Reached target sockets.target - Sockets. Jan 13 21:32:10.060738 systemd[2101]: Reached target basic.target - Basic System. Jan 13 21:32:10.060803 systemd[2101]: Reached target default.target - Main User Target. Jan 13 21:32:10.060846 systemd[2101]: Startup finished in 336ms. Jan 13 21:32:10.061019 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:32:10.071352 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:32:10.080742 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:32:10.180214 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 21:32:10.239053 systemd[1]: Started sshd@1-172.31.19.228:22-147.75.109.163:57058.service - OpenSSH per-connection server daemon (147.75.109.163:57058). Jan 13 21:32:10.250090 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 21:32:10.250090 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 21:32:10.250090 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 21:32:10.250090 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO [Registrar] Starting registrar module Jan 13 21:32:10.250090 amazon-ssm-agent[2059]: 2025-01-13 21:32:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 21:32:10.250090 amazon-ssm-agent[2059]: 2025-01-13 21:32:10 INFO [EC2Identity] EC2 registration was successful. Jan 13 21:32:10.250090 amazon-ssm-agent[2059]: 2025-01-13 21:32:10 INFO [CredentialRefresher] credentialRefresher has started Jan 13 21:32:10.250090 amazon-ssm-agent[2059]: 2025-01-13 21:32:10 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 21:32:10.250090 amazon-ssm-agent[2059]: 2025-01-13 21:32:10 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 21:32:10.280683 amazon-ssm-agent[2059]: 2025-01-13 21:32:10 INFO [CredentialRefresher] Next credential rotation will be in 32.4499931842 minutes Jan 13 21:32:10.400240 sshd[2116]: Accepted publickey for core from 147.75.109.163 port 57058 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:10.403105 sshd[2116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:10.426635 systemd-logind[1869]: New session 2 of user core. Jan 13 21:32:10.432332 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:32:10.585392 sshd[2116]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:10.590883 systemd[1]: sshd@1-172.31.19.228:22-147.75.109.163:57058.service: Deactivated successfully. Jan 13 21:32:10.598494 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:32:10.601507 systemd-logind[1869]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:32:10.603528 systemd-logind[1869]: Removed session 2. Jan 13 21:32:10.623892 systemd[1]: Started sshd@2-172.31.19.228:22-147.75.109.163:57062.service - OpenSSH per-connection server daemon (147.75.109.163:57062). Jan 13 21:32:10.708724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:10.711420 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:32:10.714213 systemd[1]: Startup finished in 835ms (kernel) + 9.097s (initrd) + 8.901s (userspace) = 18.835s. 
Jan 13 21:32:10.788809 sshd[2123]: Accepted publickey for core from 147.75.109.163 port 57062 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:10.790130 sshd[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:10.797421 systemd-logind[1869]: New session 3 of user core. Jan 13 21:32:10.805254 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:32:10.854425 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:32:10.866481 ntpd[1861]: Listen normally on 6 eth0 [fe80::4ef:4aff:fe97:b9a9%2]:123 Jan 13 21:32:10.866807 ntpd[1861]: 13 Jan 21:32:10 ntpd[1861]: Listen normally on 6 eth0 [fe80::4ef:4aff:fe97:b9a9%2]:123 Jan 13 21:32:10.929573 sshd[2123]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:10.935812 systemd-logind[1869]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:32:10.936840 systemd[1]: sshd@2-172.31.19.228:22-147.75.109.163:57062.service: Deactivated successfully. Jan 13 21:32:10.939758 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:32:10.941565 systemd-logind[1869]: Removed session 3. Jan 13 21:32:11.299762 amazon-ssm-agent[2059]: 2025-01-13 21:32:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 21:32:11.399918 amazon-ssm-agent[2059]: 2025-01-13 21:32:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2144) started Jan 13 21:32:11.505265 amazon-ssm-agent[2059]: 2025-01-13 21:32:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 21:32:11.699927 kubelet[2130]: E0113 21:32:11.699736 2130 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:32:11.702738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:32:11.702943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:32:11.703345 systemd[1]: kubelet.service: Consumed 1.123s CPU time. Jan 13 21:32:15.114360 systemd-resolved[1690]: Clock change detected. Flushing caches. Jan 13 21:32:21.214867 systemd[1]: Started sshd@3-172.31.19.228:22-147.75.109.163:39368.service - OpenSSH per-connection server daemon (147.75.109.163:39368). Jan 13 21:32:21.371335 sshd[2159]: Accepted publickey for core from 147.75.109.163 port 39368 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:21.372216 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:21.380232 systemd-logind[1869]: New session 4 of user core. Jan 13 21:32:21.391721 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:32:21.519281 sshd[2159]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:21.525903 systemd[1]: sshd@3-172.31.19.228:22-147.75.109.163:39368.service: Deactivated successfully. Jan 13 21:32:21.532837 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:32:21.537965 systemd-logind[1869]: Session 4 logged out. Waiting for processes to exit. 
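The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet (it is ordinarily written during cluster bootstrap, e.g. by kubeadm), and systemd's restart policy keeps rescheduling the unit, hence the rising restart counter in the lines that follow. Purely as an illustration of what the boot flow is effectively waiting for (this polling script is an assumption, not something Flatcar runs):

```python
# Illustration only: until something writes /var/lib/kubelet/config.yaml,
# kubelet exits with the "no such file or directory" error logged above
# and systemd keeps restarting it on a timer.
import os
import time

CONFIG = "/var/lib/kubelet/config.yaml"

def wait_for_kubelet_config(path=CONFIG, interval=5.0, attempts=60):
    for attempt in range(1, attempts + 1):
        if os.path.exists(path):
            print(f"found {path} on attempt {attempt}")
            return True
        time.sleep(interval)
    return False
```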
Jan 13 21:32:21.562706 systemd[1]: Started sshd@4-172.31.19.228:22-147.75.109.163:39372.service - OpenSSH per-connection server daemon (147.75.109.163:39372). Jan 13 21:32:21.565221 systemd-logind[1869]: Removed session 4. Jan 13 21:32:21.741204 sshd[2166]: Accepted publickey for core from 147.75.109.163 port 39372 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:21.742717 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:21.751948 systemd-logind[1869]: New session 5 of user core. Jan 13 21:32:21.755667 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:32:21.874965 sshd[2166]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:21.879778 systemd[1]: sshd@4-172.31.19.228:22-147.75.109.163:39372.service: Deactivated successfully. Jan 13 21:32:21.884099 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:32:21.888623 systemd-logind[1869]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:32:21.892587 systemd-logind[1869]: Removed session 5. Jan 13 21:32:21.921987 systemd[1]: Started sshd@5-172.31.19.228:22-147.75.109.163:39384.service - OpenSSH per-connection server daemon (147.75.109.163:39384). Jan 13 21:32:22.046305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:32:22.055817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:22.077508 sshd[2173]: Accepted publickey for core from 147.75.109.163 port 39384 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:22.078252 sshd[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:22.087515 systemd-logind[1869]: New session 6 of user core. Jan 13 21:32:22.094672 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:32:22.212684 sshd[2173]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:22.219517 systemd-logind[1869]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:32:22.220694 systemd[1]: sshd@5-172.31.19.228:22-147.75.109.163:39384.service: Deactivated successfully. Jan 13 21:32:22.222938 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:32:22.225332 systemd-logind[1869]: Removed session 6. Jan 13 21:32:22.251495 systemd[1]: Started sshd@6-172.31.19.228:22-147.75.109.163:39400.service - OpenSSH per-connection server daemon (147.75.109.163:39400). Jan 13 21:32:22.411235 sshd[2183]: Accepted publickey for core from 147.75.109.163 port 39400 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:22.411987 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:22.416248 systemd-logind[1869]: New session 7 of user core. Jan 13 21:32:22.421715 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:32:22.542038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:32:22.555932 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:32:22.555832 sudo[2186]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:32:22.556328 sudo[2186]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:32:22.571810 sudo[2186]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:22.597605 sshd[2183]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:22.604004 systemd[1]: sshd@6-172.31.19.228:22-147.75.109.163:39400.service: Deactivated successfully. Jan 13 21:32:22.604287 systemd-logind[1869]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:32:22.608417 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:32:22.613373 systemd-logind[1869]: Removed session 7. Jan 13 21:32:22.638000 systemd[1]: Started sshd@7-172.31.19.228:22-147.75.109.163:39414.service - OpenSSH per-connection server daemon (147.75.109.163:39414). Jan 13 21:32:22.655174 kubelet[2192]: E0113 21:32:22.655123 2192 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:32:22.660729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:32:22.660911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:32:22.804352 sshd[2203]: Accepted publickey for core from 147.75.109.163 port 39414 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:22.806130 sshd[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:22.810706 systemd-logind[1869]: New session 8 of user core. Jan 13 21:32:22.817646 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:32:22.913200 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:32:22.913623 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:32:22.917716 sudo[2208]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:22.923206 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:32:22.923708 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:32:22.957500 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:32:22.979757 auditctl[2211]: No rules Jan 13 21:32:22.981630 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:32:22.981884 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:32:22.991733 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:32:23.032482 augenrules[2229]: No rules Jan 13 21:32:23.034066 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:32:23.035378 sudo[2207]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:23.058516 sshd[2203]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:23.062659 systemd[1]: sshd@7-172.31.19.228:22-147.75.109.163:39414.service: Deactivated successfully. 
Jan 13 21:32:23.064673 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:32:23.065512 systemd-logind[1869]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:32:23.067284 systemd-logind[1869]: Removed session 8. Jan 13 21:32:23.094814 systemd[1]: Started sshd@8-172.31.19.228:22-147.75.109.163:39418.service - OpenSSH per-connection server daemon (147.75.109.163:39418). Jan 13 21:32:23.258062 sshd[2237]: Accepted publickey for core from 147.75.109.163 port 39418 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:23.259988 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:23.266403 systemd-logind[1869]: New session 9 of user core. Jan 13 21:32:23.273665 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:32:23.371177 sudo[2240]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:32:23.371591 sudo[2240]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:32:24.219808 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:32:24.222369 (dockerd)[2255]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:32:24.826815 dockerd[2255]: time="2025-01-13T21:32:24.826750585Z" level=info msg="Starting up" Jan 13 21:32:25.025827 dockerd[2255]: time="2025-01-13T21:32:25.025549684Z" level=info msg="Loading containers: start." Jan 13 21:32:25.222479 kernel: Initializing XFRM netlink socket Jan 13 21:32:25.261035 (udev-worker)[2278]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:32:25.333670 systemd-networkd[1727]: docker0: Link UP Jan 13 21:32:25.353973 dockerd[2255]: time="2025-01-13T21:32:25.353922931Z" level=info msg="Loading containers: done." Jan 13 21:32:25.408035 dockerd[2255]: time="2025-01-13T21:32:25.407976983Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:32:25.408245 dockerd[2255]: time="2025-01-13T21:32:25.408104707Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:32:25.408298 dockerd[2255]: time="2025-01-13T21:32:25.408244892Z" level=info msg="Daemon has completed initialization" Jan 13 21:32:25.454592 dockerd[2255]: time="2025-01-13T21:32:25.454172642Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:32:25.454424 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:32:26.832749 containerd[1897]: time="2025-01-13T21:32:26.832702460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:32:27.620102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822686831.mount: Deactivated successfully. 
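Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that unix socket, and plain HTTP works across it. The /_ping endpoint is part of the Docker Engine API; the raw-socket approach below is just a stdlib-only way to exercise it without a client library:

```python
# Speak raw HTTP to the Docker Engine API over the unix socket the
# daemon just announced. GET /_ping returns "OK" from a healthy daemon.
import socket

SOCK = "/run/docker.sock"

def docker_ping(path=SOCK):
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

if __name__ == "__main__":
    print(docker_ping())
```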
Jan 13 21:32:29.869869 containerd[1897]: time="2025-01-13T21:32:29.869770548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:29.871857 containerd[1897]: time="2025-01-13T21:32:29.871551727Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 21:32:29.874340 containerd[1897]: time="2025-01-13T21:32:29.873689242Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:29.879356 containerd[1897]: time="2025-01-13T21:32:29.879312755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:29.883078 containerd[1897]: time="2025-01-13T21:32:29.883031601Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.050230611s" Jan 13 21:32:29.883078 containerd[1897]: time="2025-01-13T21:32:29.883082504Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 21:32:29.921616 containerd[1897]: time="2025-01-13T21:32:29.921569673Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:32:32.408137 containerd[1897]: time="2025-01-13T21:32:32.408081622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:32.410351 containerd[1897]: time="2025-01-13T21:32:32.410137132Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 21:32:32.414468 containerd[1897]: time="2025-01-13T21:32:32.413012367Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:32.422111 containerd[1897]: time="2025-01-13T21:32:32.422057094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:32.423303 containerd[1897]: time="2025-01-13T21:32:32.423259362Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.501645451s" Jan 13 21:32:32.423409 containerd[1897]: time="2025-01-13T21:32:32.423308999Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
21:32:32.451183 containerd[1897]: time="2025-01-13T21:32:32.451138669Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:32:32.911293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:32:32.916738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:33.372896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:33.383606 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:32:33.484355 kubelet[2480]: E0113 21:32:33.484023 2480 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:32:33.489416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:32:33.489635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:32:34.103746 containerd[1897]: time="2025-01-13T21:32:34.103691928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:34.105471 containerd[1897]: time="2025-01-13T21:32:34.105244438Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 21:32:34.107601 containerd[1897]: time="2025-01-13T21:32:34.107543552Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:34.115137 containerd[1897]: time="2025-01-13T21:32:34.114401669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:34.116162 containerd[1897]: time="2025-01-13T21:32:34.116114659Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.66492597s" Jan 13 21:32:34.116278 containerd[1897]: time="2025-01-13T21:32:34.116166398Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 21:32:34.196047 containerd[1897]: time="2025-01-13T21:32:34.196002599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:32:35.540715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628740791.mount: Deactivated successfully. 
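The pull-completion lines report both bytes read and elapsed wall time, so effective throughput falls out directly. For the kube-apiserver pull above, 35,136,054 bytes in 3.050230611 s works out to roughly 11.5 MB/s:

```python
# Effective throughput of the kube-apiserver pull, computed from the
# logged figures ("size \"35136054\" in 3.050230611s").
size_bytes = 35_136_054
seconds = 3.050230611
print(f"{size_bytes / seconds / 1e6:.1f} MB/s")  # ~11.5 MB/s
```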
Jan 13 21:32:36.218149 containerd[1897]: time="2025-01-13T21:32:36.218089934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:36.220332 containerd[1897]: time="2025-01-13T21:32:36.219993448Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 21:32:36.222532 containerd[1897]: time="2025-01-13T21:32:36.222487072Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:36.225673 containerd[1897]: time="2025-01-13T21:32:36.225608263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:36.226615 containerd[1897]: time="2025-01-13T21:32:36.226457636Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.030396207s" Jan 13 21:32:36.226615 containerd[1897]: time="2025-01-13T21:32:36.226499797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 21:32:36.259868 containerd[1897]: time="2025-01-13T21:32:36.259831653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:32:36.864278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521975300.mount: Deactivated successfully. 
Jan 13 21:32:38.550755 containerd[1897]: time="2025-01-13T21:32:38.550687189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:38.553231 containerd[1897]: time="2025-01-13T21:32:38.553162061Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:32:38.556496 containerd[1897]: time="2025-01-13T21:32:38.554837490Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:38.564024 containerd[1897]: time="2025-01-13T21:32:38.563940153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:38.565897 containerd[1897]: time="2025-01-13T21:32:38.565512011Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.305567312s" Jan 13 21:32:38.565897 containerd[1897]: time="2025-01-13T21:32:38.565566201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:32:38.609154 containerd[1897]: time="2025-01-13T21:32:38.609110275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:32:38.905210 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 21:32:39.272978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3133691407.mount: Deactivated successfully. 
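The coredns v1.11.1 image pulled above is the DNS server kubeadm deploys for Kubernetes 1.29; it reads its Corefile from a ConfigMap in kube-system. A sketch of the stock shape (plugin list illustrative, not read from this cluster):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                fallthrough in-addr.arpa ip6.arpa
            }
            forward . /etc/resolv.conf
            cache 30
        }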
Jan 13 21:32:39.283372 containerd[1897]: time="2025-01-13T21:32:39.283317473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:39.284867 containerd[1897]: time="2025-01-13T21:32:39.284700422Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 21:32:39.287575 containerd[1897]: time="2025-01-13T21:32:39.286244734Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:39.289279 containerd[1897]: time="2025-01-13T21:32:39.289240815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:39.290994 containerd[1897]: time="2025-01-13T21:32:39.290951725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 681.800132ms" Jan 13 21:32:39.291161 containerd[1897]: time="2025-01-13T21:32:39.291139550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:32:39.318683 containerd[1897]: time="2025-01-13T21:32:39.318640903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:32:39.903358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029550043.mount: Deactivated successfully. Jan 13 21:32:42.752129 containerd[1897]: time="2025-01-13T21:32:42.752076765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:42.754103 containerd[1897]: time="2025-01-13T21:32:42.753901233Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 21:32:42.756176 containerd[1897]: time="2025-01-13T21:32:42.755622087Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:42.761227 containerd[1897]: time="2025-01-13T21:32:42.761180014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:42.762623 containerd[1897]: time="2025-01-13T21:32:42.762580782Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.443901222s" Jan 13 21:32:42.762787 containerd[1897]: time="2025-01-13T21:32:42.762766404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 21:32:43.509618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
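Taken together, the pulls in this stretch (kube-scheduler and kube-proxy v1.29.12, coredns v1.11.1, pause 3.9, etcd 3.5.10-0) are the component set kubeadm selects for a v1.29 control plane. A hedged sketch of the kubeadm configuration that would drive exactly these pulls:

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.29.12
    imageRepository: registry.k8s.io    # the registry every image above came from
    etcd:
      local:
        imageTag: 3.5.10-0              # matches the etcd image pulled above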
Jan 13 21:32:43.523835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:44.720691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:44.740238 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:32:44.859182 kubelet[2680]: E0113 21:32:44.859124 2680 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:32:44.864911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:32:44.865504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:32:46.798681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:46.806851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:46.840125 systemd[1]: Reloading requested from client PID 2695 ('systemctl') (unit session-9.scope)... Jan 13 21:32:46.840145 systemd[1]: Reloading... Jan 13 21:32:47.011662 zram_generator::config[2735]: No configuration found. Jan 13 21:32:47.221773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:47.380544 systemd[1]: Reloading finished in 539 ms. Jan 13 21:32:47.466082 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:32:47.466217 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:32:47.466660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:47.473921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:48.083013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:48.107942 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:32:48.201560 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:32:48.201560 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:32:48.201560 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
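All three deprecation warnings above point at the same remedy: move the flags into the kubelet config file. A sketch of the equivalent KubeletConfiguration fields; the endpoint value is assumed (the log shows only that the flag was set), and the plugin directory is taken from the Flexvolume message further down:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock      # assumed value
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

Note that --pod-infra-container-image gets a different warning than the other two: it has no config-file equivalent, and newer kubelets learn the sandbox image from the CRI runtime instead.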
Jan 13 21:32:48.201560 kubelet[2796]: I0113 21:32:48.201670 2796 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:32:48.759800 kubelet[2796]: I0113 21:32:48.759765 2796 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:32:48.759800 kubelet[2796]: I0113 21:32:48.759795 2796 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:32:48.760027 kubelet[2796]: I0113 21:32:48.760019 2796 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:32:48.792952 kubelet[2796]: E0113 21:32:48.792887 2796 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:48.793088 kubelet[2796]: I0113 21:32:48.793027 2796 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:32:48.821925 kubelet[2796]: I0113 21:32:48.821878 2796 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:32:48.824896 kubelet[2796]: I0113 21:32:48.824857 2796 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:32:48.826321 kubelet[2796]: I0113 21:32:48.826281 2796 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:32:48.826321 kubelet[2796]: I0113 21:32:48.826325 2796 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:32:48.826650 kubelet[2796]: I0113 21:32:48.826341 2796 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:32:48.826650 kubelet[2796]: I0113 21:32:48.826495 2796 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:48.826752 kubelet[2796]: I0113 21:32:48.826692 2796 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:32:48.826752 
kubelet[2796]: I0113 21:32:48.826713 2796 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:32:48.830149 kubelet[2796]: I0113 21:32:48.829403 2796 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:32:48.834532 kubelet[2796]: W0113 21:32:48.831174 2796 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.19.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-228&limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:48.834532 kubelet[2796]: E0113 21:32:48.831261 2796 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-228&limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:48.834532 kubelet[2796]: I0113 21:32:48.831282 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:32:48.834532 kubelet[2796]: W0113 21:32:48.834381 2796 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.19.228:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:48.834875 kubelet[2796]: E0113 21:32:48.834857 2796 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.228:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:48.835312 kubelet[2796]: I0113 21:32:48.835130 2796 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:32:48.842735 kubelet[2796]: I0113 21:32:48.842692 2796 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:32:48.845984 kubelet[2796]: W0113 21:32:48.845914 2796 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
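"Client rotation is on, will bootstrap in background" followed by the refused CSR POST shows the kubelet attempting TLS bootstrap against https://172.31.19.228:6443 before the static-pod apiserver exists; the reflector list/watch failures above die on the same closed port. The bootstrap credentials come from a kubeconfig along these lines (token placeholder hypothetical; the CA path matches the dynamic_cafile_content line above):

    apiVersion: v1
    kind: Config
    clusters:
    - name: default
      cluster:
        certificate-authority: /etc/kubernetes/pki/ca.crt
        server: https://172.31.19.228:6443
    contexts:
    - name: default
      context:
        cluster: default
        user: kubelet-bootstrap
    current-context: default
    users:
    - name: kubelet-bootstrap
      user:
        token: abcdef.0123456789abcdef   # placeholder bootstrap token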
Jan 13 21:32:48.849272 kubelet[2796]: I0113 21:32:48.849003 2796 server.go:1256] "Started kubelet" Jan 13 21:32:48.854800 kubelet[2796]: I0113 21:32:48.849659 2796 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:32:48.854800 kubelet[2796]: I0113 21:32:48.853006 2796 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:32:48.858386 kubelet[2796]: I0113 21:32:48.857111 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:32:48.861488 kubelet[2796]: I0113 21:32:48.860744 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:32:48.861488 kubelet[2796]: I0113 21:32:48.861078 2796 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:32:48.867987 kubelet[2796]: E0113 21:32:48.867945 2796 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.228:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-228.181a5e01f7bb3b75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-228,UID:ip-172-31-19-228,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-228,},FirstTimestamp:2025-01-13 21:32:48.848968565 +0000 UTC m=+0.724859081,LastTimestamp:2025-01-13 21:32:48.848968565 +0000 UTC m=+0.724859081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-228,}" Jan 13 21:32:48.870475 kubelet[2796]: I0113 21:32:48.870426 2796 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:32:48.870884 kubelet[2796]: I0113 21:32:48.870467 2796 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:32:48.871094 kubelet[2796]: I0113 21:32:48.871079 2796 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:32:48.875948 kubelet[2796]: E0113 21:32:48.875903 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-228?timeout=10s\": dial tcp 172.31.19.228:6443: connect: connection refused" interval="200ms" Jan 13 21:32:48.878244 kubelet[2796]: I0113 21:32:48.877786 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:32:48.884055 kubelet[2796]: W0113 21:32:48.884006 2796 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.19.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:48.885599 kubelet[2796]: E0113 21:32:48.884246 2796 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:48.885599 kubelet[2796]: I0113 21:32:48.884420 2796 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:32:48.885599 
kubelet[2796]: I0113 21:32:48.884430 2796 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:32:48.891099 kubelet[2796]: I0113 21:32:48.890928 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:32:48.893062 kubelet[2796]: I0113 21:32:48.893042 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:32:48.893866 kubelet[2796]: I0113 21:32:48.893378 2796 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:32:48.893866 kubelet[2796]: I0113 21:32:48.893535 2796 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:32:48.893866 kubelet[2796]: E0113 21:32:48.893591 2796 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:32:48.927700 kubelet[2796]: W0113 21:32:48.927650 2796 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.19.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:48.928859 kubelet[2796]: E0113 21:32:48.928845 2796 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:48.936092 kubelet[2796]: E0113 21:32:48.935517 2796 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:32:48.948163 kubelet[2796]: I0113 21:32:48.948142 2796 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:32:48.948411 kubelet[2796]: I0113 21:32:48.948386 2796 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:32:48.948567 kubelet[2796]: I0113 21:32:48.948557 2796 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:48.962362 kubelet[2796]: I0113 21:32:48.962330 2796 policy_none.go:49] "None policy: Start" Jan 13 21:32:48.963563 kubelet[2796]: I0113 21:32:48.963338 2796 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:32:48.963563 kubelet[2796]: I0113 21:32:48.963366 2796 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:32:48.973313 kubelet[2796]: I0113 21:32:48.972856 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-228" Jan 13 21:32:48.973313 kubelet[2796]: E0113 21:32:48.973247 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.228:6443/api/v1/nodes\": dial tcp 172.31.19.228:6443: connect: connection refused" node="ip-172-31-19-228" Jan 13 21:32:48.984915 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:32:48.994493 kubelet[2796]: E0113 21:32:48.994463 2796 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:32:48.998120 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:32:49.005084 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
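The kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice units created above are the systemd side of pod QoS: with the systemd cgroup driver, each pod lands in the slice matching its QoS class, which follows purely from its resource spec. Illustrative example (pod name hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo                     # hypothetical
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
          # requests without matching limits -> Burstable  -> kubepods-burstable.slice
          # no requests or limits at all     -> BestEffort -> kubepods-besteffort.slice
          # requests equal to limits         -> Guaranteed -> directly under kubepods.slice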
Jan 13 21:32:49.018919 kubelet[2796]: I0113 21:32:49.018820 2796 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:32:49.019477 kubelet[2796]: I0113 21:32:49.019436 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:32:49.024250 kubelet[2796]: E0113 21:32:49.024215 2796 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-228\" not found" Jan 13 21:32:49.078384 kubelet[2796]: E0113 21:32:49.078348 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-228?timeout=10s\": dial tcp 172.31.19.228:6443: connect: connection refused" interval="400ms" Jan 13 21:32:49.177144 kubelet[2796]: I0113 21:32:49.177104 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-228" Jan 13 21:32:49.177816 kubelet[2796]: E0113 21:32:49.177794 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.228:6443/api/v1/nodes\": dial tcp 172.31.19.228:6443: connect: connection refused" node="ip-172-31-19-228" Jan 13 21:32:49.197025 kubelet[2796]: I0113 21:32:49.196979 2796 topology_manager.go:215] "Topology Admit Handler" podUID="4ee34b9f5b3e0e97c025e3e7c229f369" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-228" Jan 13 21:32:49.199716 kubelet[2796]: I0113 21:32:49.199685 2796 topology_manager.go:215] "Topology Admit Handler" podUID="ea050a20dbf2d1fed0fb4c9bb0781398" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-228" Jan 13 21:32:49.201700 kubelet[2796]: I0113 21:32:49.201320 2796 topology_manager.go:215] "Topology Admit Handler" podUID="e3cbcd78874cd4649ed10925246b983e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-228" Jan 13 21:32:49.211766 systemd[1]: Created slice kubepods-burstable-pod4ee34b9f5b3e0e97c025e3e7c229f369.slice - libcontainer container kubepods-burstable-pod4ee34b9f5b3e0e97c025e3e7c229f369.slice. Jan 13 21:32:49.230259 systemd[1]: Created slice kubepods-burstable-podea050a20dbf2d1fed0fb4c9bb0781398.slice - libcontainer container kubepods-burstable-podea050a20dbf2d1fed0fb4c9bb0781398.slice. Jan 13 21:32:49.251795 systemd[1]: Created slice kubepods-burstable-pode3cbcd78874cd4649ed10925246b983e.slice - libcontainer container kubepods-burstable-pode3cbcd78874cd4649ed10925246b983e.slice. 
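The three Topology Admit Handler entries are the kubelet admitting the control-plane static pods it read from /etc/kubernetes/manifests (the static pod path registered earlier); all three get burstable pod slices, consistent with the QoS note above. A trimmed sketch of such a manifest, with the image tag assumed to match the other v1.29.12 pulls and the mount mirroring the host-path volume list that follows:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.29.12   # assumed tag
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate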
Jan 13 21:32:49.272799 kubelet[2796]: I0113 21:32:49.272609 2796 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:32:49.272799 kubelet[2796]: I0113 21:32:49.272658 2796 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:32:49.272799 kubelet[2796]: I0113 21:32:49.272685 2796 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ee34b9f5b3e0e97c025e3e7c229f369-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-228\" (UID: \"4ee34b9f5b3e0e97c025e3e7c229f369\") " pod="kube-system/kube-apiserver-ip-172-31-19-228" Jan 13 21:32:49.272799 kubelet[2796]: I0113 21:32:49.272719 2796 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ee34b9f5b3e0e97c025e3e7c229f369-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-228\" (UID: \"4ee34b9f5b3e0e97c025e3e7c229f369\") " pod="kube-system/kube-apiserver-ip-172-31-19-228" Jan 13 21:32:49.272799 kubelet[2796]: I0113 21:32:49.272752 2796 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:32:49.273590 kubelet[2796]: I0113 21:32:49.272798 2796 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:32:49.273590 kubelet[2796]: I0113 21:32:49.272871 2796 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3cbcd78874cd4649ed10925246b983e-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-228\" (UID: \"e3cbcd78874cd4649ed10925246b983e\") " pod="kube-system/kube-scheduler-ip-172-31-19-228" Jan 13 21:32:49.273590 kubelet[2796]: I0113 21:32:49.272924 2796 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ee34b9f5b3e0e97c025e3e7c229f369-ca-certs\") pod \"kube-apiserver-ip-172-31-19-228\" (UID: \"4ee34b9f5b3e0e97c025e3e7c229f369\") " pod="kube-system/kube-apiserver-ip-172-31-19-228" Jan 13 21:32:49.273590 kubelet[2796]: I0113 21:32:49.272953 2796 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:32:49.479560 kubelet[2796]: E0113 21:32:49.479524 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-228?timeout=10s\": dial tcp 172.31.19.228:6443: connect: connection refused" interval="800ms" Jan 13 21:32:49.529566 containerd[1897]: time="2025-01-13T21:32:49.529422798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-228,Uid:4ee34b9f5b3e0e97c025e3e7c229f369,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:49.562330 containerd[1897]: time="2025-01-13T21:32:49.561465113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-228,Uid:e3cbcd78874cd4649ed10925246b983e,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:49.563853 containerd[1897]: time="2025-01-13T21:32:49.563804632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-228,Uid:ea050a20dbf2d1fed0fb4c9bb0781398,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:49.580801 kubelet[2796]: I0113 21:32:49.580769 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-228" Jan 13 21:32:49.581227 kubelet[2796]: E0113 21:32:49.581191 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.228:6443/api/v1/nodes\": dial tcp 172.31.19.228:6443: connect: connection refused" node="ip-172-31-19-228" Jan 13 21:32:49.685324 kubelet[2796]: W0113 21:32:49.685212 2796 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.19.228:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:49.685324 kubelet[2796]: E0113 21:32:49.685323 2796 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.228:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:50.002036 kubelet[2796]: W0113 21:32:50.001976 2796 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.19.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-228&limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:50.002036 kubelet[2796]: E0113 21:32:50.002037 2796 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.228:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-228&limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:50.078848 kubelet[2796]: W0113 21:32:50.078737 2796 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.19.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:50.078848 kubelet[2796]: E0113 21:32:50.078825 2796 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.31.19.228:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:50.084909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414455555.mount: Deactivated successfully. Jan 13 21:32:50.098406 containerd[1897]: time="2025-01-13T21:32:50.098348704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.100015 containerd[1897]: time="2025-01-13T21:32:50.099803808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:32:50.101713 containerd[1897]: time="2025-01-13T21:32:50.101677048Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.103488 containerd[1897]: time="2025-01-13T21:32:50.103430236Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.104916 containerd[1897]: time="2025-01-13T21:32:50.104871567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:32:50.106741 containerd[1897]: time="2025-01-13T21:32:50.106689939Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.107813 containerd[1897]: time="2025-01-13T21:32:50.107703033Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:32:50.112488 containerd[1897]: time="2025-01-13T21:32:50.112297809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:50.113474 containerd[1897]: time="2025-01-13T21:32:50.113220478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.336072ms" Jan 13 21:32:50.115906 containerd[1897]: time="2025-01-13T21:32:50.115864873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.163316ms" Jan 13 21:32:50.119228 containerd[1897]: time="2025-01-13T21:32:50.119106969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.570741ms" Jan 13 21:32:50.281496 kubelet[2796]: E0113 21:32:50.280908 
2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-228?timeout=10s\": dial tcp 172.31.19.228:6443: connect: connection refused" interval="1.6s" Jan 13 21:32:50.383145 kubelet[2796]: I0113 21:32:50.383112 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-228" Jan 13 21:32:50.383533 kubelet[2796]: E0113 21:32:50.383507 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.228:6443/api/v1/nodes\": dial tcp 172.31.19.228:6443: connect: connection refused" node="ip-172-31-19-228" Jan 13 21:32:50.410694 kubelet[2796]: W0113 21:32:50.410648 2796 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.19.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:50.410694 kubelet[2796]: E0113 21:32:50.410702 2796 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.228:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:50.440138 containerd[1897]: time="2025-01-13T21:32:50.439918522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:50.440138 containerd[1897]: time="2025-01-13T21:32:50.440005420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:50.440662 containerd[1897]: time="2025-01-13T21:32:50.440175626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.447454 containerd[1897]: time="2025-01-13T21:32:50.446413119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.456482 containerd[1897]: time="2025-01-13T21:32:50.456184436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:50.456482 containerd[1897]: time="2025-01-13T21:32:50.456304577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:50.456814 containerd[1897]: time="2025-01-13T21:32:50.456338243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.457187 containerd[1897]: time="2025-01-13T21:32:50.457112641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.462780 containerd[1897]: time="2025-01-13T21:32:50.462230559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:50.462780 containerd[1897]: time="2025-01-13T21:32:50.462307677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:50.462780 containerd[1897]: time="2025-01-13T21:32:50.462329066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.462780 containerd[1897]: time="2025-01-13T21:32:50.462496947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:50.498689 systemd[1]: Started cri-containerd-c5f0721f48ec1981d92b9ea1c775419622337f62daec2e4d62f7a6abcf2478d1.scope - libcontainer container c5f0721f48ec1981d92b9ea1c775419622337f62daec2e4d62f7a6abcf2478d1. Jan 13 21:32:50.524389 systemd[1]: Started cri-containerd-086c5e48a44564a7091ffbba917453f5bbc9b080a262a036e2d7e844c13e8727.scope - libcontainer container 086c5e48a44564a7091ffbba917453f5bbc9b080a262a036e2d7e844c13e8727. Jan 13 21:32:50.546722 systemd[1]: Started cri-containerd-af622bcce0eed0a6d06deaaea2daa3090261e0750a7de4fda32e6d15efa46daf.scope - libcontainer container af622bcce0eed0a6d06deaaea2daa3090261e0750a7de4fda32e6d15efa46daf. Jan 13 21:32:50.646528 containerd[1897]: time="2025-01-13T21:32:50.646404411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-228,Uid:ea050a20dbf2d1fed0fb4c9bb0781398,Namespace:kube-system,Attempt:0,} returns sandbox id \"af622bcce0eed0a6d06deaaea2daa3090261e0750a7de4fda32e6d15efa46daf\"" Jan 13 21:32:50.663148 containerd[1897]: time="2025-01-13T21:32:50.663080812Z" level=info msg="CreateContainer within sandbox \"af622bcce0eed0a6d06deaaea2daa3090261e0750a7de4fda32e6d15efa46daf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:32:50.665938 containerd[1897]: time="2025-01-13T21:32:50.665338935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-228,Uid:4ee34b9f5b3e0e97c025e3e7c229f369,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5f0721f48ec1981d92b9ea1c775419622337f62daec2e4d62f7a6abcf2478d1\"" Jan 13 21:32:50.674813 containerd[1897]: time="2025-01-13T21:32:50.674764699Z" level=info msg="CreateContainer within sandbox \"c5f0721f48ec1981d92b9ea1c775419622337f62daec2e4d62f7a6abcf2478d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:32:50.686937 containerd[1897]: time="2025-01-13T21:32:50.686888387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-228,Uid:e3cbcd78874cd4649ed10925246b983e,Namespace:kube-system,Attempt:0,} returns sandbox id \"086c5e48a44564a7091ffbba917453f5bbc9b080a262a036e2d7e844c13e8727\"" Jan 13 21:32:50.690747 containerd[1897]: time="2025-01-13T21:32:50.690697827Z" level=info msg="CreateContainer within sandbox \"086c5e48a44564a7091ffbba917453f5bbc9b080a262a036e2d7e844c13e8727\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:32:50.721118 containerd[1897]: time="2025-01-13T21:32:50.720906248Z" level=info msg="CreateContainer within sandbox \"af622bcce0eed0a6d06deaaea2daa3090261e0750a7de4fda32e6d15efa46daf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5\"" Jan 13 21:32:50.722526 containerd[1897]: time="2025-01-13T21:32:50.722491400Z" level=info msg="StartContainer for \"3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5\"" Jan 13 21:32:50.726646 containerd[1897]: time="2025-01-13T21:32:50.726609121Z" level=info 
msg="CreateContainer within sandbox \"c5f0721f48ec1981d92b9ea1c775419622337f62daec2e4d62f7a6abcf2478d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"05fbb22720ced15e4d2fa47fdf7ac2ca326619d71cd11f868506f30935af8c47\"" Jan 13 21:32:50.727864 containerd[1897]: time="2025-01-13T21:32:50.727595049Z" level=info msg="StartContainer for \"05fbb22720ced15e4d2fa47fdf7ac2ca326619d71cd11f868506f30935af8c47\"" Jan 13 21:32:50.739991 containerd[1897]: time="2025-01-13T21:32:50.739921173Z" level=info msg="CreateContainer within sandbox \"086c5e48a44564a7091ffbba917453f5bbc9b080a262a036e2d7e844c13e8727\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282\"" Jan 13 21:32:50.741985 containerd[1897]: time="2025-01-13T21:32:50.741950390Z" level=info msg="StartContainer for \"e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282\"" Jan 13 21:32:50.771792 systemd[1]: Started cri-containerd-3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5.scope - libcontainer container 3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5. Jan 13 21:32:50.783979 systemd[1]: Started cri-containerd-05fbb22720ced15e4d2fa47fdf7ac2ca326619d71cd11f868506f30935af8c47.scope - libcontainer container 05fbb22720ced15e4d2fa47fdf7ac2ca326619d71cd11f868506f30935af8c47. Jan 13 21:32:50.826722 systemd[1]: Started cri-containerd-e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282.scope - libcontainer container e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282. Jan 13 21:32:50.832184 kubelet[2796]: E0113 21:32:50.830892 2796 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.228:6443: connect: connection refused Jan 13 21:32:50.873466 containerd[1897]: time="2025-01-13T21:32:50.872512056Z" level=info msg="StartContainer for \"3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5\" returns successfully" Jan 13 21:32:50.927546 containerd[1897]: time="2025-01-13T21:32:50.926890171Z" level=info msg="StartContainer for \"05fbb22720ced15e4d2fa47fdf7ac2ca326619d71cd11f868506f30935af8c47\" returns successfully" Jan 13 21:32:50.972982 containerd[1897]: time="2025-01-13T21:32:50.972930718Z" level=info msg="StartContainer for \"e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282\" returns successfully" Jan 13 21:32:50.997109 kubelet[2796]: E0113 21:32:50.996760 2796 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.228:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-228.181a5e01f7bb3b75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-228,UID:ip-172-31-19-228,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-228,},FirstTimestamp:2025-01-13 21:32:48.848968565 +0000 UTC m=+0.724859081,LastTimestamp:2025-01-13 21:32:48.848968565 +0000 UTC m=+0.724859081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-228,}" 
Jan 13 21:32:51.989553 kubelet[2796]: I0113 21:32:51.989524 2796 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-228" Jan 13 21:32:53.131306 update_engine[1872]: I20250113 21:32:53.129482 1872 update_attempter.cc:509] Updating boot flags... Jan 13 21:32:53.271488 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3082) Jan 13 21:32:53.613299 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3082) Jan 13 21:32:54.056635 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3082) Jan 13 21:32:55.070568 kubelet[2796]: E0113 21:32:55.070518 2796 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-228\" not found" node="ip-172-31-19-228" Jan 13 21:32:55.081759 kubelet[2796]: I0113 21:32:55.081573 2796 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-228" Jan 13 21:32:55.646467 kubelet[2796]: E0113 21:32:55.646416 2796 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-19-228\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:32:55.838060 kubelet[2796]: I0113 21:32:55.838016 2796 apiserver.go:52] "Watching apiserver" Jan 13 21:32:55.871374 kubelet[2796]: I0113 21:32:55.871332 2796 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:32:58.364363 systemd[1]: Reloading requested from client PID 3336 ('systemctl') (unit session-9.scope)... Jan 13 21:32:58.364525 systemd[1]: Reloading... Jan 13 21:32:58.518559 zram_generator::config[3377]: No configuration found. Jan 13 21:32:58.673732 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:58.805238 systemd[1]: Reloading finished in 439 ms. Jan 13 21:32:58.855847 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:58.877894 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:32:58.878080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:58.900301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:59.366701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:59.372953 (kubelet)[3433]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:32:59.497620 kubelet[3433]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:32:59.497620 kubelet[3433]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:32:59.497620 kubelet[3433]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
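The 21:32:55 mirror-pod rejection above ("no PriorityClass with name system-node-critical was found") is a startup race: the newly started apiserver has not yet created its built-in priority classes, so the first attempt to post a static pod's mirror is forbidden and later retried. For reference, the object the kubelet is waiting on, which the apiserver creates automatically:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-node-critical
    value: 2000001000
    description: Used for system critical pods that must not be moved from their current node.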
Jan 13 21:32:59.497620 kubelet[3433]: I0113 21:32:59.497071 3433 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:32:59.505502 kubelet[3433]: I0113 21:32:59.505411 3433 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:32:59.505715 kubelet[3433]: I0113 21:32:59.505693 3433 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:32:59.506249 kubelet[3433]: I0113 21:32:59.506195 3433 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:32:59.508816 kubelet[3433]: I0113 21:32:59.508778 3433 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:32:59.511432 kubelet[3433]: I0113 21:32:59.511401 3433 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:32:59.522135 kubelet[3433]: I0113 21:32:59.522003 3433 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:32:59.522717 kubelet[3433]: I0113 21:32:59.522685 3433 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:32:59.522946 kubelet[3433]: I0113 21:32:59.522922 3433 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:32:59.523093 kubelet[3433]: I0113 21:32:59.522956 3433 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:32:59.523093 kubelet[3433]: I0113 21:32:59.522970 3433 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:32:59.523093 kubelet[3433]: I0113 21:32:59.523013 3433 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:59.523231 kubelet[3433]: I0113 21:32:59.523148 3433 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:32:59.523231 kubelet[3433]: I0113 21:32:59.523167 3433 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:32:59.523427 kubelet[3433]: I0113 21:32:59.523406 3433 kubelet.go:312] "Adding apiserver pod source" Jan 13 
21:32:59.523514 kubelet[3433]: I0113 21:32:59.523430 3433 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:32:59.544928 sudo[3447]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:32:59.545898 sudo[3447]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:32:59.546631 kubelet[3433]: I0113 21:32:59.546602 3433 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:32:59.547141 kubelet[3433]: I0113 21:32:59.546876 3433 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:32:59.554135 kubelet[3433]: I0113 21:32:59.554093 3433 server.go:1256] "Started kubelet" Jan 13 21:32:59.561737 kubelet[3433]: I0113 21:32:59.561669 3433 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:32:59.574517 kubelet[3433]: I0113 21:32:59.574383 3433 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:32:59.575138 kubelet[3433]: I0113 21:32:59.575019 3433 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:32:59.620780 kubelet[3433]: I0113 21:32:59.619987 3433 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:32:59.620780 kubelet[3433]: I0113 21:32:59.580171 3433 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:32:59.620780 kubelet[3433]: I0113 21:32:59.580194 3433 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:32:59.620780 kubelet[3433]: I0113 21:32:59.620245 3433 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:32:59.620780 kubelet[3433]: I0113 21:32:59.617828 3433 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:32:59.620780 kubelet[3433]: I0113 21:32:59.620346 3433 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:32:59.620780 kubelet[3433]: I0113 21:32:59.619624 3433 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:32:59.626489 kubelet[3433]: I0113 21:32:59.626459 3433 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:32:59.630410 kubelet[3433]: I0113 21:32:59.630385 3433 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:32:59.631034 kubelet[3433]: I0113 21:32:59.630706 3433 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:32:59.631034 kubelet[3433]: I0113 21:32:59.630735 3433 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:32:59.631034 kubelet[3433]: E0113 21:32:59.630795 3433 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:32:59.633343 kubelet[3433]: E0113 21:32:59.632973 3433 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:32:59.634045 kubelet[3433]: I0113 21:32:59.634024 3433 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:32:59.694707 kubelet[3433]: I0113 21:32:59.694416 3433 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-228" Jan 13 21:32:59.725309 kubelet[3433]: I0113 21:32:59.725275 3433 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-228" Jan 13 21:32:59.726467 kubelet[3433]: I0113 21:32:59.725615 3433 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-228" Jan 13 21:32:59.733020 kubelet[3433]: E0113 21:32:59.732982 3433 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:32:59.741933 kubelet[3433]: I0113 21:32:59.741907 3433 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:32:59.742864 kubelet[3433]: I0113 21:32:59.742846 3433 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:32:59.743050 kubelet[3433]: I0113 21:32:59.743039 3433 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:59.743366 kubelet[3433]: I0113 21:32:59.743354 3433 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:32:59.743659 kubelet[3433]: I0113 21:32:59.743645 3433 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:32:59.743865 kubelet[3433]: I0113 21:32:59.743853 3433 policy_none.go:49] "None policy: Start" Jan 13 21:32:59.744702 kubelet[3433]: I0113 21:32:59.744684 3433 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:32:59.744784 kubelet[3433]: I0113 21:32:59.744714 3433 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:32:59.745913 kubelet[3433]: I0113 21:32:59.745132 3433 state_mem.go:75] "Updated machine memory state" Jan 13 21:32:59.751612 kubelet[3433]: I0113 21:32:59.751589 3433 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:32:59.752111 kubelet[3433]: I0113 21:32:59.752095 3433 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:32:59.933984 kubelet[3433]: I0113 21:32:59.933755 3433 topology_manager.go:215] "Topology Admit Handler" podUID="4ee34b9f5b3e0e97c025e3e7c229f369" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-228" Jan 13 21:32:59.936394 kubelet[3433]: I0113 21:32:59.934213 3433 topology_manager.go:215] "Topology Admit Handler" podUID="ea050a20dbf2d1fed0fb4c9bb0781398" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-228" Jan 13 21:32:59.936530 kubelet[3433]: I0113 21:32:59.936497 3433 topology_manager.go:215] "Topology Admit Handler" podUID="e3cbcd78874cd4649ed10925246b983e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-228" Jan 13 21:32:59.952041 kubelet[3433]: E0113 21:32:59.951974 3433 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-19-228\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-228" Jan 13 21:33:00.035558 kubelet[3433]: I0113 21:33:00.031749 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " 
pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:33:00.036265 kubelet[3433]: I0113 21:33:00.036189 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:33:00.036399 kubelet[3433]: I0113 21:33:00.036298 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ee34b9f5b3e0e97c025e3e7c229f369-ca-certs\") pod \"kube-apiserver-ip-172-31-19-228\" (UID: \"4ee34b9f5b3e0e97c025e3e7c229f369\") " pod="kube-system/kube-apiserver-ip-172-31-19-228" Jan 13 21:33:00.036485 kubelet[3433]: I0113 21:33:00.036405 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:33:00.036544 kubelet[3433]: I0113 21:33:00.036515 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:33:00.036593 kubelet[3433]: I0113 21:33:00.036567 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea050a20dbf2d1fed0fb4c9bb0781398-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-228\" (UID: \"ea050a20dbf2d1fed0fb4c9bb0781398\") " pod="kube-system/kube-controller-manager-ip-172-31-19-228" Jan 13 21:33:00.036646 kubelet[3433]: I0113 21:33:00.036605 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3cbcd78874cd4649ed10925246b983e-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-228\" (UID: \"e3cbcd78874cd4649ed10925246b983e\") " pod="kube-system/kube-scheduler-ip-172-31-19-228" Jan 13 21:33:00.036695 kubelet[3433]: I0113 21:33:00.036655 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ee34b9f5b3e0e97c025e3e7c229f369-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-228\" (UID: \"4ee34b9f5b3e0e97c025e3e7c229f369\") " pod="kube-system/kube-apiserver-ip-172-31-19-228" Jan 13 21:33:00.036695 kubelet[3433]: I0113 21:33:00.036694 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ee34b9f5b3e0e97c025e3e7c229f369-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-228\" (UID: \"4ee34b9f5b3e0e97c025e3e7c229f369\") " pod="kube-system/kube-apiserver-ip-172-31-19-228" Jan 13 21:33:00.533857 kubelet[3433]: I0113 21:33:00.533777 3433 apiserver.go:52] "Watching apiserver" Jan 13 21:33:00.621478 kubelet[3433]: I0113 21:33:00.620992 3433 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:33:00.713932 kubelet[3433]: E0113 21:33:00.712825 3433 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-228\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-228" Jan 13 21:33:00.768186 kubelet[3433]: I0113 21:33:00.768145 3433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-228" podStartSLOduration=1.768099269 podStartE2EDuration="1.768099269s" podCreationTimestamp="2025-01-13 21:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:00.765651861 +0000 UTC m=+1.383077750" watchObservedRunningTime="2025-01-13 21:33:00.768099269 +0000 UTC m=+1.385525138" Jan 13 21:33:00.802875 kubelet[3433]: I0113 21:33:00.802837 3433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-228" podStartSLOduration=1.802786562 podStartE2EDuration="1.802786562s" podCreationTimestamp="2025-01-13 21:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:00.80237242 +0000 UTC m=+1.419798308" watchObservedRunningTime="2025-01-13 21:33:00.802786562 +0000 UTC m=+1.420212451" Jan 13 21:33:00.803090 kubelet[3433]: I0113 21:33:00.802922 3433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-228" podStartSLOduration=2.802901719 podStartE2EDuration="2.802901719s" podCreationTimestamp="2025-01-13 21:32:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:00.787604198 +0000 UTC m=+1.405030088" watchObservedRunningTime="2025-01-13 21:33:00.802901719 +0000 UTC m=+1.420327607" Jan 13 21:33:00.876018 sudo[3447]: pam_unix(sudo:session): session closed for user root Jan 13 21:33:04.166522 sudo[2240]: pam_unix(sudo:session): session closed for user root Jan 13 21:33:04.190260 sshd[2237]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:04.197259 systemd[1]: sshd@8-172.31.19.228:22-147.75.109.163:39418.service: Deactivated successfully. Jan 13 21:33:04.201846 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:33:04.202704 systemd[1]: session-9.scope: Consumed 5.645s CPU time, 188.9M memory peak, 0B memory swap peak. Jan 13 21:33:04.207040 systemd-logind[1869]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:33:04.208490 systemd-logind[1869]: Removed session 9. Jan 13 21:33:11.283936 kubelet[3433]: I0113 21:33:11.283893 3433 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:33:11.288601 containerd[1897]: time="2025-01-13T21:33:11.288411805Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:33:11.289200 kubelet[3433]: I0113 21:33:11.288967 3433 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:33:11.888211 kubelet[3433]: I0113 21:33:11.888166 3433 topology_manager.go:215] "Topology Admit Handler" podUID="717b9d57-6e93-4789-bb52-34f2672d23f6" podNamespace="kube-system" podName="kube-proxy-wzzr6" Jan 13 21:33:11.906456 kubelet[3433]: I0113 21:33:11.905688 3433 topology_manager.go:215] "Topology Admit Handler" podUID="b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" podNamespace="kube-system" podName="cilium-m27zv" Jan 13 21:33:11.919280 systemd[1]: Created slice kubepods-besteffort-pod717b9d57_6e93_4789_bb52_34f2672d23f6.slice - libcontainer container kubepods-besteffort-pod717b9d57_6e93_4789_bb52_34f2672d23f6.slice. Jan 13 21:33:11.939074 systemd[1]: Created slice kubepods-burstable-podb6b5086c_3ec2_44fe_a3e7_b06e0a4ade5e.slice - libcontainer container kubepods-burstable-podb6b5086c_3ec2_44fe_a3e7_b06e0a4ade5e.slice. Jan 13 21:33:11.954292 kubelet[3433]: I0113 21:33:11.954073 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-hubble-tls\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.954292 kubelet[3433]: I0113 21:33:11.954135 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-hostproc\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.954292 kubelet[3433]: I0113 21:33:11.954175 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/717b9d57-6e93-4789-bb52-34f2672d23f6-lib-modules\") pod \"kube-proxy-wzzr6\" (UID: \"717b9d57-6e93-4789-bb52-34f2672d23f6\") " pod="kube-system/kube-proxy-wzzr6" Jan 13 21:33:11.954292 kubelet[3433]: I0113 21:33:11.954240 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-host-proc-sys-kernel\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.955253 kubelet[3433]: I0113 21:33:11.954393 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-run\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.955253 kubelet[3433]: I0113 21:33:11.954533 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-lib-modules\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.955253 kubelet[3433]: I0113 21:33:11.955005 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lzdt\" (UniqueName: \"kubernetes.io/projected/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-kube-api-access-9lzdt\") pod \"cilium-m27zv\" (UID: 
\"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.962338 kubelet[3433]: I0113 21:33:11.955412 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/717b9d57-6e93-4789-bb52-34f2672d23f6-kube-proxy\") pod \"kube-proxy-wzzr6\" (UID: \"717b9d57-6e93-4789-bb52-34f2672d23f6\") " pod="kube-system/kube-proxy-wzzr6" Jan 13 21:33:11.962338 kubelet[3433]: I0113 21:33:11.955562 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-clustermesh-secrets\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.962338 kubelet[3433]: I0113 21:33:11.955692 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/717b9d57-6e93-4789-bb52-34f2672d23f6-xtables-lock\") pod \"kube-proxy-wzzr6\" (UID: \"717b9d57-6e93-4789-bb52-34f2672d23f6\") " pod="kube-system/kube-proxy-wzzr6" Jan 13 21:33:11.962751 kubelet[3433]: I0113 21:33:11.955736 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l6x2\" (UniqueName: \"kubernetes.io/projected/717b9d57-6e93-4789-bb52-34f2672d23f6-kube-api-access-5l6x2\") pod \"kube-proxy-wzzr6\" (UID: \"717b9d57-6e93-4789-bb52-34f2672d23f6\") " pod="kube-system/kube-proxy-wzzr6" Jan 13 21:33:11.964907 kubelet[3433]: I0113 21:33:11.963254 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cni-path\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.969124 kubelet[3433]: I0113 21:33:11.965241 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-etc-cni-netd\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.969124 kubelet[3433]: I0113 21:33:11.965314 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-host-proc-sys-net\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.969124 kubelet[3433]: I0113 21:33:11.965376 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-bpf-maps\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.969124 kubelet[3433]: I0113 21:33:11.965408 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-cgroup\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.969124 kubelet[3433]: I0113 21:33:11.965464 3433 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-xtables-lock\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:11.969124 kubelet[3433]: I0113 21:33:11.965504 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-config-path\") pod \"cilium-m27zv\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " pod="kube-system/cilium-m27zv" Jan 13 21:33:12.240746 containerd[1897]: time="2025-01-13T21:33:12.240616733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzzr6,Uid:717b9d57-6e93-4789-bb52-34f2672d23f6,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:12.254661 containerd[1897]: time="2025-01-13T21:33:12.254426733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m27zv,Uid:b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:12.320131 kubelet[3433]: I0113 21:33:12.320089 3433 topology_manager.go:215] "Topology Admit Handler" podUID="ad619d14-7172-42fc-baf3-aaaf4a095687" podNamespace="kube-system" podName="cilium-operator-5cc964979-bpcd2" Jan 13 21:33:12.348752 containerd[1897]: time="2025-01-13T21:33:12.345709467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:12.348752 containerd[1897]: time="2025-01-13T21:33:12.345779441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:12.348752 containerd[1897]: time="2025-01-13T21:33:12.345808220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.348752 containerd[1897]: time="2025-01-13T21:33:12.345947116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.352649 systemd[1]: Created slice kubepods-besteffort-podad619d14_7172_42fc_baf3_aaaf4a095687.slice - libcontainer container kubepods-besteffort-podad619d14_7172_42fc_baf3_aaaf4a095687.slice. Jan 13 21:33:12.396761 kubelet[3433]: I0113 21:33:12.396582 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7qb\" (UniqueName: \"kubernetes.io/projected/ad619d14-7172-42fc-baf3-aaaf4a095687-kube-api-access-cc7qb\") pod \"cilium-operator-5cc964979-bpcd2\" (UID: \"ad619d14-7172-42fc-baf3-aaaf4a095687\") " pod="kube-system/cilium-operator-5cc964979-bpcd2" Jan 13 21:33:12.396761 kubelet[3433]: I0113 21:33:12.396700 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad619d14-7172-42fc-baf3-aaaf4a095687-cilium-config-path\") pod \"cilium-operator-5cc964979-bpcd2\" (UID: \"ad619d14-7172-42fc-baf3-aaaf4a095687\") " pod="kube-system/cilium-operator-5cc964979-bpcd2" Jan 13 21:33:12.419383 containerd[1897]: time="2025-01-13T21:33:12.418992276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:12.419383 containerd[1897]: time="2025-01-13T21:33:12.419091053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:12.419383 containerd[1897]: time="2025-01-13T21:33:12.419113939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.419383 containerd[1897]: time="2025-01-13T21:33:12.419241135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.446664 systemd[1]: Started cri-containerd-3bd67fc7eddb3d7c33a4fe59d15b25a32cf3770c0e70654e4b6f8021339e05bb.scope - libcontainer container 3bd67fc7eddb3d7c33a4fe59d15b25a32cf3770c0e70654e4b6f8021339e05bb. Jan 13 21:33:12.467715 systemd[1]: Started cri-containerd-0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5.scope - libcontainer container 0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5. Jan 13 21:33:12.542201 containerd[1897]: time="2025-01-13T21:33:12.542136173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzzr6,Uid:717b9d57-6e93-4789-bb52-34f2672d23f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bd67fc7eddb3d7c33a4fe59d15b25a32cf3770c0e70654e4b6f8021339e05bb\"" Jan 13 21:33:12.591483 containerd[1897]: time="2025-01-13T21:33:12.591420563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m27zv,Uid:b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\"" Jan 13 21:33:12.602410 containerd[1897]: time="2025-01-13T21:33:12.598617827Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:33:12.618489 containerd[1897]: time="2025-01-13T21:33:12.618407535Z" level=info msg="CreateContainer within sandbox \"3bd67fc7eddb3d7c33a4fe59d15b25a32cf3770c0e70654e4b6f8021339e05bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:33:12.649879 containerd[1897]: time="2025-01-13T21:33:12.649807336Z" level=info msg="CreateContainer within sandbox \"3bd67fc7eddb3d7c33a4fe59d15b25a32cf3770c0e70654e4b6f8021339e05bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43738f4435b2992b9c20c246a4a581cc917e7e3be1ff3fab624ac9be43435396\"" Jan 13 21:33:12.653903 containerd[1897]: time="2025-01-13T21:33:12.650992562Z" level=info msg="StartContainer for \"43738f4435b2992b9c20c246a4a581cc917e7e3be1ff3fab624ac9be43435396\"" Jan 13 21:33:12.675047 containerd[1897]: time="2025-01-13T21:33:12.675002428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-bpcd2,Uid:ad619d14-7172-42fc-baf3-aaaf4a095687,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:12.696689 systemd[1]: Started cri-containerd-43738f4435b2992b9c20c246a4a581cc917e7e3be1ff3fab624ac9be43435396.scope - libcontainer container 43738f4435b2992b9c20c246a4a581cc917e7e3be1ff3fab624ac9be43435396. Jan 13 21:33:12.733131 containerd[1897]: time="2025-01-13T21:33:12.733013106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:12.734089 containerd[1897]: time="2025-01-13T21:33:12.733079191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:12.734089 containerd[1897]: time="2025-01-13T21:33:12.733332777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.734639 containerd[1897]: time="2025-01-13T21:33:12.734568275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.782004 containerd[1897]: time="2025-01-13T21:33:12.781956531Z" level=info msg="StartContainer for \"43738f4435b2992b9c20c246a4a581cc917e7e3be1ff3fab624ac9be43435396\" returns successfully" Jan 13 21:33:12.795892 systemd[1]: Started cri-containerd-d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c.scope - libcontainer container d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c. Jan 13 21:33:12.875358 containerd[1897]: time="2025-01-13T21:33:12.875177681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-bpcd2,Uid:ad619d14-7172-42fc-baf3-aaaf4a095687,Namespace:kube-system,Attempt:0,} returns sandbox id \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\"" Jan 13 21:33:19.935955 kubelet[3433]: I0113 21:33:19.932654 3433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wzzr6" podStartSLOduration=8.932382798999999 podStartE2EDuration="8.932382799s" podCreationTimestamp="2025-01-13 21:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:13.766856518 +0000 UTC m=+14.384282410" watchObservedRunningTime="2025-01-13 21:33:19.932382799 +0000 UTC m=+20.549808699" Jan 13 21:33:23.340045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737852689.mount: Deactivated successfully. 
Jan 13 21:33:28.082969 containerd[1897]: time="2025-01-13T21:33:28.082661005Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:28.084765 containerd[1897]: time="2025-01-13T21:33:28.084588976Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735335" Jan 13 21:33:28.092976 containerd[1897]: time="2025-01-13T21:33:28.092469487Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:28.102037 containerd[1897]: time="2025-01-13T21:33:28.101886749Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.499693108s" Jan 13 21:33:28.102037 containerd[1897]: time="2025-01-13T21:33:28.101938494Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 21:33:28.106818 containerd[1897]: time="2025-01-13T21:33:28.106471438Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:33:28.110530 containerd[1897]: time="2025-01-13T21:33:28.110481933Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:33:28.306888 containerd[1897]: time="2025-01-13T21:33:28.306837362Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\"" Jan 13 21:33:28.308572 containerd[1897]: time="2025-01-13T21:33:28.307529496Z" level=info msg="StartContainer for \"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\"" Jan 13 21:33:28.438685 systemd[1]: Started cri-containerd-fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc.scope - libcontainer container fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc. Jan 13 21:33:28.484675 containerd[1897]: time="2025-01-13T21:33:28.484526662Z" level=info msg="StartContainer for \"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\" returns successfully" Jan 13 21:33:28.498497 systemd[1]: cri-containerd-fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc.scope: Deactivated successfully. Jan 13 21:33:29.295329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:30.581546 containerd[1897]: time="2025-01-13T21:33:30.551901874Z" level=info msg="shim disconnected" id=fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc namespace=k8s.io Jan 13 21:33:30.583301 containerd[1897]: time="2025-01-13T21:33:30.581551706Z" level=warning msg="cleaning up after shim disconnected" id=fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc namespace=k8s.io Jan 13 21:33:30.583301 containerd[1897]: time="2025-01-13T21:33:30.581571256Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:30.733657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210271525.mount: Deactivated successfully. Jan 13 21:33:30.800960 containerd[1897]: time="2025-01-13T21:33:30.800715797Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:33:30.829412 containerd[1897]: time="2025-01-13T21:33:30.829296849Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\"" Jan 13 21:33:30.830886 containerd[1897]: time="2025-01-13T21:33:30.830850829Z" level=info msg="StartContainer for \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\"" Jan 13 21:33:30.939625 systemd[1]: Started cri-containerd-eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366.scope - libcontainer container eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366. Jan 13 21:33:31.212485 containerd[1897]: time="2025-01-13T21:33:31.211358880Z" level=info msg="StartContainer for \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\" returns successfully" Jan 13 21:33:31.237776 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:33:31.238153 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:33:31.238252 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:33:31.247984 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:33:31.255053 systemd[1]: cri-containerd-eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366.scope: Deactivated successfully. Jan 13 21:33:31.348647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:33:31.692669 containerd[1897]: time="2025-01-13T21:33:31.692599858Z" level=info msg="shim disconnected" id=eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366 namespace=k8s.io Jan 13 21:33:31.692669 containerd[1897]: time="2025-01-13T21:33:31.692662602Z" level=warning msg="cleaning up after shim disconnected" id=eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366 namespace=k8s.io Jan 13 21:33:31.692669 containerd[1897]: time="2025-01-13T21:33:31.692676284Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:31.720518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:31.814168 containerd[1897]: time="2025-01-13T21:33:31.814114574Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:33:31.896325 containerd[1897]: time="2025-01-13T21:33:31.896280697Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\"" Jan 13 21:33:31.899026 containerd[1897]: time="2025-01-13T21:33:31.898476450Z" level=info msg="StartContainer for \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\"" Jan 13 21:33:31.975961 systemd[1]: Started cri-containerd-bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb.scope - libcontainer container bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb. Jan 13 21:33:32.048375 containerd[1897]: time="2025-01-13T21:33:32.048291748Z" level=info msg="StartContainer for \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\" returns successfully" Jan 13 21:33:32.049554 systemd[1]: cri-containerd-bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb.scope: Deactivated successfully. Jan 13 21:33:32.717162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb-rootfs.mount: Deactivated successfully. Jan 13 21:33:32.941329 containerd[1897]: time="2025-01-13T21:33:32.941268729Z" level=info msg="shim disconnected" id=bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb namespace=k8s.io Jan 13 21:33:32.941329 containerd[1897]: time="2025-01-13T21:33:32.941318802Z" level=warning msg="cleaning up after shim disconnected" id=bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb namespace=k8s.io Jan 13 21:33:32.941329 containerd[1897]: time="2025-01-13T21:33:32.941330250Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:32.976946 containerd[1897]: time="2025-01-13T21:33:32.976675921Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907209" Jan 13 21:33:32.978356 containerd[1897]: time="2025-01-13T21:33:32.976829997Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:32.985802 containerd[1897]: time="2025-01-13T21:33:32.985734903Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:32.995957 containerd[1897]: time="2025-01-13T21:33:32.995903825Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.8893841s" Jan 13 21:33:32.995957 containerd[1897]: time="2025-01-13T21:33:32.995947044Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 21:33:32.999423 containerd[1897]: time="2025-01-13T21:33:32.999234438Z" level=info msg="CreateContainer within sandbox \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:33:33.044224 containerd[1897]: time="2025-01-13T21:33:33.044178685Z" level=info msg="CreateContainer within sandbox \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\"" Jan 13 21:33:33.046096 containerd[1897]: time="2025-01-13T21:33:33.044950125Z" level=info msg="StartContainer for \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\"" Jan 13 21:33:33.111677 systemd[1]: Started cri-containerd-767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a.scope - libcontainer container 767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a. Jan 13 21:33:33.204234 containerd[1897]: time="2025-01-13T21:33:33.204163885Z" level=info msg="StartContainer for \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\" returns successfully" Jan 13 21:33:33.721806 systemd[1]: run-containerd-runc-k8s.io-767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a-runc.c2bYgW.mount: Deactivated successfully. Jan 13 21:33:33.828180 containerd[1897]: time="2025-01-13T21:33:33.827152290Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:33:33.869344 containerd[1897]: time="2025-01-13T21:33:33.869286193Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\"" Jan 13 21:33:33.875389 containerd[1897]: time="2025-01-13T21:33:33.874204824Z" level=info msg="StartContainer for \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\"" Jan 13 21:33:34.066677 systemd[1]: Started cri-containerd-05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f.scope - libcontainer container 05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f. Jan 13 21:33:34.141681 systemd[1]: cri-containerd-05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f.scope: Deactivated successfully. Jan 13 21:33:34.148219 containerd[1897]: time="2025-01-13T21:33:34.148160302Z" level=info msg="StartContainer for \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\" returns successfully" Jan 13 21:33:34.721493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:34.847747 containerd[1897]: time="2025-01-13T21:33:34.847659735Z" level=info msg="shim disconnected" id=05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f namespace=k8s.io Jan 13 21:33:34.847747 containerd[1897]: time="2025-01-13T21:33:34.847741531Z" level=warning msg="cleaning up after shim disconnected" id=05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f namespace=k8s.io Jan 13 21:33:34.847747 containerd[1897]: time="2025-01-13T21:33:34.847755759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:34.878993 kubelet[3433]: I0113 21:33:34.878949 3433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-bpcd2" podStartSLOduration=2.763383256 podStartE2EDuration="22.87883255s" podCreationTimestamp="2025-01-13 21:33:12 +0000 UTC" firstStartedPulling="2025-01-13 21:33:12.880978712 +0000 UTC m=+13.498404593" lastFinishedPulling="2025-01-13 21:33:32.996428011 +0000 UTC m=+33.613853887" observedRunningTime="2025-01-13 21:33:34.218422365 +0000 UTC m=+34.835848256" watchObservedRunningTime="2025-01-13 21:33:34.87883255 +0000 UTC m=+35.496258440" Jan 13 21:33:35.849626 containerd[1897]: time="2025-01-13T21:33:35.849582803Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:33:35.909428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2852646428.mount: Deactivated successfully. Jan 13 21:33:35.911467 containerd[1897]: time="2025-01-13T21:33:35.910697955Z" level=info msg="CreateContainer within sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\"" Jan 13 21:33:35.911767 containerd[1897]: time="2025-01-13T21:33:35.911737743Z" level=info msg="StartContainer for \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\"" Jan 13 21:33:35.983636 systemd[1]: Started cri-containerd-d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5.scope - libcontainer container d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5. Jan 13 21:33:36.067630 containerd[1897]: time="2025-01-13T21:33:36.067357884Z" level=info msg="StartContainer for \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\" returns successfully" Jan 13 21:33:36.300918 kubelet[3433]: I0113 21:33:36.300395 3433 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:33:36.334363 kubelet[3433]: I0113 21:33:36.333983 3433 topology_manager.go:215] "Topology Admit Handler" podUID="d1189b5a-5716-4c8a-859a-8cd762690c8e" podNamespace="kube-system" podName="coredns-76f75df574-kzwxl" Jan 13 21:33:36.334363 kubelet[3433]: I0113 21:33:36.334363 3433 topology_manager.go:215] "Topology Admit Handler" podUID="ecb13023-cb7d-43e5-bee7-535542964225" podNamespace="kube-system" podName="coredns-76f75df574-9kw8p" Jan 13 21:33:36.352116 systemd[1]: Created slice kubepods-burstable-podd1189b5a_5716_4c8a_859a_8cd762690c8e.slice - libcontainer container kubepods-burstable-podd1189b5a_5716_4c8a_859a_8cd762690c8e.slice. Jan 13 21:33:36.362376 systemd[1]: Created slice kubepods-burstable-podecb13023_cb7d_43e5_bee7_535542964225.slice - libcontainer container kubepods-burstable-podecb13023_cb7d_43e5_bee7_535542964225.slice. 
Jan 13 21:33:36.473923 kubelet[3433]: I0113 21:33:36.473884 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb13023-cb7d-43e5-bee7-535542964225-config-volume\") pod \"coredns-76f75df574-9kw8p\" (UID: \"ecb13023-cb7d-43e5-bee7-535542964225\") " pod="kube-system/coredns-76f75df574-9kw8p" Jan 13 21:33:36.473923 kubelet[3433]: I0113 21:33:36.473943 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwlpk\" (UniqueName: \"kubernetes.io/projected/d1189b5a-5716-4c8a-859a-8cd762690c8e-kube-api-access-wwlpk\") pod \"coredns-76f75df574-kzwxl\" (UID: \"d1189b5a-5716-4c8a-859a-8cd762690c8e\") " pod="kube-system/coredns-76f75df574-kzwxl" Jan 13 21:33:36.473923 kubelet[3433]: I0113 21:33:36.474111 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4ft6\" (UniqueName: \"kubernetes.io/projected/ecb13023-cb7d-43e5-bee7-535542964225-kube-api-access-f4ft6\") pod \"coredns-76f75df574-9kw8p\" (UID: \"ecb13023-cb7d-43e5-bee7-535542964225\") " pod="kube-system/coredns-76f75df574-9kw8p" Jan 13 21:33:36.473923 kubelet[3433]: I0113 21:33:36.474149 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1189b5a-5716-4c8a-859a-8cd762690c8e-config-volume\") pod \"coredns-76f75df574-kzwxl\" (UID: \"d1189b5a-5716-4c8a-859a-8cd762690c8e\") " pod="kube-system/coredns-76f75df574-kzwxl" Jan 13 21:33:36.658793 containerd[1897]: time="2025-01-13T21:33:36.658105656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kzwxl,Uid:d1189b5a-5716-4c8a-859a-8cd762690c8e,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:36.670744 containerd[1897]: time="2025-01-13T21:33:36.670703222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9kw8p,Uid:ecb13023-cb7d-43e5-bee7-535542964225,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:38.882769 systemd-networkd[1727]: cilium_host: Link UP Jan 13 21:33:38.885133 systemd-networkd[1727]: cilium_net: Link UP Jan 13 21:33:38.887130 (udev-worker)[4248]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:38.887465 systemd-networkd[1727]: cilium_net: Gained carrier Jan 13 21:33:38.888123 systemd-networkd[1727]: cilium_host: Gained carrier Jan 13 21:33:38.889695 (udev-worker)[4211]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:39.090748 systemd-networkd[1727]: cilium_vxlan: Link UP Jan 13 21:33:39.090758 systemd-networkd[1727]: cilium_vxlan: Gained carrier Jan 13 21:33:39.301676 systemd-networkd[1727]: cilium_host: Gained IPv6LL Jan 13 21:33:39.772592 systemd-networkd[1727]: cilium_net: Gained IPv6LL Jan 13 21:33:39.966542 kernel: NET: Registered PF_ALG protocol family Jan 13 21:33:40.671129 systemd-networkd[1727]: cilium_vxlan: Gained IPv6LL Jan 13 21:33:40.976287 (udev-worker)[4268]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 21:33:40.981730 systemd-networkd[1727]: lxc_health: Link UP Jan 13 21:33:40.992335 systemd-networkd[1727]: lxc_health: Gained carrier Jan 13 21:33:41.318102 systemd-networkd[1727]: lxcccd479e180ba: Link UP Jan 13 21:33:41.324494 kernel: eth0: renamed from tmpd1336 Jan 13 21:33:41.332396 systemd-networkd[1727]: lxcccd479e180ba: Gained carrier Jan 13 21:33:41.378740 systemd-networkd[1727]: lxcddff29a70a1d: Link UP Jan 13 21:33:41.388203 kernel: eth0: renamed from tmpb1848 Jan 13 21:33:41.392599 systemd-networkd[1727]: lxcddff29a70a1d: Gained carrier Jan 13 21:33:42.331950 kubelet[3433]: I0113 21:33:42.331905 3433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-m27zv" podStartSLOduration=15.825682515 podStartE2EDuration="31.331847241s" podCreationTimestamp="2025-01-13 21:33:11 +0000 UTC" firstStartedPulling="2025-01-13 21:33:12.596243636 +0000 UTC m=+13.213669517" lastFinishedPulling="2025-01-13 21:33:28.102408362 +0000 UTC m=+28.719834243" observedRunningTime="2025-01-13 21:33:36.945527804 +0000 UTC m=+37.562953694" watchObservedRunningTime="2025-01-13 21:33:42.331847241 +0000 UTC m=+42.949273131" Jan 13 21:33:42.780771 systemd-networkd[1727]: lxcccd479e180ba: Gained IPv6LL Jan 13 21:33:42.973579 systemd-networkd[1727]: lxc_health: Gained IPv6LL Jan 13 21:33:43.164679 systemd-networkd[1727]: lxcddff29a70a1d: Gained IPv6LL Jan 13 21:33:46.114514 ntpd[1861]: Listen normally on 7 cilium_host 192.168.0.203:123 Jan 13 21:33:46.115318 ntpd[1861]: 13 Jan 21:33:46 ntpd[1861]: Listen normally on 7 cilium_host 192.168.0.203:123 Jan 13 21:33:46.115318 ntpd[1861]: 13 Jan 21:33:46 ntpd[1861]: Listen normally on 8 cilium_net [fe80::b4fc:63ff:fe50:8b5f%4]:123 Jan 13 21:33:46.115318 ntpd[1861]: 13 Jan 21:33:46 ntpd[1861]: Listen normally on 9 cilium_host [fe80::8c68:2bff:fe85:31c6%5]:123 Jan 13 21:33:46.115318 ntpd[1861]: 13 Jan 21:33:46 ntpd[1861]: Listen normally on 10 cilium_vxlan [fe80::b844:b4ff:fe1f:8216%6]:123 Jan 13 21:33:46.115318 ntpd[1861]: 13 Jan 21:33:46 ntpd[1861]: Listen normally on 11 lxc_health [fe80::a4d8:8dff:fe11:9095%8]:123 Jan 13 21:33:46.115318 ntpd[1861]: 13 Jan 21:33:46 ntpd[1861]: Listen normally on 12 lxcccd479e180ba [fe80::b0f7:68ff:fe3c:3b9f%10]:123 Jan 13 21:33:46.115318 ntpd[1861]: 13 Jan 21:33:46 ntpd[1861]: Listen normally on 13 lxcddff29a70a1d [fe80::7032:33ff:fe81:9eae%12]:123 Jan 13 21:33:46.114614 ntpd[1861]: Listen normally on 8 cilium_net [fe80::b4fc:63ff:fe50:8b5f%4]:123 Jan 13 21:33:46.114676 ntpd[1861]: Listen normally on 9 cilium_host [fe80::8c68:2bff:fe85:31c6%5]:123 Jan 13 21:33:46.114718 ntpd[1861]: Listen normally on 10 cilium_vxlan [fe80::b844:b4ff:fe1f:8216%6]:123 Jan 13 21:33:46.114758 ntpd[1861]: Listen normally on 11 lxc_health [fe80::a4d8:8dff:fe11:9095%8]:123 Jan 13 21:33:46.114795 ntpd[1861]: Listen normally on 12 lxcccd479e180ba [fe80::b0f7:68ff:fe3c:3b9f%10]:123 Jan 13 21:33:46.114835 ntpd[1861]: Listen normally on 13 lxcddff29a70a1d [fe80::7032:33ff:fe81:9eae%12]:123 Jan 13 21:33:46.497757 systemd[1]: Started sshd@9-172.31.19.228:22-147.75.109.163:52848.service - OpenSSH per-connection server daemon (147.75.109.163:52848). Jan 13 21:33:46.712551 sshd[4616]: Accepted publickey for core from 147.75.109.163 port 52848 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:46.718964 sshd[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:46.741762 systemd-logind[1869]: New session 10 of user core. 
Jan 13 21:33:46.751884 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:33:47.934146 sshd[4616]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:47.941812 systemd[1]: sshd@9-172.31.19.228:22-147.75.109.163:52848.service: Deactivated successfully. Jan 13 21:33:47.946400 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:33:47.951605 systemd-logind[1869]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:33:47.954893 systemd-logind[1869]: Removed session 10. Jan 13 21:33:48.050585 containerd[1897]: time="2025-01-13T21:33:48.040417973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:48.050585 containerd[1897]: time="2025-01-13T21:33:48.045627809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:48.050585 containerd[1897]: time="2025-01-13T21:33:48.045651495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:48.050585 containerd[1897]: time="2025-01-13T21:33:48.045773031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:48.100319 systemd[1]: run-containerd-runc-k8s.io-b1848655e3840bcf9609a0ae8302aa1cc4495dfb38f9f8db8302c0cf4283cdfb-runc.8EJl5c.mount: Deactivated successfully. Jan 13 21:33:48.120669 systemd[1]: Started cri-containerd-b1848655e3840bcf9609a0ae8302aa1cc4495dfb38f9f8db8302c0cf4283cdfb.scope - libcontainer container b1848655e3840bcf9609a0ae8302aa1cc4495dfb38f9f8db8302c0cf4283cdfb. Jan 13 21:33:48.240813 containerd[1897]: time="2025-01-13T21:33:48.238420152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:48.240813 containerd[1897]: time="2025-01-13T21:33:48.239412680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:48.240813 containerd[1897]: time="2025-01-13T21:33:48.239455772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:48.240813 containerd[1897]: time="2025-01-13T21:33:48.239595214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:48.290693 systemd[1]: Started cri-containerd-d1336838d2867c03130cb19d60d5e2eedc04bed7fb6b08560338c1d1224434b2.scope - libcontainer container d1336838d2867c03130cb19d60d5e2eedc04bed7fb6b08560338c1d1224434b2. 
Jan 13 21:33:48.308139 containerd[1897]: time="2025-01-13T21:33:48.307985899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kzwxl,Uid:d1189b5a-5716-4c8a-859a-8cd762690c8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1848655e3840bcf9609a0ae8302aa1cc4495dfb38f9f8db8302c0cf4283cdfb\"" Jan 13 21:33:48.319308 containerd[1897]: time="2025-01-13T21:33:48.319032825Z" level=info msg="CreateContainer within sandbox \"b1848655e3840bcf9609a0ae8302aa1cc4495dfb38f9f8db8302c0cf4283cdfb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:33:48.390465 containerd[1897]: time="2025-01-13T21:33:48.390400810Z" level=info msg="CreateContainer within sandbox \"b1848655e3840bcf9609a0ae8302aa1cc4495dfb38f9f8db8302c0cf4283cdfb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9233ed5822c4c2712ab658a02b1c9363b6550aa5d3adbbb29ef3fa22d5a3563\"" Jan 13 21:33:48.397207 containerd[1897]: time="2025-01-13T21:33:48.397158330Z" level=info msg="StartContainer for \"d9233ed5822c4c2712ab658a02b1c9363b6550aa5d3adbbb29ef3fa22d5a3563\"" Jan 13 21:33:48.456051 containerd[1897]: time="2025-01-13T21:33:48.455973810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9kw8p,Uid:ecb13023-cb7d-43e5-bee7-535542964225,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1336838d2867c03130cb19d60d5e2eedc04bed7fb6b08560338c1d1224434b2\"" Jan 13 21:33:48.461397 containerd[1897]: time="2025-01-13T21:33:48.461317164Z" level=info msg="CreateContainer within sandbox \"d1336838d2867c03130cb19d60d5e2eedc04bed7fb6b08560338c1d1224434b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:33:48.494275 containerd[1897]: time="2025-01-13T21:33:48.493822187Z" level=info msg="CreateContainer within sandbox \"d1336838d2867c03130cb19d60d5e2eedc04bed7fb6b08560338c1d1224434b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0c150888c326af46e5b2aa5ed20cc3f1a0d4f16980820aaa8decef90daac4d3\"" Jan 13 21:33:48.495669 containerd[1897]: time="2025-01-13T21:33:48.495632343Z" level=info msg="StartContainer for \"f0c150888c326af46e5b2aa5ed20cc3f1a0d4f16980820aaa8decef90daac4d3\"" Jan 13 21:33:48.497716 systemd[1]: Started cri-containerd-d9233ed5822c4c2712ab658a02b1c9363b6550aa5d3adbbb29ef3fa22d5a3563.scope - libcontainer container d9233ed5822c4c2712ab658a02b1c9363b6550aa5d3adbbb29ef3fa22d5a3563. Jan 13 21:33:48.547406 systemd[1]: Started cri-containerd-f0c150888c326af46e5b2aa5ed20cc3f1a0d4f16980820aaa8decef90daac4d3.scope - libcontainer container f0c150888c326af46e5b2aa5ed20cc3f1a0d4f16980820aaa8decef90daac4d3. 
Jan 13 21:33:48.569625 containerd[1897]: time="2025-01-13T21:33:48.569552536Z" level=info msg="StartContainer for \"d9233ed5822c4c2712ab658a02b1c9363b6550aa5d3adbbb29ef3fa22d5a3563\" returns successfully" Jan 13 21:33:48.595912 containerd[1897]: time="2025-01-13T21:33:48.595868597Z" level=info msg="StartContainer for \"f0c150888c326af46e5b2aa5ed20cc3f1a0d4f16980820aaa8decef90daac4d3\" returns successfully" Jan 13 21:33:49.041591 kubelet[3433]: I0113 21:33:49.040218 3433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9kw8p" podStartSLOduration=37.039844867 podStartE2EDuration="37.039844867s" podCreationTimestamp="2025-01-13 21:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:49.00321319 +0000 UTC m=+49.620639081" watchObservedRunningTime="2025-01-13 21:33:49.039844867 +0000 UTC m=+49.657270757" Jan 13 21:33:49.044406 kubelet[3433]: I0113 21:33:49.041514 3433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kzwxl" podStartSLOduration=37.040997999 podStartE2EDuration="37.040997999s" podCreationTimestamp="2025-01-13 21:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:49.038579573 +0000 UTC m=+49.656005464" watchObservedRunningTime="2025-01-13 21:33:49.040997999 +0000 UTC m=+49.658423889" Jan 13 21:33:52.972937 systemd[1]: Started sshd@10-172.31.19.228:22-147.75.109.163:54340.service - OpenSSH per-connection server daemon (147.75.109.163:54340). Jan 13 21:33:53.180603 sshd[4806]: Accepted publickey for core from 147.75.109.163 port 54340 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:53.182705 sshd[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:53.187504 systemd-logind[1869]: New session 11 of user core. Jan 13 21:33:53.201669 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:33:53.530567 sshd[4806]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:53.534848 systemd[1]: sshd@10-172.31.19.228:22-147.75.109.163:54340.service: Deactivated successfully. Jan 13 21:33:53.537123 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:33:53.538297 systemd-logind[1869]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:33:53.539631 systemd-logind[1869]: Removed session 11. Jan 13 21:33:58.575323 systemd[1]: Started sshd@11-172.31.19.228:22-147.75.109.163:57410.service - OpenSSH per-connection server daemon (147.75.109.163:57410). Jan 13 21:33:58.733465 sshd[4820]: Accepted publickey for core from 147.75.109.163 port 57410 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:58.737000 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:58.757749 systemd-logind[1869]: New session 12 of user core. Jan 13 21:33:58.765816 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:33:59.030280 sshd[4820]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:59.035061 systemd[1]: sshd@11-172.31.19.228:22-147.75.109.163:57410.service: Deactivated successfully. Jan 13 21:33:59.038349 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:33:59.039304 systemd-logind[1869]: Session 12 logged out. Waiting for processes to exit. 
Jan 13 21:33:59.040568 systemd-logind[1869]: Removed session 12. Jan 13 21:34:04.076956 systemd[1]: Started sshd@12-172.31.19.228:22-147.75.109.163:57420.service - OpenSSH per-connection server daemon (147.75.109.163:57420). Jan 13 21:34:04.259550 sshd[4836]: Accepted publickey for core from 147.75.109.163 port 57420 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:04.261717 sshd[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:04.277694 systemd-logind[1869]: New session 13 of user core. Jan 13 21:34:04.291257 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:34:04.547533 sshd[4836]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:04.566722 systemd[1]: sshd@12-172.31.19.228:22-147.75.109.163:57420.service: Deactivated successfully. Jan 13 21:34:04.580152 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:34:04.581798 systemd-logind[1869]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:34:04.597812 systemd[1]: Started sshd@13-172.31.19.228:22-147.75.109.163:57428.service - OpenSSH per-connection server daemon (147.75.109.163:57428). Jan 13 21:34:04.599981 systemd-logind[1869]: Removed session 13. Jan 13 21:34:04.774810 sshd[4850]: Accepted publickey for core from 147.75.109.163 port 57428 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:04.779282 sshd[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:04.799262 systemd-logind[1869]: New session 14 of user core. Jan 13 21:34:04.809543 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:34:05.195122 sshd[4850]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:05.212241 systemd[1]: sshd@13-172.31.19.228:22-147.75.109.163:57428.service: Deactivated successfully. Jan 13 21:34:05.233581 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:34:05.247545 systemd-logind[1869]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:34:05.289141 systemd[1]: Started sshd@14-172.31.19.228:22-147.75.109.163:57440.service - OpenSSH per-connection server daemon (147.75.109.163:57440). Jan 13 21:34:05.293562 systemd-logind[1869]: Removed session 14. Jan 13 21:34:05.474564 sshd[4861]: Accepted publickey for core from 147.75.109.163 port 57440 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:05.475895 sshd[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:05.482890 systemd-logind[1869]: New session 15 of user core. Jan 13 21:34:05.490960 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:34:05.730401 sshd[4861]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:05.742113 systemd[1]: sshd@14-172.31.19.228:22-147.75.109.163:57440.service: Deactivated successfully. Jan 13 21:34:05.746372 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:34:05.748032 systemd-logind[1869]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:34:05.749898 systemd-logind[1869]: Removed session 15. Jan 13 21:34:10.769115 systemd[1]: Started sshd@15-172.31.19.228:22-147.75.109.163:34742.service - OpenSSH per-connection server daemon (147.75.109.163:34742). 
Jan 13 21:34:10.938499 sshd[4875]: Accepted publickey for core from 147.75.109.163 port 34742 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:10.940860 sshd[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:10.948763 systemd-logind[1869]: New session 16 of user core. Jan 13 21:34:10.953757 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:34:11.210251 sshd[4875]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:11.214991 systemd[1]: sshd@15-172.31.19.228:22-147.75.109.163:34742.service: Deactivated successfully. Jan 13 21:34:11.219166 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:34:11.221002 systemd-logind[1869]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:34:11.223233 systemd-logind[1869]: Removed session 16. Jan 13 21:34:16.250830 systemd[1]: Started sshd@16-172.31.19.228:22-147.75.109.163:34758.service - OpenSSH per-connection server daemon (147.75.109.163:34758). Jan 13 21:34:16.433317 sshd[4891]: Accepted publickey for core from 147.75.109.163 port 34758 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:16.433710 sshd[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:16.444415 systemd-logind[1869]: New session 17 of user core. Jan 13 21:34:16.455823 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:34:16.780957 sshd[4891]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:16.785391 systemd[1]: sshd@16-172.31.19.228:22-147.75.109.163:34758.service: Deactivated successfully. Jan 13 21:34:16.790301 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:34:16.792007 systemd-logind[1869]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:34:16.793656 systemd-logind[1869]: Removed session 17. Jan 13 21:34:21.815825 systemd[1]: Started sshd@17-172.31.19.228:22-147.75.109.163:41018.service - OpenSSH per-connection server daemon (147.75.109.163:41018). Jan 13 21:34:21.976939 sshd[4904]: Accepted publickey for core from 147.75.109.163 port 41018 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:21.978831 sshd[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:22.016267 systemd-logind[1869]: New session 18 of user core. Jan 13 21:34:22.022007 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:34:22.290865 sshd[4904]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:22.295237 systemd[1]: sshd@17-172.31.19.228:22-147.75.109.163:41018.service: Deactivated successfully. Jan 13 21:34:22.298078 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:34:22.300547 systemd-logind[1869]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:34:22.302000 systemd-logind[1869]: Removed session 18. Jan 13 21:34:22.328064 systemd[1]: Started sshd@18-172.31.19.228:22-147.75.109.163:41034.service - OpenSSH per-connection server daemon (147.75.109.163:41034). Jan 13 21:34:22.522201 sshd[4917]: Accepted publickey for core from 147.75.109.163 port 41034 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:22.523268 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:22.533652 systemd-logind[1869]: New session 19 of user core. Jan 13 21:34:22.537681 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 21:34:23.267937 sshd[4917]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:23.271836 systemd[1]: sshd@18-172.31.19.228:22-147.75.109.163:41034.service: Deactivated successfully. Jan 13 21:34:23.275309 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:34:23.280516 systemd-logind[1869]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:34:23.284865 systemd-logind[1869]: Removed session 19. Jan 13 21:34:23.311838 systemd[1]: Started sshd@19-172.31.19.228:22-147.75.109.163:41050.service - OpenSSH per-connection server daemon (147.75.109.163:41050). Jan 13 21:34:23.480609 sshd[4928]: Accepted publickey for core from 147.75.109.163 port 41050 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:23.486153 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:23.537369 systemd-logind[1869]: New session 20 of user core. Jan 13 21:34:23.544820 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:34:26.104645 sshd[4928]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:26.116056 systemd[1]: sshd@19-172.31.19.228:22-147.75.109.163:41050.service: Deactivated successfully. Jan 13 21:34:26.121104 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:34:26.124327 systemd-logind[1869]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:34:26.141301 systemd[1]: Started sshd@20-172.31.19.228:22-147.75.109.163:41052.service - OpenSSH per-connection server daemon (147.75.109.163:41052). Jan 13 21:34:26.143075 systemd-logind[1869]: Removed session 20. Jan 13 21:34:26.342640 sshd[4950]: Accepted publickey for core from 147.75.109.163 port 41052 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:26.344308 sshd[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:26.350206 systemd-logind[1869]: New session 21 of user core. Jan 13 21:34:26.355665 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:34:26.901235 sshd[4950]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:26.908416 systemd[1]: sshd@20-172.31.19.228:22-147.75.109.163:41052.service: Deactivated successfully. Jan 13 21:34:26.911590 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:34:26.913548 systemd-logind[1869]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:34:26.915897 systemd-logind[1869]: Removed session 21. Jan 13 21:34:26.940189 systemd[1]: Started sshd@21-172.31.19.228:22-147.75.109.163:41062.service - OpenSSH per-connection server daemon (147.75.109.163:41062). Jan 13 21:34:27.122484 sshd[4961]: Accepted publickey for core from 147.75.109.163 port 41062 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:27.125474 sshd[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:27.131526 systemd-logind[1869]: New session 22 of user core. Jan 13 21:34:27.138718 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:34:27.387330 sshd[4961]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:27.393418 systemd[1]: sshd@21-172.31.19.228:22-147.75.109.163:41062.service: Deactivated successfully. Jan 13 21:34:27.398414 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:34:27.400729 systemd-logind[1869]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:34:27.403587 systemd-logind[1869]: Removed session 22. 
Jan 13 21:34:32.426543 systemd[1]: Started sshd@22-172.31.19.228:22-147.75.109.163:55824.service - OpenSSH per-connection server daemon (147.75.109.163:55824). Jan 13 21:34:32.599423 sshd[4974]: Accepted publickey for core from 147.75.109.163 port 55824 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:32.600743 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:32.609905 systemd-logind[1869]: New session 23 of user core. Jan 13 21:34:32.615706 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:34:32.837162 sshd[4974]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:32.841412 systemd[1]: sshd@22-172.31.19.228:22-147.75.109.163:55824.service: Deactivated successfully. Jan 13 21:34:32.844365 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:34:32.847502 systemd-logind[1869]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:34:32.849035 systemd-logind[1869]: Removed session 23. Jan 13 21:34:37.908000 systemd[1]: Started sshd@23-172.31.19.228:22-147.75.109.163:37504.service - OpenSSH per-connection server daemon (147.75.109.163:37504). Jan 13 21:34:38.076147 sshd[4990]: Accepted publickey for core from 147.75.109.163 port 37504 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:38.077849 sshd[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:38.092938 systemd-logind[1869]: New session 24 of user core. Jan 13 21:34:38.103338 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:34:38.349047 sshd[4990]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:38.353271 systemd-logind[1869]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:34:38.354015 systemd[1]: sshd@23-172.31.19.228:22-147.75.109.163:37504.service: Deactivated successfully. Jan 13 21:34:38.356900 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:34:38.358336 systemd-logind[1869]: Removed session 24. Jan 13 21:34:43.386863 systemd[1]: Started sshd@24-172.31.19.228:22-147.75.109.163:37506.service - OpenSSH per-connection server daemon (147.75.109.163:37506). Jan 13 21:34:43.601243 sshd[5006]: Accepted publickey for core from 147.75.109.163 port 37506 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:43.603072 sshd[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:43.608738 systemd-logind[1869]: New session 25 of user core. Jan 13 21:34:43.615099 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:34:43.826569 sshd[5006]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:43.831835 systemd-logind[1869]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:34:43.832812 systemd[1]: sshd@24-172.31.19.228:22-147.75.109.163:37506.service: Deactivated successfully. Jan 13 21:34:43.836914 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:34:43.842416 systemd-logind[1869]: Removed session 25. Jan 13 21:34:48.873322 systemd[1]: Started sshd@25-172.31.19.228:22-147.75.109.163:58570.service - OpenSSH per-connection server daemon (147.75.109.163:58570). 
Jan 13 21:34:49.048185 sshd[5019]: Accepted publickey for core from 147.75.109.163 port 58570 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:49.050116 sshd[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:49.059175 systemd-logind[1869]: New session 26 of user core. Jan 13 21:34:49.064705 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:34:49.276678 sshd[5019]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:49.285920 systemd[1]: sshd@25-172.31.19.228:22-147.75.109.163:58570.service: Deactivated successfully. Jan 13 21:34:49.289788 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:34:49.295882 systemd-logind[1869]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:34:49.320183 systemd[1]: Started sshd@26-172.31.19.228:22-147.75.109.163:58574.service - OpenSSH per-connection server daemon (147.75.109.163:58574). Jan 13 21:34:49.323608 systemd-logind[1869]: Removed session 26. Jan 13 21:34:49.498525 sshd[5032]: Accepted publickey for core from 147.75.109.163 port 58574 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:49.500169 sshd[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:49.505049 systemd-logind[1869]: New session 27 of user core. Jan 13 21:34:49.510669 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 21:34:51.276652 containerd[1897]: time="2025-01-13T21:34:51.275931448Z" level=info msg="StopContainer for \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\" with timeout 30 (s)" Jan 13 21:34:51.288793 containerd[1897]: time="2025-01-13T21:34:51.288724628Z" level=info msg="Stop container \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\" with signal terminated" Jan 13 21:34:51.296160 containerd[1897]: time="2025-01-13T21:34:51.296062851Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:34:51.302004 containerd[1897]: time="2025-01-13T21:34:51.301833986Z" level=info msg="StopContainer for \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\" with timeout 2 (s)" Jan 13 21:34:51.302500 containerd[1897]: time="2025-01-13T21:34:51.302307592Z" level=info msg="Stop container \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\" with signal terminated" Jan 13 21:34:51.319197 systemd[1]: cri-containerd-767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a.scope: Deactivated successfully. Jan 13 21:34:51.330060 systemd-networkd[1727]: lxc_health: Link DOWN Jan 13 21:34:51.330072 systemd-networkd[1727]: lxc_health: Lost carrier Jan 13 21:34:51.386319 systemd[1]: cri-containerd-d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5.scope: Deactivated successfully. Jan 13 21:34:51.387035 systemd[1]: cri-containerd-d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5.scope: Consumed 8.936s CPU time. Jan 13 21:34:51.441237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a-rootfs.mount: Deactivated successfully. 
Jan 13 21:34:51.451513 containerd[1897]: time="2025-01-13T21:34:51.451164412Z" level=info msg="shim disconnected" id=767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a namespace=k8s.io Jan 13 21:34:51.451513 containerd[1897]: time="2025-01-13T21:34:51.451233528Z" level=warning msg="cleaning up after shim disconnected" id=767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a namespace=k8s.io Jan 13 21:34:51.451513 containerd[1897]: time="2025-01-13T21:34:51.451248955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:51.457102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5-rootfs.mount: Deactivated successfully. Jan 13 21:34:51.467968 containerd[1897]: time="2025-01-13T21:34:51.467502935Z" level=info msg="shim disconnected" id=d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5 namespace=k8s.io Jan 13 21:34:51.467968 containerd[1897]: time="2025-01-13T21:34:51.467773848Z" level=warning msg="cleaning up after shim disconnected" id=d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5 namespace=k8s.io Jan 13 21:34:51.467968 containerd[1897]: time="2025-01-13T21:34:51.467796424Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:51.494711 containerd[1897]: time="2025-01-13T21:34:51.494414269Z" level=info msg="StopContainer for \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\" returns successfully" Jan 13 21:34:51.496305 containerd[1897]: time="2025-01-13T21:34:51.496256777Z" level=info msg="StopPodSandbox for \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\"" Jan 13 21:34:51.496814 containerd[1897]: time="2025-01-13T21:34:51.496309139Z" level=info msg="Container to stop \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:51.502290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c-shm.mount: Deactivated successfully. Jan 13 21:34:51.522170 systemd[1]: cri-containerd-d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c.scope: Deactivated successfully. 
Jan 13 21:34:51.524375 containerd[1897]: time="2025-01-13T21:34:51.524254441Z" level=info msg="StopContainer for \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\" returns successfully" Jan 13 21:34:51.525659 containerd[1897]: time="2025-01-13T21:34:51.525376921Z" level=info msg="StopPodSandbox for \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\"" Jan 13 21:34:51.525659 containerd[1897]: time="2025-01-13T21:34:51.525410762Z" level=info msg="Container to stop \"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:51.525659 containerd[1897]: time="2025-01-13T21:34:51.525423304Z" level=info msg="Container to stop \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:51.525659 containerd[1897]: time="2025-01-13T21:34:51.525457087Z" level=info msg="Container to stop \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:51.525659 containerd[1897]: time="2025-01-13T21:34:51.525473705Z" level=info msg="Container to stop \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:51.525659 containerd[1897]: time="2025-01-13T21:34:51.525488074Z" level=info msg="Container to stop \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:34:51.537622 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5-shm.mount: Deactivated successfully. Jan 13 21:34:51.551383 systemd[1]: cri-containerd-0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5.scope: Deactivated successfully. 
Jan 13 21:34:51.582481 containerd[1897]: time="2025-01-13T21:34:51.581722485Z" level=info msg="shim disconnected" id=d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c namespace=k8s.io Jan 13 21:34:51.582481 containerd[1897]: time="2025-01-13T21:34:51.581891885Z" level=warning msg="cleaning up after shim disconnected" id=d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c namespace=k8s.io Jan 13 21:34:51.582481 containerd[1897]: time="2025-01-13T21:34:51.581905561Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:51.605900 containerd[1897]: time="2025-01-13T21:34:51.605834421Z" level=info msg="shim disconnected" id=0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5 namespace=k8s.io Jan 13 21:34:51.605900 containerd[1897]: time="2025-01-13T21:34:51.605889419Z" level=warning msg="cleaning up after shim disconnected" id=0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5 namespace=k8s.io Jan 13 21:34:51.605900 containerd[1897]: time="2025-01-13T21:34:51.605900857Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:51.634296 containerd[1897]: time="2025-01-13T21:34:51.627423244Z" level=info msg="TearDown network for sandbox \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\" successfully" Jan 13 21:34:51.634296 containerd[1897]: time="2025-01-13T21:34:51.627521054Z" level=info msg="StopPodSandbox for \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\" returns successfully" Jan 13 21:34:51.705185 containerd[1897]: time="2025-01-13T21:34:51.705071218Z" level=info msg="TearDown network for sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" successfully" Jan 13 21:34:51.705398 containerd[1897]: time="2025-01-13T21:34:51.705372523Z" level=info msg="StopPodSandbox for \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" returns successfully" Jan 13 21:34:51.728234 kubelet[3433]: I0113 21:34:51.727835 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad619d14-7172-42fc-baf3-aaaf4a095687-cilium-config-path\") pod \"ad619d14-7172-42fc-baf3-aaaf4a095687\" (UID: \"ad619d14-7172-42fc-baf3-aaaf4a095687\") " Jan 13 21:34:51.728234 kubelet[3433]: I0113 21:34:51.727923 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc7qb\" (UniqueName: \"kubernetes.io/projected/ad619d14-7172-42fc-baf3-aaaf4a095687-kube-api-access-cc7qb\") pod \"ad619d14-7172-42fc-baf3-aaaf4a095687\" (UID: \"ad619d14-7172-42fc-baf3-aaaf4a095687\") " Jan 13 21:34:51.731591 kubelet[3433]: I0113 21:34:51.727718 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad619d14-7172-42fc-baf3-aaaf4a095687-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad619d14-7172-42fc-baf3-aaaf4a095687" (UID: "ad619d14-7172-42fc-baf3-aaaf4a095687"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:34:51.767680 kubelet[3433]: I0113 21:34:51.767479 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad619d14-7172-42fc-baf3-aaaf4a095687-kube-api-access-cc7qb" (OuterVolumeSpecName: "kube-api-access-cc7qb") pod "ad619d14-7172-42fc-baf3-aaaf4a095687" (UID: "ad619d14-7172-42fc-baf3-aaaf4a095687"). InnerVolumeSpecName "kube-api-access-cc7qb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:34:51.829141 kubelet[3433]: I0113 21:34:51.828540 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-lib-modules\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.829141 kubelet[3433]: I0113 21:34:51.828610 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-clustermesh-secrets\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.829141 kubelet[3433]: I0113 21:34:51.828638 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-xtables-lock\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.829141 kubelet[3433]: I0113 21:34:51.828665 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-hostproc\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.829141 kubelet[3433]: I0113 21:34:51.828690 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-host-proc-sys-net\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.829141 kubelet[3433]: I0113 21:34:51.828714 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-run\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.829527 kubelet[3433]: I0113 21:34:51.828744 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lzdt\" (UniqueName: \"kubernetes.io/projected/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-kube-api-access-9lzdt\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.829527 kubelet[3433]: I0113 21:34:51.828936 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-hostproc" (OuterVolumeSpecName: "hostproc") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.829527 kubelet[3433]: I0113 21:34:51.829002 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.835350 kubelet[3433]: I0113 21:34:51.834044 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.835350 kubelet[3433]: I0113 21:34:51.834122 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.835350 kubelet[3433]: I0113 21:34:51.835173 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.835350 kubelet[3433]: I0113 21:34:51.835243 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-config-path\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.835350 kubelet[3433]: I0113 21:34:51.835309 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-host-proc-sys-kernel\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.835960 kubelet[3433]: I0113 21:34:51.835346 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-etc-cni-netd\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.835960 kubelet[3433]: I0113 21:34:51.835389 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-bpf-maps\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.835960 kubelet[3433]: I0113 21:34:51.835412 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-cgroup\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.835960 kubelet[3433]: I0113 21:34:51.835508 3433 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-hubble-tls\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.835960 kubelet[3433]: I0113 21:34:51.835536 3433 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cni-path\") pod \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\" (UID: \"b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e\") " Jan 13 21:34:51.835960 kubelet[3433]: I0113 21:34:51.835658 3433 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-hostproc\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.835960 kubelet[3433]: I0113 21:34:51.835678 3433 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-host-proc-sys-net\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.839033 kubelet[3433]: I0113 21:34:51.835708 3433 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad619d14-7172-42fc-baf3-aaaf4a095687-cilium-config-path\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.839033 kubelet[3433]: I0113 21:34:51.835802 3433 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-run\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.839033 kubelet[3433]: I0113 21:34:51.835819 3433 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cc7qb\" (UniqueName: \"kubernetes.io/projected/ad619d14-7172-42fc-baf3-aaaf4a095687-kube-api-access-cc7qb\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.839033 kubelet[3433]: I0113 21:34:51.835834 3433 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-xtables-lock\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.839033 kubelet[3433]: I0113 21:34:51.835849 3433 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-lib-modules\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.839033 kubelet[3433]: I0113 21:34:51.835891 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cni-path" (OuterVolumeSpecName: "cni-path") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.839033 kubelet[3433]: I0113 21:34:51.836852 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.841226 kubelet[3433]: I0113 21:34:51.836909 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.841226 kubelet[3433]: I0113 21:34:51.836932 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.841226 kubelet[3433]: I0113 21:34:51.836952 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:34:51.847907 kubelet[3433]: I0113 21:34:51.847861 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-kube-api-access-9lzdt" (OuterVolumeSpecName: "kube-api-access-9lzdt") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "kube-api-access-9lzdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:34:51.849994 kubelet[3433]: I0113 21:34:51.849944 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:34:51.852068 kubelet[3433]: I0113 21:34:51.852005 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:34:51.853017 kubelet[3433]: I0113 21:34:51.852981 3433 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" (UID: "b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:34:51.938463 kubelet[3433]: I0113 21:34:51.936845 3433 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-etc-cni-netd\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.938463 kubelet[3433]: I0113 21:34:51.936893 3433 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-hubble-tls\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.938463 kubelet[3433]: I0113 21:34:51.936909 3433 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cni-path\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.938463 kubelet[3433]: I0113 21:34:51.936923 3433 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-bpf-maps\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.938463 kubelet[3433]: I0113 21:34:51.936938 3433 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-cgroup\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.938463 kubelet[3433]: I0113 21:34:51.936952 3433 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-clustermesh-secrets\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.938463 kubelet[3433]: I0113 21:34:51.936968 3433 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-host-proc-sys-kernel\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.938463 kubelet[3433]: I0113 21:34:51.936982 3433 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9lzdt\" (UniqueName: \"kubernetes.io/projected/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-kube-api-access-9lzdt\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:51.938915 kubelet[3433]: I0113 21:34:51.936997 3433 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e-cilium-config-path\") on node \"ip-172-31-19-228\" DevicePath \"\"" Jan 13 21:34:52.235012 systemd[1]: Removed slice kubepods-burstable-podb6b5086c_3ec2_44fe_a3e7_b06e0a4ade5e.slice - libcontainer container kubepods-burstable-podb6b5086c_3ec2_44fe_a3e7_b06e0a4ade5e.slice. Jan 13 21:34:52.235226 systemd[1]: kubepods-burstable-podb6b5086c_3ec2_44fe_a3e7_b06e0a4ade5e.slice: Consumed 9.039s CPU time. Jan 13 21:34:52.256392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c-rootfs.mount: Deactivated successfully. Jan 13 21:34:52.256632 systemd[1]: var-lib-kubelet-pods-ad619d14\x2d7172\x2d42fc\x2dbaf3\x2daaaf4a095687-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcc7qb.mount: Deactivated successfully. Jan 13 21:34:52.256896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5-rootfs.mount: Deactivated successfully. 
Jan 13 21:34:52.261079 kubelet[3433]: I0113 21:34:52.260873 3433 scope.go:117] "RemoveContainer" containerID="d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5" Jan 13 21:34:52.256994 systemd[1]: var-lib-kubelet-pods-b6b5086c\x2d3ec2\x2d44fe\x2da3e7\x2db06e0a4ade5e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9lzdt.mount: Deactivated successfully. Jan 13 21:34:52.257204 systemd[1]: var-lib-kubelet-pods-b6b5086c\x2d3ec2\x2d44fe\x2da3e7\x2db06e0a4ade5e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:34:52.257312 systemd[1]: var-lib-kubelet-pods-b6b5086c\x2d3ec2\x2d44fe\x2da3e7\x2db06e0a4ade5e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:34:52.272882 containerd[1897]: time="2025-01-13T21:34:52.266003350Z" level=info msg="RemoveContainer for \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\"" Jan 13 21:34:52.290146 containerd[1897]: time="2025-01-13T21:34:52.289816305Z" level=info msg="RemoveContainer for \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\" returns successfully" Jan 13 21:34:52.292480 kubelet[3433]: I0113 21:34:52.292425 3433 scope.go:117] "RemoveContainer" containerID="05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f" Jan 13 21:34:52.294818 systemd[1]: Removed slice kubepods-besteffort-podad619d14_7172_42fc_baf3_aaaf4a095687.slice - libcontainer container kubepods-besteffort-podad619d14_7172_42fc_baf3_aaaf4a095687.slice. Jan 13 21:34:52.298650 containerd[1897]: time="2025-01-13T21:34:52.298347142Z" level=info msg="RemoveContainer for \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\"" Jan 13 21:34:52.322946 containerd[1897]: time="2025-01-13T21:34:52.322894662Z" level=info msg="RemoveContainer for \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\" returns successfully" Jan 13 21:34:52.324854 kubelet[3433]: I0113 21:34:52.324684 3433 scope.go:117] "RemoveContainer" containerID="bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb" Jan 13 21:34:52.334267 containerd[1897]: time="2025-01-13T21:34:52.334173851Z" level=info msg="RemoveContainer for \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\"" Jan 13 21:34:52.341590 containerd[1897]: time="2025-01-13T21:34:52.341294863Z" level=info msg="RemoveContainer for \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\" returns successfully" Jan 13 21:34:52.342076 kubelet[3433]: I0113 21:34:52.341988 3433 scope.go:117] "RemoveContainer" containerID="eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366" Jan 13 21:34:52.354124 containerd[1897]: time="2025-01-13T21:34:52.353747968Z" level=info msg="RemoveContainer for \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\"" Jan 13 21:34:52.361119 containerd[1897]: time="2025-01-13T21:34:52.359824902Z" level=info msg="RemoveContainer for \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\" returns successfully" Jan 13 21:34:52.361461 kubelet[3433]: I0113 21:34:52.360635 3433 scope.go:117] "RemoveContainer" containerID="fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc" Jan 13 21:34:52.362874 containerd[1897]: time="2025-01-13T21:34:52.362838832Z" level=info msg="RemoveContainer for \"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\"" Jan 13 21:34:52.368343 containerd[1897]: time="2025-01-13T21:34:52.368300821Z" level=info msg="RemoveContainer for 
\"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\" returns successfully" Jan 13 21:34:52.369344 kubelet[3433]: I0113 21:34:52.369284 3433 scope.go:117] "RemoveContainer" containerID="d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5" Jan 13 21:34:52.391663 containerd[1897]: time="2025-01-13T21:34:52.377435696Z" level=error msg="ContainerStatus for \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\": not found" Jan 13 21:34:52.398419 kubelet[3433]: E0113 21:34:52.397742 3433 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\": not found" containerID="d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5" Jan 13 21:34:52.414935 kubelet[3433]: I0113 21:34:52.414874 3433 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5"} err="failed to get container status \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d635c3628d72eac31f34767bf38432f6bdf3bc9b19cd23cb3f8e0714ccd203d5\": not found" Jan 13 21:34:52.414935 kubelet[3433]: I0113 21:34:52.414934 3433 scope.go:117] "RemoveContainer" containerID="05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f" Jan 13 21:34:52.415339 containerd[1897]: time="2025-01-13T21:34:52.415296110Z" level=error msg="ContainerStatus for \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\": not found" Jan 13 21:34:52.415522 kubelet[3433]: E0113 21:34:52.415507 3433 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\": not found" containerID="05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f" Jan 13 21:34:52.415580 kubelet[3433]: I0113 21:34:52.415552 3433 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f"} err="failed to get container status \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\": rpc error: code = NotFound desc = an error occurred when try to find container \"05533f6c0327118a1dba2b88ab0b763727e1eeca1e7e4aa48dd51429226aa44f\": not found" Jan 13 21:34:52.415580 kubelet[3433]: I0113 21:34:52.415572 3433 scope.go:117] "RemoveContainer" containerID="bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb" Jan 13 21:34:52.415837 containerd[1897]: time="2025-01-13T21:34:52.415793951Z" level=error msg="ContainerStatus for \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\": not found" Jan 13 21:34:52.415957 kubelet[3433]: E0113 21:34:52.415937 3433 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\": not found" containerID="bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb" Jan 13 21:34:52.416259 kubelet[3433]: I0113 21:34:52.415973 3433 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb"} err="failed to get container status \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfad6f6fe6a74cb9447a0bf17a937c59b0e9dd589ff7263820cf40c756a30cfb\": not found" Jan 13 21:34:52.416259 kubelet[3433]: I0113 21:34:52.415987 3433 scope.go:117] "RemoveContainer" containerID="eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366" Jan 13 21:34:52.416375 containerd[1897]: time="2025-01-13T21:34:52.416343197Z" level=error msg="ContainerStatus for \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\": not found" Jan 13 21:34:52.416542 kubelet[3433]: E0113 21:34:52.416503 3433 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\": not found" containerID="eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366" Jan 13 21:34:52.416542 kubelet[3433]: I0113 21:34:52.416538 3433 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366"} err="failed to get container status \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\": rpc error: code = NotFound desc = an error occurred when try to find container \"eba10f8069757374dadf15d157edf2ff592663d8771f7b6bac9df87efc901366\": not found" Jan 13 21:34:52.416845 kubelet[3433]: I0113 21:34:52.416552 3433 scope.go:117] "RemoveContainer" containerID="fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc" Jan 13 21:34:52.416924 containerd[1897]: time="2025-01-13T21:34:52.416850207Z" level=error msg="ContainerStatus for \"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\": not found" Jan 13 21:34:52.417023 kubelet[3433]: E0113 21:34:52.417001 3433 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\": not found" containerID="fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc" Jan 13 21:34:52.417148 kubelet[3433]: I0113 21:34:52.417038 3433 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc"} err="failed to get container status \"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"fbcd8401d0720229f590f8134ae7b1f5c81b1a20dbcaf5dfddad8accee0d3ecc\": not found" Jan 13 21:34:52.417148 kubelet[3433]: I0113 21:34:52.417052 3433 scope.go:117] "RemoveContainer" containerID="767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a" Jan 13 21:34:52.418241 containerd[1897]: time="2025-01-13T21:34:52.418215168Z" level=info msg="RemoveContainer for \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\"" Jan 13 21:34:52.425015 containerd[1897]: time="2025-01-13T21:34:52.424957063Z" level=info msg="RemoveContainer for \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\" returns successfully" Jan 13 21:34:52.425690 kubelet[3433]: I0113 21:34:52.425657 3433 scope.go:117] "RemoveContainer" containerID="767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a" Jan 13 21:34:52.426701 containerd[1897]: time="2025-01-13T21:34:52.426661155Z" level=error msg="ContainerStatus for \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\": not found" Jan 13 21:34:52.427128 kubelet[3433]: E0113 21:34:52.427100 3433 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\": not found" containerID="767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a" Jan 13 21:34:52.427267 kubelet[3433]: I0113 21:34:52.427154 3433 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a"} err="failed to get container status \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"767e0e167827c321ecf8c6335aa38835fc08a95dc9766d23591eb8a5455dff9a\": not found" Jan 13 21:34:53.102729 sshd[5032]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:53.108532 systemd[1]: sshd@26-172.31.19.228:22-147.75.109.163:58574.service: Deactivated successfully. Jan 13 21:34:53.122218 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 21:34:53.124741 systemd-logind[1869]: Session 27 logged out. Waiting for processes to exit. Jan 13 21:34:53.143839 systemd[1]: Started sshd@27-172.31.19.228:22-147.75.109.163:58590.service - OpenSSH per-connection server daemon (147.75.109.163:58590). Jan 13 21:34:53.145690 systemd-logind[1869]: Removed session 27. Jan 13 21:34:53.330847 sshd[5189]: Accepted publickey for core from 147.75.109.163 port 58590 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:53.332553 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:53.337572 systemd-logind[1869]: New session 28 of user core. Jan 13 21:34:53.347782 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 13 21:34:53.644897 kubelet[3433]: I0113 21:34:53.644790 3433 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ad619d14-7172-42fc-baf3-aaaf4a095687" path="/var/lib/kubelet/pods/ad619d14-7172-42fc-baf3-aaaf4a095687/volumes" Jan 13 21:34:53.647324 kubelet[3433]: I0113 21:34:53.646369 3433 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" path="/var/lib/kubelet/pods/b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e/volumes" Jan 13 21:34:54.118351 ntpd[1861]: Deleting interface #11 lxc_health, fe80::a4d8:8dff:fe11:9095%8#123, interface stats: received=0, sent=0, dropped=0, active_time=68 secs Jan 13 21:34:54.119032 ntpd[1861]: 13 Jan 21:34:54 ntpd[1861]: Deleting interface #11 lxc_health, fe80::a4d8:8dff:fe11:9095%8#123, interface stats: received=0, sent=0, dropped=0, active_time=68 secs Jan 13 21:34:54.198249 sshd[5189]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:54.207861 systemd[1]: sshd@27-172.31.19.228:22-147.75.109.163:58590.service: Deactivated successfully. Jan 13 21:34:54.222115 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 21:34:54.224959 systemd-logind[1869]: Session 28 logged out. Waiting for processes to exit. Jan 13 21:34:54.250223 kubelet[3433]: I0113 21:34:54.246416 3433 topology_manager.go:215] "Topology Admit Handler" podUID="f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f" podNamespace="kube-system" podName="cilium-8xbm5" Jan 13 21:34:54.258388 kubelet[3433]: E0113 21:34:54.257514 3433 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" containerName="apply-sysctl-overwrites" Jan 13 21:34:54.258388 kubelet[3433]: E0113 21:34:54.257552 3433 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad619d14-7172-42fc-baf3-aaaf4a095687" containerName="cilium-operator" Jan 13 21:34:54.258388 kubelet[3433]: E0113 21:34:54.257564 3433 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" containerName="clean-cilium-state" Jan 13 21:34:54.258388 kubelet[3433]: E0113 21:34:54.257575 3433 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" containerName="cilium-agent" Jan 13 21:34:54.258388 kubelet[3433]: E0113 21:34:54.257587 3433 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" containerName="mount-cgroup" Jan 13 21:34:54.258388 kubelet[3433]: E0113 21:34:54.257599 3433 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" containerName="mount-bpf-fs" Jan 13 21:34:54.258388 kubelet[3433]: I0113 21:34:54.257769 3433 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b5086c-3ec2-44fe-a3e7-b06e0a4ade5e" containerName="cilium-agent" Jan 13 21:34:54.258388 kubelet[3433]: I0113 21:34:54.257783 3433 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad619d14-7172-42fc-baf3-aaaf4a095687" containerName="cilium-operator" Jan 13 21:34:54.267862 systemd[1]: Started sshd@28-172.31.19.228:22-147.75.109.163:58602.service - OpenSSH per-connection server daemon (147.75.109.163:58602). Jan 13 21:34:54.271494 systemd-logind[1869]: Removed session 28. Jan 13 21:34:54.341683 systemd[1]: Created slice kubepods-burstable-podf66dfc4f_0432_4b7c_b9ae_0dfe4a7a482f.slice - libcontainer container kubepods-burstable-podf66dfc4f_0432_4b7c_b9ae_0dfe4a7a482f.slice. 
Jan 13 21:34:54.386639 kubelet[3433]: I0113 21:34:54.383593 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-hostproc\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.388502 kubelet[3433]: I0113 21:34:54.387341 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-xtables-lock\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.389031 kubelet[3433]: I0113 21:34:54.389005 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-bpf-maps\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.389229 kubelet[3433]: I0113 21:34:54.389208 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-host-proc-sys-kernel\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.389485 kubelet[3433]: I0113 21:34:54.389413 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-cilium-run\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.389636 kubelet[3433]: I0113 21:34:54.389573 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-lib-modules\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.389851 kubelet[3433]: I0113 21:34:54.389733 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-clustermesh-secrets\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.389851 kubelet[3433]: I0113 21:34:54.389816 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvrv\" (UniqueName: \"kubernetes.io/projected/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-kube-api-access-9fvrv\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.390346 kubelet[3433]: I0113 21:34:54.390142 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-cilium-cgroup\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.390346 kubelet[3433]: I0113 21:34:54.390233 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-cilium-ipsec-secrets\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.390346 kubelet[3433]: I0113 21:34:54.390304 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-cni-path\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.391833 kubelet[3433]: I0113 21:34:54.391638 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-hubble-tls\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.391833 kubelet[3433]: I0113 21:34:54.391709 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-cilium-config-path\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.391833 kubelet[3433]: I0113 21:34:54.391786 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-host-proc-sys-net\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.392206 kubelet[3433]: I0113 21:34:54.392091 3433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f-etc-cni-netd\") pod \"cilium-8xbm5\" (UID: \"f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f\") " pod="kube-system/cilium-8xbm5"
Jan 13 21:34:54.469011 sshd[5201]: Accepted publickey for core from 147.75.109.163 port 58602 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:34:54.471731 sshd[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:34:54.479289 systemd-logind[1869]: New session 29 of user core.
Jan 13 21:34:54.486715 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 13 21:34:54.604544 sshd[5201]: pam_unix(sshd:session): session closed for user core
Jan 13 21:34:54.613829 systemd-logind[1869]: Session 29 logged out. Waiting for processes to exit.
Jan 13 21:34:54.615115 systemd[1]: sshd@28-172.31.19.228:22-147.75.109.163:58602.service: Deactivated successfully.
Jan 13 21:34:54.619119 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 21:34:54.620859 systemd-logind[1869]: Removed session 29.
Jan 13 21:34:54.642825 systemd[1]: Started sshd@29-172.31.19.228:22-147.75.109.163:58612.service - OpenSSH per-connection server daemon (147.75.109.163:58612).
Jan 13 21:34:54.674303 containerd[1897]: time="2025-01-13T21:34:54.674187202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8xbm5,Uid:f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f,Namespace:kube-system,Attempt:0,}"
Jan 13 21:34:54.721024 containerd[1897]: time="2025-01-13T21:34:54.720645438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:34:54.721024 containerd[1897]: time="2025-01-13T21:34:54.720938014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:34:54.722256 containerd[1897]: time="2025-01-13T21:34:54.721716661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:34:54.722256 containerd[1897]: time="2025-01-13T21:34:54.722058007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:34:54.745724 systemd[1]: Started cri-containerd-e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71.scope - libcontainer container e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71.
Jan 13 21:34:54.790100 containerd[1897]: time="2025-01-13T21:34:54.790058068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8xbm5,Uid:f66dfc4f-0432-4b7c-b9ae-0dfe4a7a482f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\""
Jan 13 21:34:54.793665 containerd[1897]: time="2025-01-13T21:34:54.793615938Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:34:54.820645 containerd[1897]: time="2025-01-13T21:34:54.814048203Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4c7a386911e0d8cbc0c3b45fc76d51f6000d5fba4c76efc20af49c972f538c3c\""
Jan 13 21:34:54.820974 kubelet[3433]: E0113 21:34:54.820536 3433 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:34:54.823660 containerd[1897]: time="2025-01-13T21:34:54.823598190Z" level=info msg="StartContainer for \"4c7a386911e0d8cbc0c3b45fc76d51f6000d5fba4c76efc20af49c972f538c3c\""
Jan 13 21:34:54.824828 sshd[5213]: Accepted publickey for core from 147.75.109.163 port 58612 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc
Jan 13 21:34:54.830006 sshd[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:34:54.839067 systemd-logind[1869]: New session 30 of user core.
Jan 13 21:34:54.846621 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 13 21:34:54.876899 systemd[1]: Started cri-containerd-4c7a386911e0d8cbc0c3b45fc76d51f6000d5fba4c76efc20af49c972f538c3c.scope - libcontainer container 4c7a386911e0d8cbc0c3b45fc76d51f6000d5fba4c76efc20af49c972f538c3c.
Jan 13 21:34:54.915000 containerd[1897]: time="2025-01-13T21:34:54.913511212Z" level=info msg="StartContainer for \"4c7a386911e0d8cbc0c3b45fc76d51f6000d5fba4c76efc20af49c972f538c3c\" returns successfully"
Jan 13 21:34:54.979662 systemd[1]: cri-containerd-4c7a386911e0d8cbc0c3b45fc76d51f6000d5fba4c76efc20af49c972f538c3c.scope: Deactivated successfully.
Jan 13 21:34:55.154633 containerd[1897]: time="2025-01-13T21:34:55.154546413Z" level=info msg="shim disconnected" id=4c7a386911e0d8cbc0c3b45fc76d51f6000d5fba4c76efc20af49c972f538c3c namespace=k8s.io
Jan 13 21:34:55.154633 containerd[1897]: time="2025-01-13T21:34:55.154622160Z" level=warning msg="cleaning up after shim disconnected" id=4c7a386911e0d8cbc0c3b45fc76d51f6000d5fba4c76efc20af49c972f538c3c namespace=k8s.io
Jan 13 21:34:55.154633 containerd[1897]: time="2025-01-13T21:34:55.154634578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:34:55.172684 containerd[1897]: time="2025-01-13T21:34:55.172489796Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:34:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:34:55.307852 containerd[1897]: time="2025-01-13T21:34:55.299906079Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:34:55.353814 containerd[1897]: time="2025-01-13T21:34:55.353754984Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d67e14496cb3f1ae86be18bb1e9847eec38ef514fae0f869e17f20e1d8000b4\""
Jan 13 21:34:55.355485 containerd[1897]: time="2025-01-13T21:34:55.354762622Z" level=info msg="StartContainer for \"1d67e14496cb3f1ae86be18bb1e9847eec38ef514fae0f869e17f20e1d8000b4\""
Jan 13 21:34:55.398674 systemd[1]: Started cri-containerd-1d67e14496cb3f1ae86be18bb1e9847eec38ef514fae0f869e17f20e1d8000b4.scope - libcontainer container 1d67e14496cb3f1ae86be18bb1e9847eec38ef514fae0f869e17f20e1d8000b4.
Jan 13 21:34:55.518103 containerd[1897]: time="2025-01-13T21:34:55.517229827Z" level=info msg="StartContainer for \"1d67e14496cb3f1ae86be18bb1e9847eec38ef514fae0f869e17f20e1d8000b4\" returns successfully"
Jan 13 21:34:55.543986 systemd[1]: cri-containerd-1d67e14496cb3f1ae86be18bb1e9847eec38ef514fae0f869e17f20e1d8000b4.scope: Deactivated successfully.
Jan 13 21:34:55.585371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d67e14496cb3f1ae86be18bb1e9847eec38ef514fae0f869e17f20e1d8000b4-rootfs.mount: Deactivated successfully.
Jan 13 21:34:55.597897 containerd[1897]: time="2025-01-13T21:34:55.597837715Z" level=info msg="shim disconnected" id=1d67e14496cb3f1ae86be18bb1e9847eec38ef514fae0f869e17f20e1d8000b4 namespace=k8s.io
Jan 13 21:34:55.597897 containerd[1897]: time="2025-01-13T21:34:55.597893818Z" level=warning msg="cleaning up after shim disconnected" id=1d67e14496cb3f1ae86be18bb1e9847eec38ef514fae0f869e17f20e1d8000b4 namespace=k8s.io
Jan 13 21:34:55.597897 containerd[1897]: time="2025-01-13T21:34:55.597903878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:34:56.291177 containerd[1897]: time="2025-01-13T21:34:56.290614595Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:34:56.332426 containerd[1897]: time="2025-01-13T21:34:56.332379054Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c37f77ad539781288cececf90d2e975b273e9559e0d663b7d2cb109854ed7e6\""
Jan 13 21:34:56.337585 containerd[1897]: time="2025-01-13T21:34:56.336037398Z" level=info msg="StartContainer for \"1c37f77ad539781288cececf90d2e975b273e9559e0d663b7d2cb109854ed7e6\""
Jan 13 21:34:56.379681 systemd[1]: Started cri-containerd-1c37f77ad539781288cececf90d2e975b273e9559e0d663b7d2cb109854ed7e6.scope - libcontainer container 1c37f77ad539781288cececf90d2e975b273e9559e0d663b7d2cb109854ed7e6.
Jan 13 21:34:56.422218 containerd[1897]: time="2025-01-13T21:34:56.422165683Z" level=info msg="StartContainer for \"1c37f77ad539781288cececf90d2e975b273e9559e0d663b7d2cb109854ed7e6\" returns successfully"
Jan 13 21:34:56.434309 systemd[1]: cri-containerd-1c37f77ad539781288cececf90d2e975b273e9559e0d663b7d2cb109854ed7e6.scope: Deactivated successfully.
Jan 13 21:34:56.479514 containerd[1897]: time="2025-01-13T21:34:56.479209945Z" level=info msg="shim disconnected" id=1c37f77ad539781288cececf90d2e975b273e9559e0d663b7d2cb109854ed7e6 namespace=k8s.io
Jan 13 21:34:56.479514 containerd[1897]: time="2025-01-13T21:34:56.479273619Z" level=warning msg="cleaning up after shim disconnected" id=1c37f77ad539781288cececf90d2e975b273e9559e0d663b7d2cb109854ed7e6 namespace=k8s.io
Jan 13 21:34:56.479514 containerd[1897]: time="2025-01-13T21:34:56.479287056Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:34:56.511008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c37f77ad539781288cececf90d2e975b273e9559e0d663b7d2cb109854ed7e6-rootfs.mount: Deactivated successfully.
Jan 13 21:34:57.299119 containerd[1897]: time="2025-01-13T21:34:57.299035696Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:34:57.329125 containerd[1897]: time="2025-01-13T21:34:57.329012465Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"905cf615779480324732e87ea836845934077680d5b7fe863098dc40a931fcbd\""
Jan 13 21:34:57.333859 containerd[1897]: time="2025-01-13T21:34:57.330307157Z" level=info msg="StartContainer for \"905cf615779480324732e87ea836845934077680d5b7fe863098dc40a931fcbd\""
Jan 13 21:34:57.419884 systemd[1]: Started cri-containerd-905cf615779480324732e87ea836845934077680d5b7fe863098dc40a931fcbd.scope - libcontainer container 905cf615779480324732e87ea836845934077680d5b7fe863098dc40a931fcbd.
Jan 13 21:34:57.464670 systemd[1]: cri-containerd-905cf615779480324732e87ea836845934077680d5b7fe863098dc40a931fcbd.scope: Deactivated successfully.
Jan 13 21:34:57.467121 containerd[1897]: time="2025-01-13T21:34:57.466837200Z" level=info msg="StartContainer for \"905cf615779480324732e87ea836845934077680d5b7fe863098dc40a931fcbd\" returns successfully"
Jan 13 21:34:57.505728 containerd[1897]: time="2025-01-13T21:34:57.505660595Z" level=info msg="shim disconnected" id=905cf615779480324732e87ea836845934077680d5b7fe863098dc40a931fcbd namespace=k8s.io
Jan 13 21:34:57.505728 containerd[1897]: time="2025-01-13T21:34:57.505721638Z" level=warning msg="cleaning up after shim disconnected" id=905cf615779480324732e87ea836845934077680d5b7fe863098dc40a931fcbd namespace=k8s.io
Jan 13 21:34:57.505728 containerd[1897]: time="2025-01-13T21:34:57.505732738Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:34:57.512768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-905cf615779480324732e87ea836845934077680d5b7fe863098dc40a931fcbd-rootfs.mount: Deactivated successfully.
Jan 13 21:34:58.313065 containerd[1897]: time="2025-01-13T21:34:58.312554188Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:34:58.348294 containerd[1897]: time="2025-01-13T21:34:58.348245793Z" level=info msg="CreateContainer within sandbox \"e7db2aec7bb5eab918ab14fe0dbf7f7b880e0d1305281afccf95b5330cea0e71\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8919377eb83ce990a10cd8c287027da19ee6fd06b0b1b64ef666a1cdccf32b02\""
Jan 13 21:34:58.349258 containerd[1897]: time="2025-01-13T21:34:58.349208720Z" level=info msg="StartContainer for \"8919377eb83ce990a10cd8c287027da19ee6fd06b0b1b64ef666a1cdccf32b02\""
Jan 13 21:34:58.408735 systemd[1]: Started cri-containerd-8919377eb83ce990a10cd8c287027da19ee6fd06b0b1b64ef666a1cdccf32b02.scope - libcontainer container 8919377eb83ce990a10cd8c287027da19ee6fd06b0b1b64ef666a1cdccf32b02.
Jan 13 21:34:58.454526 containerd[1897]: time="2025-01-13T21:34:58.454475352Z" level=info msg="StartContainer for \"8919377eb83ce990a10cd8c287027da19ee6fd06b0b1b64ef666a1cdccf32b02\" returns successfully"
Jan 13 21:34:58.510751 systemd[1]: run-containerd-runc-k8s.io-8919377eb83ce990a10cd8c287027da19ee6fd06b0b1b64ef666a1cdccf32b02-runc.R67K86.mount: Deactivated successfully.
Jan 13 21:34:59.469523 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 21:34:59.696086 containerd[1897]: time="2025-01-13T21:34:59.692780530Z" level=info msg="StopPodSandbox for \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\""
Jan 13 21:34:59.703912 containerd[1897]: time="2025-01-13T21:34:59.703786361Z" level=info msg="TearDown network for sandbox \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\" successfully"
Jan 13 21:34:59.703912 containerd[1897]: time="2025-01-13T21:34:59.703828913Z" level=info msg="StopPodSandbox for \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\" returns successfully"
Jan 13 21:34:59.706493 containerd[1897]: time="2025-01-13T21:34:59.706365754Z" level=info msg="RemovePodSandbox for \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\""
Jan 13 21:34:59.717308 containerd[1897]: time="2025-01-13T21:34:59.717045232Z" level=info msg="Forcibly stopping sandbox \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\""
Jan 13 21:34:59.717308 containerd[1897]: time="2025-01-13T21:34:59.717242300Z" level=info msg="TearDown network for sandbox \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\" successfully"
Jan 13 21:34:59.724822 containerd[1897]: time="2025-01-13T21:34:59.723164451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:34:59.724822 containerd[1897]: time="2025-01-13T21:34:59.723246597Z" level=info msg="RemovePodSandbox \"d702764f1e7037732886eb290b52032b15fe7590470c95a864ac969c5566415c\" returns successfully"
Jan 13 21:34:59.725308 containerd[1897]: time="2025-01-13T21:34:59.725273219Z" level=info msg="StopPodSandbox for \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\""
Jan 13 21:34:59.725396 containerd[1897]: time="2025-01-13T21:34:59.725370978Z" level=info msg="TearDown network for sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" successfully"
Jan 13 21:34:59.725396 containerd[1897]: time="2025-01-13T21:34:59.725387156Z" level=info msg="StopPodSandbox for \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" returns successfully"
Jan 13 21:34:59.727462 containerd[1897]: time="2025-01-13T21:34:59.726015910Z" level=info msg="RemovePodSandbox for \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\""
Jan 13 21:34:59.727462 containerd[1897]: time="2025-01-13T21:34:59.726047563Z" level=info msg="Forcibly stopping sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\""
Jan 13 21:34:59.727462 containerd[1897]: time="2025-01-13T21:34:59.726111641Z" level=info msg="TearDown network for sandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" successfully"
Jan 13 21:34:59.730273 containerd[1897]: time="2025-01-13T21:34:59.730229013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:34:59.730387 containerd[1897]: time="2025-01-13T21:34:59.730291978Z" level=info msg="RemovePodSandbox \"0b9c63b5bf609831f8957f5b753eb0d1bfb6a8f2e87982f7a37f3c37bcbe7db5\" returns successfully"
Jan 13 21:35:02.360981 systemd[1]: run-containerd-runc-k8s.io-8919377eb83ce990a10cd8c287027da19ee6fd06b0b1b64ef666a1cdccf32b02-runc.lQ5QFw.mount: Deactivated successfully.
Jan 13 21:35:06.121280 systemd-networkd[1727]: lxc_health: Link UP
Jan 13 21:35:06.126206 systemd-networkd[1727]: lxc_health: Gained carrier
Jan 13 21:35:06.133492 (udev-worker)[6081]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:35:06.738462 kubelet[3433]: I0113 21:35:06.738281 3433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8xbm5" podStartSLOduration=12.738162164 podStartE2EDuration="12.738162164s" podCreationTimestamp="2025-01-13 21:34:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:34:59.359572352 +0000 UTC m=+119.976998241" watchObservedRunningTime="2025-01-13 21:35:06.738162164 +0000 UTC m=+127.355588054"
Jan 13 21:35:07.198215 systemd-networkd[1727]: lxc_health: Gained IPv6LL
Jan 13 21:35:10.114464 ntpd[1861]: Listen normally on 14 lxc_health [fe80::4831:6aff:fe3f:11b4%14]:123
Jan 13 21:35:10.115584 ntpd[1861]: 13 Jan 21:35:10 ntpd[1861]: Listen normally on 14 lxc_health [fe80::4831:6aff:fe3f:11b4%14]:123
Jan 13 21:35:10.403029 systemd[1]: run-containerd-runc-k8s.io-8919377eb83ce990a10cd8c287027da19ee6fd06b0b1b64ef666a1cdccf32b02-runc.Tc6Xmz.mount: Deactivated successfully.
Jan 13 21:35:13.009019 sshd[5213]: pam_unix(sshd:session): session closed for user core
Jan 13 21:35:13.017126 systemd[1]: sshd@29-172.31.19.228:22-147.75.109.163:58612.service: Deactivated successfully.
Jan 13 21:35:13.017541 systemd-logind[1869]: Session 30 logged out. Waiting for processes to exit.
Jan 13 21:35:13.021847 systemd[1]: session-30.scope: Deactivated successfully.
Jan 13 21:35:13.024751 systemd-logind[1869]: Removed session 30.
Jan 13 21:35:26.946521 systemd[1]: cri-containerd-3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5.scope: Deactivated successfully.
Jan 13 21:35:26.949764 systemd[1]: cri-containerd-3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5.scope: Consumed 3.794s CPU time, 30.2M memory peak, 0B memory swap peak.
Jan 13 21:35:26.997682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5-rootfs.mount: Deactivated successfully.
Jan 13 21:35:27.011601 containerd[1897]: time="2025-01-13T21:35:27.011518169Z" level=info msg="shim disconnected" id=3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5 namespace=k8s.io
Jan 13 21:35:27.011601 containerd[1897]: time="2025-01-13T21:35:27.011585861Z" level=warning msg="cleaning up after shim disconnected" id=3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5 namespace=k8s.io
Jan 13 21:35:27.011601 containerd[1897]: time="2025-01-13T21:35:27.011599342Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:35:27.517822 kubelet[3433]: I0113 21:35:27.517786 3433 scope.go:117] "RemoveContainer" containerID="3b82ffac209a555609e8a3a90ddf426265670b5fb7c76aaaa3fd00c700e203a5"
Jan 13 21:35:27.523094 containerd[1897]: time="2025-01-13T21:35:27.523052310Z" level=info msg="CreateContainer within sandbox \"af622bcce0eed0a6d06deaaea2daa3090261e0750a7de4fda32e6d15efa46daf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 21:35:27.558240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800912985.mount: Deactivated successfully.
Jan 13 21:35:27.637728 containerd[1897]: time="2025-01-13T21:35:27.637672058Z" level=info msg="CreateContainer within sandbox \"af622bcce0eed0a6d06deaaea2daa3090261e0750a7de4fda32e6d15efa46daf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cf2f6ded8c340c517693fa191a5ec7a86498b06a6c85c17e21b345f213ccd7d2\""
Jan 13 21:35:27.642642 containerd[1897]: time="2025-01-13T21:35:27.641817745Z" level=info msg="StartContainer for \"cf2f6ded8c340c517693fa191a5ec7a86498b06a6c85c17e21b345f213ccd7d2\""
Jan 13 21:35:27.714824 systemd[1]: Started cri-containerd-cf2f6ded8c340c517693fa191a5ec7a86498b06a6c85c17e21b345f213ccd7d2.scope - libcontainer container cf2f6ded8c340c517693fa191a5ec7a86498b06a6c85c17e21b345f213ccd7d2.
Jan 13 21:35:27.802663 containerd[1897]: time="2025-01-13T21:35:27.802150934Z" level=info msg="StartContainer for \"cf2f6ded8c340c517693fa191a5ec7a86498b06a6c85c17e21b345f213ccd7d2\" returns successfully"
Jan 13 21:35:32.926216 kubelet[3433]: E0113 21:35:32.925036 3433 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-228?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 21:35:33.053624 systemd[1]: cri-containerd-e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282.scope: Deactivated successfully.
Jan 13 21:35:33.054328 systemd[1]: cri-containerd-e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282.scope: Consumed 1.896s CPU time, 24.3M memory peak, 0B memory swap peak.
Jan 13 21:35:33.091910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282-rootfs.mount: Deactivated successfully.
Jan 13 21:35:33.112700 containerd[1897]: time="2025-01-13T21:35:33.112625066Z" level=info msg="shim disconnected" id=e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282 namespace=k8s.io
Jan 13 21:35:33.112700 containerd[1897]: time="2025-01-13T21:35:33.112696962Z" level=warning msg="cleaning up after shim disconnected" id=e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282 namespace=k8s.io
Jan 13 21:35:33.112700 containerd[1897]: time="2025-01-13T21:35:33.112709369Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:35:33.552241 kubelet[3433]: I0113 21:35:33.552200 3433 scope.go:117] "RemoveContainer" containerID="e1ad2ea747efc879bfdb61b631ebebd55288ab53cf04c85afc80324aaf3c4282"
Jan 13 21:35:33.563515 containerd[1897]: time="2025-01-13T21:35:33.563474266Z" level=info msg="CreateContainer within sandbox \"086c5e48a44564a7091ffbba917453f5bbc9b080a262a036e2d7e844c13e8727\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 21:35:33.595564 containerd[1897]: time="2025-01-13T21:35:33.595451369Z" level=info msg="CreateContainer within sandbox \"086c5e48a44564a7091ffbba917453f5bbc9b080a262a036e2d7e844c13e8727\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9897cd4391cfb32c453e39c362f88d8e5b9dd355ea16c5f78f689be8121b8d0b\""
Jan 13 21:35:33.597515 containerd[1897]: time="2025-01-13T21:35:33.597414393Z" level=info msg="StartContainer for \"9897cd4391cfb32c453e39c362f88d8e5b9dd355ea16c5f78f689be8121b8d0b\""
Jan 13 21:35:33.656671 systemd[1]: Started cri-containerd-9897cd4391cfb32c453e39c362f88d8e5b9dd355ea16c5f78f689be8121b8d0b.scope - libcontainer container 9897cd4391cfb32c453e39c362f88d8e5b9dd355ea16c5f78f689be8121b8d0b.
Jan 13 21:35:33.781001 containerd[1897]: time="2025-01-13T21:35:33.780955398Z" level=info msg="StartContainer for \"9897cd4391cfb32c453e39c362f88d8e5b9dd355ea16c5f78f689be8121b8d0b\" returns successfully"
Jan 13 21:35:42.927422 kubelet[3433]: E0113 21:35:42.927377 3433 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-19-228)"