Dec 13 01:34:18.108129 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:34:18.108167 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:34:18.108183 kernel: BIOS-provided physical RAM map:
Dec 13 01:34:18.108194 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:34:18.108204 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:34:18.108215 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:34:18.108232 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 01:34:18.108244 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 01:34:18.108256 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 01:34:18.108268 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:34:18.108279 kernel: NX (Execute Disable) protection: active
Dec 13 01:34:18.108291 kernel: APIC: Static calls initialized
Dec 13 01:34:18.108303 kernel: SMBIOS 2.7 present.
Dec 13 01:34:18.108315 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 01:34:18.108332 kernel: Hypervisor detected: KVM
Dec 13 01:34:18.108346 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:34:18.108359 kernel: kvm-clock: using sched offset of 6167735071 cycles
Dec 13 01:34:18.108373 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:34:18.108386 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 01:34:18.108399 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:34:18.108413 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:34:18.108428 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 01:34:18.108441 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:34:18.108454 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:34:18.108467 kernel: Using GB pages for direct mapping
Dec 13 01:34:18.108478 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:34:18.108491 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 01:34:18.108504 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 01:34:18.108517 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:34:18.108530 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 01:34:18.108547 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 01:34:18.108573 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 01:34:18.108586 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:34:18.108598 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 01:34:18.108609 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:34:18.108622 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 01:34:18.108636 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 01:34:18.108793 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 01:34:18.108813 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 01:34:18.108832 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 01:34:18.108848 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 01:34:18.108861 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 01:34:18.108924 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 01:34:18.108938 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 01:34:18.108955 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 01:34:18.108970 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 01:34:18.108985 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 01:34:18.109000 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 01:34:18.109014 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:34:18.109028 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:34:18.109040 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 01:34:18.109053 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 01:34:18.109065 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 01:34:18.109082 kernel: Zone ranges:
Dec 13 01:34:18.109095 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:34:18.109110 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 01:34:18.109125 kernel: Normal empty
Dec 13 01:34:18.109139 kernel: Movable zone start for each node
Dec 13 01:34:18.109154 kernel: Early memory node ranges
Dec 13 01:34:18.109167 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:34:18.109180 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 01:34:18.109192 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 01:34:18.109205 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:34:18.109222 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:34:18.109236 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 01:34:18.109251 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 01:34:18.109267 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:34:18.109280 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 01:34:18.109297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:34:18.109312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:34:18.109325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:34:18.109339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:34:18.109356 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:34:18.109370 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:34:18.109383 kernel: TSC deadline timer available
Dec 13 01:34:18.109397 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:34:18.109418 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:34:18.109432 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 01:34:18.109445 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:34:18.109459 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:34:18.109473 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:34:18.109491 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:34:18.109505 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:34:18.109520 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:34:18.109534 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:34:18.109548 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:34:18.109565 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:34:18.109581 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:34:18.109597 kernel: random: crng init done
Dec 13 01:34:18.109614 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:34:18.109631 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:34:18.109646 kernel: Fallback order for Node 0: 0
Dec 13 01:34:18.109677 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 01:34:18.109691 kernel: Policy zone: DMA32
Dec 13 01:34:18.109704 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:34:18.109717 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Dec 13 01:34:18.109731 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:34:18.109745 kernel: Kernel/User page tables isolation: enabled
Dec 13 01:34:18.109764 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:34:18.109780 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:34:18.109796 kernel: Dynamic Preempt: voluntary
Dec 13 01:34:18.109812 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:34:18.109828 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:34:18.109842 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:34:18.109855 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:34:18.109959 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:34:18.109978 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:34:18.109996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:34:18.110117 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:34:18.110133 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:34:18.110147 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:34:18.110162 kernel: Console: colour VGA+ 80x25
Dec 13 01:34:18.110176 kernel: printk: console [ttyS0] enabled
Dec 13 01:34:18.110190 kernel: ACPI: Core revision 20230628
Dec 13 01:34:18.110205 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 01:34:18.110219 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:34:18.110238 kernel: x2apic enabled
Dec 13 01:34:18.110253 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:34:18.110281 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 01:34:18.110301 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 01:34:18.110318 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:34:18.110334 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:34:18.110350 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:34:18.110408 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:34:18.110424 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:34:18.110441 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:34:18.110457 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 01:34:18.110473 kernel: RETBleed: Vulnerable
Dec 13 01:34:18.110494 kernel: Speculative Store Bypass: Vulnerable
Dec 13 01:34:18.110512 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:34:18.110631 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:34:18.110664 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:34:18.110682 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:34:18.110698 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:34:18.110715 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:34:18.110736 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 01:34:18.110753 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 01:34:18.110770 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 01:34:18.110786 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 01:34:18.110802 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 01:34:18.110819 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 01:34:18.110836 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:34:18.110860 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 01:34:18.110877 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 01:34:18.110894 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 01:34:18.110910 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 01:34:18.110930 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 01:34:18.110947 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 01:34:18.110964 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 01:34:18.110980 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:34:18.110996 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:34:18.111013 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:34:18.111029 kernel: landlock: Up and running.
Dec 13 01:34:18.111046 kernel: SELinux: Initializing.
Dec 13 01:34:18.111063 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:34:18.111079 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:34:18.111096 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 01:34:18.111117 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:34:18.111134 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:34:18.111151 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:34:18.111168 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 01:34:18.111185 kernel: signal: max sigframe size: 3632
Dec 13 01:34:18.111288 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:34:18.111307 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:34:18.111323 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:34:18.111340 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:34:18.111361 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:34:18.111377 kernel: .... node #0, CPUs: #1
Dec 13 01:34:18.111395 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 01:34:18.111412 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:34:18.111497 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:34:18.111515 kernel: smpboot: Max logical packages: 1
Dec 13 01:34:18.111532 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 01:34:18.111549 kernel: devtmpfs: initialized
Dec 13 01:34:18.111569 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:34:18.111586 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:34:18.111603 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:34:18.111620 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:34:18.111637 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:34:18.111665 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:34:18.111678 kernel: audit: type=2000 audit(1734053657.151:1): state=initialized audit_enabled=0 res=1
Dec 13 01:34:18.111692 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:34:18.111705 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:34:18.111725 kernel: cpuidle: using governor menu
Dec 13 01:34:18.111742 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:34:18.111758 kernel: dca service started, version 1.12.1
Dec 13 01:34:18.111774 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:34:18.111791 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:34:18.111807 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:34:18.111824 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:34:18.111840 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:34:18.111857 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:34:18.111876 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:34:18.111893 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:34:18.111910 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:34:18.111924 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:34:18.111938 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 01:34:18.111954 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:34:18.111969 kernel: ACPI: Interpreter enabled
Dec 13 01:34:18.111985 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:34:18.112000 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:34:18.112016 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:34:18.112035 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:34:18.112050 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 01:34:18.112065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:34:18.112268 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:34:18.112404 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 01:34:18.112529 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 01:34:18.112547 kernel: acpiphp: Slot [3] registered
Dec 13 01:34:18.112567 kernel: acpiphp: Slot [4] registered
Dec 13 01:34:18.112582 kernel: acpiphp: Slot [5] registered
Dec 13 01:34:18.112597 kernel: acpiphp: Slot [6] registered
Dec 13 01:34:18.112612 kernel: acpiphp: Slot [7] registered
Dec 13 01:34:18.112628 kernel: acpiphp: Slot [8] registered
Dec 13 01:34:18.112643 kernel: acpiphp: Slot [9] registered
Dec 13 01:34:18.112670 kernel: acpiphp: Slot [10] registered
Dec 13 01:34:18.112684 kernel: acpiphp: Slot [11] registered
Dec 13 01:34:18.112700 kernel: acpiphp: Slot [12] registered
Dec 13 01:34:18.112719 kernel: acpiphp: Slot [13] registered
Dec 13 01:34:18.112734 kernel: acpiphp: Slot [14] registered
Dec 13 01:34:18.112749 kernel: acpiphp: Slot [15] registered
Dec 13 01:34:18.112764 kernel: acpiphp: Slot [16] registered
Dec 13 01:34:18.113274 kernel: acpiphp: Slot [17] registered
Dec 13 01:34:18.113297 kernel: acpiphp: Slot [18] registered
Dec 13 01:34:18.113384 kernel: acpiphp: Slot [19] registered
Dec 13 01:34:18.113417 kernel: acpiphp: Slot [20] registered
Dec 13 01:34:18.113432 kernel: acpiphp: Slot [21] registered
Dec 13 01:34:18.113450 kernel: acpiphp: Slot [22] registered
Dec 13 01:34:18.113473 kernel: acpiphp: Slot [23] registered
Dec 13 01:34:18.113492 kernel: acpiphp: Slot [24] registered
Dec 13 01:34:18.113507 kernel: acpiphp: Slot [25] registered
Dec 13 01:34:18.113524 kernel: acpiphp: Slot [26] registered
Dec 13 01:34:18.113541 kernel: acpiphp: Slot [27] registered
Dec 13 01:34:18.113560 kernel: acpiphp: Slot [28] registered
Dec 13 01:34:18.113579 kernel: acpiphp: Slot [29] registered
Dec 13 01:34:18.113646 kernel: acpiphp: Slot [30] registered
Dec 13 01:34:18.113695 kernel: acpiphp: Slot [31] registered
Dec 13 01:34:18.113721 kernel: PCI host bridge to bus 0000:00
Dec 13 01:34:18.114087 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:34:18.114224 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:34:18.114390 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:34:18.114517 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 01:34:18.114753 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:34:18.115222 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 01:34:18.115389 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 01:34:18.115544 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 01:34:18.115784 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 01:34:18.115924 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 01:34:18.116059 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 01:34:18.116195 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 01:34:18.116333 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 01:34:18.116475 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 01:34:18.116611 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 01:34:18.116760 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 01:34:18.116895 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 14648 usecs
Dec 13 01:34:18.117042 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 01:34:18.117178 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 01:34:18.117318 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 01:34:18.117452 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:34:18.117595 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:34:18.117760 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 01:34:18.117918 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:34:18.118061 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 01:34:18.118080 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:34:18.118100 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:34:18.118115 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:34:18.118130 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:34:18.118145 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 01:34:18.118159 kernel: iommu: Default domain type: Translated
Dec 13 01:34:18.118175 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:34:18.118190 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:34:18.118205 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:34:18.118220 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:34:18.118238 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 01:34:18.118436 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 01:34:18.118787 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 01:34:18.118962 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:34:18.118985 kernel: vgaarb: loaded
Dec 13 01:34:18.119002 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 01:34:18.119016 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 01:34:18.119031 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:34:18.119046 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:34:18.119065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:34:18.119080 kernel: pnp: PnP ACPI init
Dec 13 01:34:18.119095 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:34:18.119111 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:34:18.119126 kernel: NET: Registered PF_INET protocol family
Dec 13 01:34:18.119570 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:34:18.119784 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:34:18.119805 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:34:18.119824 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:34:18.119848 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:34:18.119917 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:34:18.119937 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:34:18.119955 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:34:18.120101 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:34:18.120133 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:34:18.121233 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:34:18.121950 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:34:18.122764 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:34:18.122924 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 01:34:18.123087 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:34:18.123110 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:34:18.123128 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:34:18.123144 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 01:34:18.123161 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:34:18.123178 kernel: Initialise system trusted keyrings
Dec 13 01:34:18.123199 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:34:18.123338 kernel: Key type asymmetric registered
Dec 13 01:34:18.123355 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:34:18.123371 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:34:18.123767 kernel: io scheduler mq-deadline registered
Dec 13 01:34:18.123791 kernel: io scheduler kyber registered
Dec 13 01:34:18.123809 kernel: io scheduler bfq registered
Dec 13 01:34:18.123828 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:34:18.123847 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:34:18.123871 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:34:18.123891 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:34:18.123908 kernel: i8042: Warning: Keylock active
Dec 13 01:34:18.123994 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:34:18.124017 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:34:18.124252 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 01:34:18.124947 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 01:34:18.125238 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:34:17 UTC (1734053657)
Dec 13 01:34:18.125395 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 01:34:18.125416 kernel: intel_pstate: CPU model not supported
Dec 13 01:34:18.125434 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:34:18.125451 kernel: Segment Routing with IPv6
Dec 13 01:34:18.125468 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:34:18.125485 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:34:18.125502 kernel: Key type dns_resolver registered
Dec 13 01:34:18.125518 kernel: IPI shorthand broadcast: enabled
Dec 13 01:34:18.125535 kernel: sched_clock: Marking stable (684003591, 265506627)->(1050639357, -101129139)
Dec 13 01:34:18.125555 kernel: registered taskstats version 1
Dec 13 01:34:18.125573 kernel: Loading compiled-in X.509 certificates
Dec 13 01:34:18.125590 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:34:18.125606 kernel: Key type .fscrypt registered
Dec 13 01:34:18.125621 kernel: Key type fscrypt-provisioning registered
Dec 13 01:34:18.125637 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:34:18.125671 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:34:18.125685 kernel: ima: No architecture policies found
Dec 13 01:34:18.125703 kernel: clk: Disabling unused clocks
Dec 13 01:34:18.125724 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:34:18.125742 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:34:18.125759 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:34:18.125775 kernel: Run /init as init process
Dec 13 01:34:18.125790 kernel: with arguments:
Dec 13 01:34:18.125878 kernel: /init
Dec 13 01:34:18.125893 kernel: with environment:
Dec 13 01:34:18.125907 kernel: HOME=/
Dec 13 01:34:18.125921 kernel: TERM=linux
Dec 13 01:34:18.125940 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:34:18.125983 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:34:18.126002 systemd[1]: Detected virtualization amazon.
Dec 13 01:34:18.126018 systemd[1]: Detected architecture x86-64.
Dec 13 01:34:18.126034 systemd[1]: Running in initrd.
Dec 13 01:34:18.126049 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:34:18.126064 systemd[1]: Hostname set to .
Dec 13 01:34:18.126083 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:34:18.126099 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:34:18.126115 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:34:18.126132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:34:18.126149 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:34:18.126166 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:34:18.126182 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:34:18.126199 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:34:18.126220 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:34:18.126237 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:34:18.126254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:34:18.126270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:34:18.126287 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:34:18.126304 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:34:18.126321 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:34:18.126340 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:34:18.126388 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:34:18.126405 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:34:18.126422 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:34:18.126439 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:34:18.126455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:34:18.126472 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:34:18.126489 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:34:18.126509 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:34:18.126525 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:34:18.126611 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:34:18.126630 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:34:18.126668 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:34:18.126690 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:34:18.126711 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:34:18.126730 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:34:18.126784 systemd-journald[178]: Collecting audit messages is disabled.
Dec 13 01:34:18.126824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:34:18.126879 systemd-journald[178]: Journal started
Dec 13 01:34:18.126916 systemd-journald[178]: Runtime Journal (/run/log/journal/ec24afc6c65af385c0b251128f280456) is 4.8M, max 38.6M, 33.7M free.
Dec 13 01:34:18.106676 systemd-modules-load[179]: Inserted module 'overlay'
Dec 13 01:34:18.137002 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:34:18.132905 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:34:18.135035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:34:18.137449 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:34:18.166544 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:34:18.272314 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:34:18.272399 kernel: Bridge firewalling registered
Dec 13 01:34:18.177083 systemd-modules-load[179]: Inserted module 'br_netfilter'
Dec 13 01:34:18.283088 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:34:18.286237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:34:18.290562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:34:18.297279 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:34:18.309874 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:34:18.314558 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:34:18.320450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:34:18.323605 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:34:18.355901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:34:18.369915 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:34:18.372791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:34:18.388146 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:34:18.398937 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:34:18.427946 dracut-cmdline[214]: dracut-dracut-053
Dec 13 01:34:18.432401 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:34:18.439663 systemd-resolved[207]: Positive Trust Anchors:
Dec 13 01:34:18.439680 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:34:18.439730 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:34:18.453688 systemd-resolved[207]: Defaulting to hostname 'linux'.
Dec 13 01:34:18.455276 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:34:18.459200 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:34:18.535675 kernel: SCSI subsystem initialized
Dec 13 01:34:18.545675 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:34:18.556688 kernel: iscsi: registered transport (tcp)
Dec 13 01:34:18.598681 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:34:18.598758 kernel: QLogic iSCSI HBA Driver
Dec 13 01:34:18.662031 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:34:18.668859 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:34:18.711680 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:34:18.711755 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:34:18.711770 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:34:18.766705 kernel: raid6: avx512x4 gen() 10973 MB/s
Dec 13 01:34:18.783697 kernel: raid6: avx512x2 gen() 15029 MB/s
Dec 13 01:34:18.800704 kernel: raid6: avx512x1 gen() 15892 MB/s
Dec 13 01:34:18.817702 kernel: raid6: avx2x4 gen() 15795 MB/s
Dec 13 01:34:18.834732 kernel: raid6: avx2x2 gen() 14165 MB/s
Dec 13 01:34:18.851689 kernel: raid6: avx2x1 gen() 12479 MB/s
Dec 13 01:34:18.851827 kernel: raid6: using algorithm avx512x1 gen() 15892 MB/s
Dec 13 01:34:18.868807 kernel: raid6: .... xor() 19365 MB/s, rmw enabled
Dec 13 01:34:18.868888 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 01:34:18.891684 kernel: xor: automatically using best checksumming function avx
Dec 13 01:34:19.081681 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:34:19.114455 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:34:19.126946 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:34:19.154571 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Dec 13 01:34:19.160777 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:34:19.169382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:34:19.204367 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Dec 13 01:34:19.242397 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:34:19.246898 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:34:19.324954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:34:19.337973 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:34:19.380707 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:34:19.385769 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:34:19.387268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:34:19.388857 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:34:19.402111 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:34:19.449293 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:34:19.486158 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:34:19.486360 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 01:34:19.486624 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:34:19.486663 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:2b:5c:44:ca:9b
Dec 13 01:34:19.452967 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:34:19.497046 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:34:19.503663 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:34:19.497201 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:34:19.506542 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:34:19.499398 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:34:19.502608 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:34:19.502817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:34:19.507799 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:34:19.517099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:34:19.519117 (udev-worker)[459]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:34:19.562150 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:34:19.562444 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 01:34:19.573687 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:34:19.584630 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:34:19.584708 kernel: GPT:9289727 != 16777215
Dec 13 01:34:19.584728 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:34:19.584746 kernel: GPT:9289727 != 16777215
Dec 13 01:34:19.584792 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:34:19.584811 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:34:19.696682 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (442)
Dec 13 01:34:19.762523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:34:19.765371 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (447)
Dec 13 01:34:19.775982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:34:19.823531 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:34:19.825258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:34:19.848563 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:34:19.874847 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:34:19.874983 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:34:19.887252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:34:19.893833 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:34:19.900421 disk-uuid[627]: Primary Header is updated.
Dec 13 01:34:19.900421 disk-uuid[627]: Secondary Entries is updated.
Dec 13 01:34:19.900421 disk-uuid[627]: Secondary Header is updated.
Dec 13 01:34:19.906677 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:34:19.915678 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:34:19.928683 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:34:20.935045 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:34:20.935691 disk-uuid[628]: The operation has completed successfully.
Dec 13 01:34:21.146977 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:34:21.147106 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:34:21.154822 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:34:21.160573 sh[971]: Success
Dec 13 01:34:21.175675 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:34:21.286774 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:34:21.304805 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:34:21.306933 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:34:21.357969 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:34:21.358027 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:34:21.359092 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:34:21.359236 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:34:21.360146 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:34:21.415678 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:34:21.418926 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:34:21.421853 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:34:21.433105 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:34:21.436797 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:34:21.465203 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:34:21.465282 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:34:21.465306 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:34:21.471678 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:34:21.486923 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:34:21.486739 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:34:21.508631 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:34:21.528873 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:34:21.632503 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:34:21.650266 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:34:21.691038 systemd-networkd[1163]: lo: Link UP
Dec 13 01:34:21.691052 systemd-networkd[1163]: lo: Gained carrier
Dec 13 01:34:21.692685 systemd-networkd[1163]: Enumeration completed
Dec 13 01:34:21.693057 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:34:21.693616 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:34:21.693621 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:34:21.695800 systemd[1]: Reached target network.target - Network.
Dec 13 01:34:21.704291 systemd-networkd[1163]: eth0: Link UP
Dec 13 01:34:21.704301 systemd-networkd[1163]: eth0: Gained carrier
Dec 13 01:34:21.704317 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:34:21.715737 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.20.66/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:34:21.895926 ignition[1103]: Ignition 2.19.0
Dec 13 01:34:21.895940 ignition[1103]: Stage: fetch-offline
Dec 13 01:34:21.896204 ignition[1103]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:34:21.896216 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:34:21.898172 ignition[1103]: Ignition finished successfully
Dec 13 01:34:21.902812 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:34:21.909857 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:34:21.930632 ignition[1175]: Ignition 2.19.0
Dec 13 01:34:21.930661 ignition[1175]: Stage: fetch
Dec 13 01:34:21.931973 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:34:21.931990 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:34:21.932149 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:34:21.944614 ignition[1175]: PUT result: OK
Dec 13 01:34:21.947308 ignition[1175]: parsed url from cmdline: ""
Dec 13 01:34:21.947318 ignition[1175]: no config URL provided
Dec 13 01:34:21.947329 ignition[1175]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:34:21.947344 ignition[1175]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:34:21.947365 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:34:21.949945 ignition[1175]: PUT result: OK
Dec 13 01:34:21.950008 ignition[1175]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:34:21.953255 ignition[1175]: GET result: OK
Dec 13 01:34:21.953339 ignition[1175]: parsing config with SHA512: adb4d54c1093073971d2c8d1017478658bf36c3242dc99a925edd2e95f27169a05390985350c9466cd35192b12877df9ce8dd12b343bc72975871167e3bff0eb
Dec 13 01:34:21.963249 unknown[1175]: fetched base config from "system"
Dec 13 01:34:21.963262 unknown[1175]: fetched base config from "system"
Dec 13 01:34:21.963269 unknown[1175]: fetched user config from "aws"
Dec 13 01:34:21.969045 ignition[1175]: fetch: fetch complete
Dec 13 01:34:21.969055 ignition[1175]: fetch: fetch passed
Dec 13 01:34:21.969236 ignition[1175]: Ignition finished successfully
Dec 13 01:34:21.974531 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:34:21.980020 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:34:21.999807 ignition[1181]: Ignition 2.19.0
Dec 13 01:34:21.999820 ignition[1181]: Stage: kargs
Dec 13 01:34:22.000257 ignition[1181]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:34:22.000271 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:34:22.000380 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:34:22.002504 ignition[1181]: PUT result: OK
Dec 13 01:34:22.009731 ignition[1181]: kargs: kargs passed
Dec 13 01:34:22.009919 ignition[1181]: Ignition finished successfully
Dec 13 01:34:22.012348 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:34:22.016846 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:34:22.033001 ignition[1187]: Ignition 2.19.0
Dec 13 01:34:22.033014 ignition[1187]: Stage: disks
Dec 13 01:34:22.033579 ignition[1187]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:34:22.033592 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:34:22.033722 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:34:22.034998 ignition[1187]: PUT result: OK
Dec 13 01:34:22.042802 ignition[1187]: disks: disks passed
Dec 13 01:34:22.042890 ignition[1187]: Ignition finished successfully
Dec 13 01:34:22.045141 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:34:22.045814 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:34:22.048741 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:34:22.051121 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:34:22.051189 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:34:22.055157 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:34:22.063122 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:34:22.089705 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:34:22.093493 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:34:22.102934 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:34:22.231717 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:34:22.232561 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:34:22.234926 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:34:22.243873 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:34:22.249809 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:34:22.252503 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:34:22.252672 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:34:22.252714 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:34:22.267254 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:34:22.274526 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:34:22.279082 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1214)
Dec 13 01:34:22.279118 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:34:22.279137 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:34:22.279156 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:34:22.294676 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:34:22.297805 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:34:22.467375 initrd-setup-root[1241]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:34:22.476709 initrd-setup-root[1248]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:34:22.484450 initrd-setup-root[1255]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:34:22.492801 initrd-setup-root[1262]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:34:22.650015 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:34:22.663868 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:34:22.671950 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:34:22.680489 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:34:22.682403 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:34:22.718293 ignition[1330]: INFO : Ignition 2.19.0
Dec 13 01:34:22.718293 ignition[1330]: INFO : Stage: mount
Dec 13 01:34:22.721945 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:34:22.723785 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:34:22.723785 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:34:22.728359 ignition[1330]: INFO : PUT result: OK
Dec 13 01:34:22.731815 ignition[1330]: INFO : mount: mount passed
Dec 13 01:34:22.731815 ignition[1330]: INFO : Ignition finished successfully
Dec 13 01:34:22.736056 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:34:22.742126 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:34:22.746997 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:34:22.766934 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:34:22.788681 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1342)
Dec 13 01:34:22.790674 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:34:22.790729 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:34:22.791680 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:34:22.796916 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:34:22.798635 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:34:22.831572 ignition[1359]: INFO : Ignition 2.19.0 Dec 13 01:34:22.831572 ignition[1359]: INFO : Stage: files Dec 13 01:34:22.834406 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:34:22.834406 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:34:22.834406 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:34:22.842759 ignition[1359]: INFO : PUT result: OK Dec 13 01:34:22.847746 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:34:22.849811 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:34:22.849811 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:34:22.868409 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:34:22.870251 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:34:22.874808 unknown[1359]: wrote ssh authorized keys file for user: core Dec 13 01:34:22.876331 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:34:22.879926 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:34:22.882041 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:34:22.969733 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:34:23.124414 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:34:23.124414 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:34:23.128804 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:34:23.247832 systemd-networkd[1163]: eth0: Gained IPv6LL Dec 13 01:34:23.735966 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:34:23.861702 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:34:23.863910 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:34:23.866353 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:34:23.866353 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:34:23.870488 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:34:23.870488 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:34:23.875064 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:34:23.877338 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:34:23.877338 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:34:23.877338 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:34:23.877338 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:34:23.877338 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:34:23.890851 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:34:23.890851 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:34:23.890851 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:34:24.157441 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:34:24.509387 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:34:24.509387 ignition[1359]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:34:24.515758 ignition[1359]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:34:24.518634 ignition[1359]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:34:24.518634 ignition[1359]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:34:24.518634 ignition[1359]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:34:24.523738 ignition[1359]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:34:24.523738 ignition[1359]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:34:24.527340 ignition[1359]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:34:24.529329 ignition[1359]: INFO : files: files passed Dec 13 01:34:24.529329 ignition[1359]: INFO : Ignition finished successfully Dec 13 01:34:24.532781 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:34:24.539840 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:34:24.544430 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:34:24.548755 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:34:24.548864 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
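The files stage above (SSH keys for the core user, the helm and cilium archives, prepare-helm.service and its preset) is driven by the instance's Ignition config. A hypothetical fragment with the same shape, assembled as a plain dict and serialized to JSON; the spec version, SSH key, and unit body are placeholder assumptions, and only the helm path and URL echo the log:

```python
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec level for Ignition 2.19.x
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example-key"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,  # surfaces as the "setting preset to enabled" step above
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",  # placeholder body
            }
        ]
    },
}
print(json.dumps(config, indent=2))
```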
Dec 13 01:34:24.576673 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:34:24.576673 initrd-setup-root-after-ignition[1388]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:34:24.580981 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:34:24.584885 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:34:24.588795 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:34:24.595818 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:34:24.634601 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:34:24.634724 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:34:24.638921 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:34:24.642229 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:34:24.645245 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:34:24.651897 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:34:24.677214 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:34:24.686837 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:34:24.701515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:34:24.707519 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:34:24.710348 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:34:24.710695 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:34:24.710845 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:34:24.711319 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:34:24.711554 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:34:24.711898 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:34:24.712486 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:34:24.712631 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:34:24.713025 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:34:24.713149 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:34:24.713394 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:34:24.713967 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:34:24.714138 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:34:24.714366 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:34:24.714481 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:34:24.715041 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:34:24.715271 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:34:24.715425 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 13 01:34:24.732343 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:34:24.737204 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:34:24.740779 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:34:24.763372 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:34:24.763507 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:34:24.766249 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:34:24.768590 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:34:24.777055 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:34:24.780057 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:34:24.781436 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:34:24.781639 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:34:24.783412 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:34:24.783607 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:34:24.790610 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:34:24.790797 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:34:24.805974 ignition[1412]: INFO : Ignition 2.19.0 Dec 13 01:34:24.805974 ignition[1412]: INFO : Stage: umount Dec 13 01:34:24.816187 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:34:24.816187 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:34:24.816187 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:34:24.816187 ignition[1412]: INFO : PUT result: OK Dec 13 01:34:24.816187 ignition[1412]: INFO : umount: umount passed Dec 13 01:34:24.816187 ignition[1412]: INFO : Ignition finished successfully Dec 13 01:34:24.815854 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:34:24.816062 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:34:24.822095 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:34:24.822202 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:34:24.823513 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:34:24.823575 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:34:24.824736 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:34:24.824786 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:34:24.825969 systemd[1]: Stopped target network.target - Network. Dec 13 01:34:24.826980 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:34:24.827040 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:34:24.828326 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:34:24.829547 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:34:24.832762 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:34:24.834105 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:34:24.836290 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:34:24.837439 systemd[1]: iscsid.socket: Deactivated successfully. 
Dec 13 01:34:24.837503 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:34:24.838771 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:34:24.838832 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:34:24.841299 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:34:24.841371 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:34:24.844208 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:34:24.844266 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:34:24.852073 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:34:24.854563 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:34:24.857691 systemd-networkd[1163]: eth0: DHCPv6 lease lost Dec 13 01:34:24.863454 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:34:24.864828 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:34:24.865168 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:34:24.868937 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:34:24.869139 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:34:24.871881 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:34:24.872039 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:34:24.879840 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:34:24.879924 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:34:24.882058 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:34:24.882126 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:34:24.890803 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:34:24.898852 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:34:24.899004 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:34:24.904543 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:34:24.904619 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:34:24.911736 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:34:24.911875 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:34:24.914068 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:34:24.914137 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:34:24.917119 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:34:24.947134 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:34:24.947523 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:34:24.949996 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:34:24.950200 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:34:24.954324 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:34:24.955902 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:34:24.958347 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 13 01:34:24.958398 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:34:24.965492 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:34:24.965553 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:34:24.974001 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:34:24.975693 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:34:24.978635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:34:24.981834 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:34:24.993173 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:34:24.995020 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:34:24.995417 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:34:24.997211 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:34:24.997264 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:34:24.999981 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:34:25.000036 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:34:25.004888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:34:25.005001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:34:25.011085 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:34:25.011175 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:34:25.016605 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:34:25.030592 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:34:25.048127 systemd[1]: Switching root. Dec 13 01:34:25.080392 systemd-journald[178]: Journal stopped Dec 13 01:34:27.021194 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Dec 13 01:34:27.021359 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:34:27.021383 kernel: SELinux: policy capability open_perms=1 Dec 13 01:34:27.021402 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:34:27.021420 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:34:27.021437 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:34:27.021460 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:34:27.021479 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:34:27.021496 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:34:27.021513 kernel: audit: type=1403 audit(1734053665.501:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:34:27.021540 systemd[1]: Successfully loaded SELinux policy in 48.144ms. Dec 13 01:34:27.021575 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.681ms. Dec 13 01:34:27.021596 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:34:27.021619 systemd[1]: Detected virtualization amazon. 
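After the switch into the real root, the kernel lists the policy capabilities of the freshly loaded SELinux policy. Assuming selinuxfs is mounted at the conventional /sys/fs/selinux, the same state can be read back at runtime; a sketch:

```python
from pathlib import Path

selinuxfs = Path("/sys/fs/selinux")  # assumed selinuxfs mount point

if selinuxfs.is_dir():
    # 1 = enforcing, 0 = permissive
    print("enforce:", (selinuxfs / "enforce").read_text().strip())
    # One file per capability, mirroring the kernel lines above
    # (network_peer_controls, open_perms, ...), each containing 0 or 1.
    for cap in sorted((selinuxfs / "policy_capabilities").iterdir()):
        print(cap.name, cap.read_text().strip())
else:
    print("SELinux not active on this kernel")
```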
Dec 13 01:34:27.021637 systemd[1]: Detected architecture x86-64. Dec 13 01:34:27.021672 systemd[1]: Detected first boot. Dec 13 01:34:27.021691 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:34:27.021709 zram_generator::config[1454]: No configuration found. Dec 13 01:34:27.021729 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:34:27.021748 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:34:27.021766 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:34:27.023404 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:34:27.023438 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:34:27.023459 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:34:27.023478 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:34:27.023497 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:34:27.023518 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:34:27.023538 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:34:27.023559 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:34:27.023581 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:34:27.023608 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:34:27.023629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:34:27.023739 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:34:27.023770 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:34:27.023791 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:34:27.023812 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:34:27.023832 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:34:27.023853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:34:27.023929 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:34:27.023958 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:34:27.023979 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:34:27.023999 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:34:27.024021 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:34:27.024044 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:34:27.024067 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:34:27.024088 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:34:27.024111 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:34:27.024139 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:34:27.024160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:34:27.024180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Dec 13 01:34:27.024203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:34:27.024226 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:34:27.024249 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:34:27.024271 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:34:27.024293 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:34:27.024317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:34:27.024343 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:34:27.024365 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:34:27.024383 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:34:27.024404 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:34:27.024426 systemd[1]: Reached target machines.target - Containers. Dec 13 01:34:27.024445 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:34:27.024464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:34:27.024484 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:34:27.024507 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:34:27.024528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:34:27.024547 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:34:27.024568 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:34:27.024589 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:34:27.024610 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:34:27.024631 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:34:27.024685 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:34:27.024712 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:34:27.024733 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:34:27.024754 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:34:27.024775 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:34:27.024817 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:34:27.024836 kernel: fuse: init (API version 7.39) Dec 13 01:34:27.024857 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:34:27.024876 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:34:27.024895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:34:27.024919 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:34:27.024939 systemd[1]: Stopped verity-setup.service. 
Dec 13 01:34:27.024960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:34:27.024980 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:34:27.024999 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:34:27.025019 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:34:27.025039 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:34:27.025059 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:34:27.025079 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:34:27.025102 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:34:27.025122 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:34:27.025144 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:34:27.025164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:34:27.025182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:34:27.025205 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:34:27.025226 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:34:27.025305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:34:27.025332 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:34:27.025353 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:34:27.025375 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:34:27.025395 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:34:27.025418 kernel: ACPI: bus type drm_connector registered Dec 13 01:34:27.025437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:34:27.025459 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:34:27.025481 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:34:27.025501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:34:27.025521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:34:27.025541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:34:27.025602 systemd-journald[1529]: Collecting audit messages is disabled. Dec 13 01:34:27.025641 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:34:27.025688 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:34:27.025710 systemd-journald[1529]: Journal started Dec 13 01:34:27.025752 systemd-journald[1529]: Runtime Journal (/run/log/journal/ec24afc6c65af385c0b251128f280456) is 4.8M, max 38.6M, 33.7M free. Dec 13 01:34:27.031600 kernel: loop: module loaded Dec 13 01:34:26.463625 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:34:26.485114 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:34:26.485517 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:34:27.047362 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Dec 13 01:34:27.051227 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:34:27.053945 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:34:27.054370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:34:27.102935 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:34:27.103707 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:34:27.113158 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:34:27.125128 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:34:27.138992 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:34:27.140930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:34:27.147990 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:34:27.156794 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:34:27.159840 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:34:27.167874 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:34:27.170274 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:34:27.176951 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:34:27.180217 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:34:27.183136 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:34:27.190723 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:34:27.203062 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:34:27.206040 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:34:27.216903 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:34:27.253917 systemd-journald[1529]: Time spent on flushing to /var/log/journal/ec24afc6c65af385c0b251128f280456 is 82.302ms for 970 entries. Dec 13 01:34:27.253917 systemd-journald[1529]: System Journal (/var/log/journal/ec24afc6c65af385c0b251128f280456) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:34:27.387519 systemd-journald[1529]: Received client request to flush runtime journal. Dec 13 01:34:27.387598 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:34:27.252331 systemd-tmpfiles[1551]: ACLs are not supported, ignoring. Dec 13 01:34:27.252353 systemd-tmpfiles[1551]: ACLs are not supported, ignoring. Dec 13 01:34:27.274453 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:34:27.278593 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:34:27.304759 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:34:27.313985 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
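The journald messages above show the runtime journal under /run/log/journal being flushed into the persistent system journal under /var/log/journal, each with its own size cap. journalctl's standard --disk-usage flag reports the combined footprint of both; a sketch:

```python
import subprocess

# --disk-usage sums the archived and active journal files under
# /run/log/journal and /var/log/journal; needs read access to the journal.
out = subprocess.run(
    ["journalctl", "--disk-usage"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())
```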
Dec 13 01:34:27.319777 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:34:27.336276 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:34:27.393703 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:34:27.395682 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:34:27.399780 udevadm[1595]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:34:27.436686 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 01:34:27.477421 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:34:27.492628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:34:27.536680 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Dec 13 01:34:27.536709 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Dec 13 01:34:27.545862 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:34:27.563674 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:34:27.692953 kernel: loop3: detected capacity change from 0 to 61336 Dec 13 01:34:27.775683 kernel: loop4: detected capacity change from 0 to 142488 Dec 13 01:34:27.849994 kernel: loop5: detected capacity change from 0 to 210664 Dec 13 01:34:27.906794 kernel: loop6: detected capacity change from 0 to 140768 Dec 13 01:34:27.986687 kernel: loop7: detected capacity change from 0 to 61336 Dec 13 01:34:28.022448 (sd-merge)[1610]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:34:28.023174 (sd-merge)[1610]: Merged extensions into '/usr'. Dec 13 01:34:28.041751 systemd[1]: Reloading requested from client PID 1585 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:34:28.042372 systemd[1]: Reloading... Dec 13 01:34:28.227713 zram_generator::config[1635]: No configuration found. Dec 13 01:34:28.542673 ldconfig[1580]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:34:28.603051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:34:28.720617 systemd[1]: Reloading finished in 677 ms. Dec 13 01:34:28.766447 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:34:28.768487 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:34:28.783079 systemd[1]: Starting ensure-sysext.service... Dec 13 01:34:28.788246 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:34:28.811765 systemd[1]: Reloading requested from client PID 1685 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:34:28.812708 systemd[1]: Reloading... Dec 13 01:34:28.851304 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:34:28.852447 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:34:28.858364 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
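The (sd-merge) lines above record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-ami extension images onto /usr, followed by a daemon reload. The merge state is inspectable with the stock systemd-sysext verbs (status, merge, unmerge, refresh); a sketch:

```python
import subprocess

# "status" lists each hierarchy with the extensions currently merged into it;
# "refresh" would re-apply after images under /etc/extensions change.
out = subprocess.run(
    ["systemd-sysext", "status"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```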
Dec 13 01:34:28.859105 systemd-tmpfiles[1686]: ACLs are not supported, ignoring. Dec 13 01:34:28.859213 systemd-tmpfiles[1686]: ACLs are not supported, ignoring. Dec 13 01:34:28.874855 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:34:28.874871 systemd-tmpfiles[1686]: Skipping /boot Dec 13 01:34:28.928346 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:34:28.929742 systemd-tmpfiles[1686]: Skipping /boot Dec 13 01:34:28.965678 zram_generator::config[1712]: No configuration found. Dec 13 01:34:29.121630 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:34:29.194740 systemd[1]: Reloading finished in 381 ms. Dec 13 01:34:29.215230 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:34:29.220278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:34:29.237887 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:34:29.245936 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:34:29.255704 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:34:29.267416 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:34:29.271848 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:34:29.281015 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:34:29.295336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:34:29.298011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:34:29.301877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:34:29.306945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:34:29.318883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:34:29.321441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:34:29.321643 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:34:29.327010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:34:29.327384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:34:29.327628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:34:29.329047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:34:29.343025 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 13 01:34:29.350422 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:34:29.350893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:34:29.361239 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:34:29.365349 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:34:29.365636 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:34:29.367254 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:34:29.387747 systemd[1]: Finished ensure-sysext.service. Dec 13 01:34:29.389862 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:34:29.392535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:34:29.394002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:34:29.410895 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:34:29.413158 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:34:29.421304 systemd-udevd[1769]: Using default interface naming scheme 'v255'. Dec 13 01:34:29.429103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:34:29.430752 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:34:29.433160 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:34:29.435115 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:34:29.435324 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:34:29.438844 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:34:29.439046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:34:29.462003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:34:29.479052 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:34:29.526915 augenrules[1798]: No rules Dec 13 01:34:29.527135 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:34:29.531765 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:34:29.549160 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:34:29.559838 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:34:29.571746 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:34:29.573748 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:34:29.696252 systemd-resolved[1768]: Positive Trust Anchors: Dec 13 01:34:29.696271 systemd-resolved[1768]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:34:29.696332 systemd-resolved[1768]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:34:29.714015 systemd-resolved[1768]: Defaulting to hostname 'linux'. Dec 13 01:34:29.720979 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:34:29.725225 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:34:29.745175 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1812) Dec 13 01:34:29.745106 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:34:29.757812 systemd-networkd[1809]: lo: Link UP Dec 13 01:34:29.758155 systemd-networkd[1809]: lo: Gained carrier Dec 13 01:34:29.763512 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1812) Dec 13 01:34:29.760253 systemd-networkd[1809]: Enumeration completed Dec 13 01:34:29.760369 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:34:29.762416 systemd[1]: Reached target network.target - Network. Dec 13 01:34:29.770488 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:34:29.777803 (udev-worker)[1819]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:34:29.846722 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 01:34:29.872431 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Dec 13 01:34:29.877534 systemd-networkd[1809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:34:29.878970 systemd-networkd[1809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:34:29.888801 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:34:29.884890 systemd-networkd[1809]: eth0: Link UP Dec 13 01:34:29.885055 systemd-networkd[1809]: eth0: Gained carrier Dec 13 01:34:29.885076 systemd-networkd[1809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:34:29.894740 systemd-networkd[1809]: eth0: DHCPv4 address 172.31.20.66/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:34:29.899721 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:34:29.902852 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 13 01:34:29.911721 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 01:34:29.937080 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1822) Dec 13 01:34:29.987692 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:34:30.037247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
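The DHCPv4 lease above (172.31.20.66/20 with gateway 172.31.16.1, offered by 172.31.16.1) is internally consistent: a /20 containing 172.31.20.66 starts at 172.31.16.0, so the gateway is on-link. The check is mechanical with the stdlib ipaddress module:

```python
import ipaddress

iface = ipaddress.ip_interface("172.31.20.66/20")   # leased address from the log
gateway = ipaddress.ip_address("172.31.16.1")       # offered gateway

print(iface.network)                # 172.31.16.0/20
print(gateway in iface.network)     # True: the gateway is on-link
print(iface.network.num_addresses)  # 4096 addresses in a /20
```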
Dec 13 01:34:30.150800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:34:30.158303 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:34:30.161723 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:34:30.168922 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:34:30.192453 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:34:30.208683 lvm[1926]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:34:30.247044 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:34:30.383477 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:34:30.397877 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:34:30.400271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:34:30.422477 lvm[1932]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:34:30.402997 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:34:30.405995 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:34:30.407835 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:34:30.415683 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:34:30.420222 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:34:30.423003 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:34:30.425191 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:34:30.425247 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:34:30.426784 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:34:30.432544 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:34:30.437226 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:34:30.443984 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:34:30.446175 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:34:30.448058 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:34:30.451525 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:34:30.458225 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:34:30.458264 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:34:30.471876 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:34:30.490004 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:34:30.504122 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:34:30.514228 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:34:30.538135 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
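Unit names such as user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path above use systemd's path escaping, where "/" becomes "-" and a literal "-" becomes "\x2d". The stock systemd-escape tool reverses it; a sketch:

```python
import subprocess

name = r"var-lib-flatcar\x2dinstall-user_data"  # instance part of the unit name above
# --unescape --path reverses systemd's encoding back into a filesystem path.
out = subprocess.run(
    ["systemd-escape", "--unescape", "--path", name],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # /var/lib/flatcar-install/user_data
```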
Dec 13 01:34:30.538242 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:34:30.541922 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:34:30.568103 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:34:30.577975 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:34:30.590861 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:34:30.599194 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:34:30.606362 jq[1939]: false Dec 13 01:34:30.605290 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:34:30.617174 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:34:30.619157 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:34:30.621395 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:34:30.625960 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:34:30.631290 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:34:30.636796 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:34:30.645393 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:34:30.645639 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:34:30.697083 (ntainerd)[1960]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:34:30.716831 extend-filesystems[1940]: Found loop4 Dec 13 01:34:30.716831 extend-filesystems[1940]: Found loop5 Dec 13 01:34:30.716831 extend-filesystems[1940]: Found loop6 Dec 13 01:34:30.716831 extend-filesystems[1940]: Found loop7 Dec 13 01:34:30.716831 extend-filesystems[1940]: Found nvme0n1 Dec 13 01:34:30.716831 extend-filesystems[1940]: Found nvme0n1p1 Dec 13 01:34:30.742242 extend-filesystems[1940]: Found nvme0n1p2 Dec 13 01:34:30.742242 extend-filesystems[1940]: Found nvme0n1p3 Dec 13 01:34:30.742242 extend-filesystems[1940]: Found usr Dec 13 01:34:30.742242 extend-filesystems[1940]: Found nvme0n1p4 Dec 13 01:34:30.742242 extend-filesystems[1940]: Found nvme0n1p6 Dec 13 01:34:30.742242 extend-filesystems[1940]: Found nvme0n1p7 Dec 13 01:34:30.742242 extend-filesystems[1940]: Found nvme0n1p9 Dec 13 01:34:30.742242 extend-filesystems[1940]: Checking size of /dev/nvme0n1p9 Dec 13 01:34:30.740098 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 13 01:34:30.762826 jq[1952]: true Dec 13 01:34:30.732345 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:34:30.763278 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:34:30.763278 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:34:30.763278 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: ---------------------------------------------------- Dec 13 01:34:30.763278 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:34:30.763278 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:34:30.763278 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: corporation. Support and training for ntp-4 are Dec 13 01:34:30.763278 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: available at https://www.nwtime.org/support Dec 13 01:34:30.763278 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: ---------------------------------------------------- Dec 13 01:34:30.763278 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: proto: precision = 0.067 usec (-24) Dec 13 01:34:30.749716 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:34:30.732371 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:34:30.749763 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:34:30.732382 ntpd[1942]: ---------------------------------------------------- Dec 13 01:34:30.775993 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: basedate set to 2024-11-30 Dec 13 01:34:30.775993 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: gps base set to 2024-12-01 (week 2343) Dec 13 01:34:30.766211 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:34:30.732391 ntpd[1942]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:34:30.766239 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:34:30.732401 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:34:30.769273 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:34:30.732410 ntpd[1942]: corporation. Support and training for ntp-4 are Dec 13 01:34:30.769502 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 13 01:34:30.732420 ntpd[1942]: available at https://www.nwtime.org/support Dec 13 01:34:30.732429 ntpd[1942]: ---------------------------------------------------- Dec 13 01:34:30.739864 dbus-daemon[1938]: [system] SELinux support is enabled Dec 13 01:34:30.747176 ntpd[1942]: proto: precision = 0.067 usec (-24) Dec 13 01:34:30.784463 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:34:30.784463 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:34:30.784463 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:34:30.784463 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: Listen normally on 3 eth0 172.31.20.66:123 Dec 13 01:34:30.784463 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: Listen normally on 4 lo [::1]:123 Dec 13 01:34:30.784463 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: bind(21) AF_INET6 fe80::42b:5cff:fe44:ca9b%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:34:30.784463 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: unable to create socket on eth0 (5) for fe80::42b:5cff:fe44:ca9b%2#123 Dec 13 01:34:30.784463 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: failed to init interface for address fe80::42b:5cff:fe44:ca9b%2 Dec 13 01:34:30.784463 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: Listening on routing socket on fd #21 for interface updates Dec 13 01:34:30.772743 ntpd[1942]: basedate set to 2024-11-30 Dec 13 01:34:30.772769 ntpd[1942]: gps base set to 2024-12-01 (week 2343) Dec 13 01:34:30.778464 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:34:30.778583 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:34:30.779737 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:34:30.800306 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:34:30.800306 ntpd[1942]: 13 Dec 01:34:30 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:34:30.779790 ntpd[1942]: Listen normally on 3 eth0 172.31.20.66:123 Dec 13 01:34:30.779830 ntpd[1942]: Listen normally on 4 lo [::1]:123 Dec 13 01:34:30.779875 ntpd[1942]: bind(21) AF_INET6 fe80::42b:5cff:fe44:ca9b%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:34:30.779900 ntpd[1942]: unable to create socket on eth0 (5) for fe80::42b:5cff:fe44:ca9b%2#123 Dec 13 01:34:30.779916 ntpd[1942]: failed to init interface for address fe80::42b:5cff:fe44:ca9b%2 Dec 13 01:34:30.779947 ntpd[1942]: Listening on routing socket on fd #21 for interface updates Dec 13 01:34:30.790346 dbus-daemon[1938]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1809 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:34:30.790641 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:34:30.790692 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:34:30.830846 update_engine[1951]: I20241213 01:34:30.823073 1951 main.cc:92] Flatcar Update Engine starting Dec 13 01:34:30.830705 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:34:30.864025 jq[1970]: true Dec 13 01:34:30.863919 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
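The bind(21) failure above is a startup race: ntpd tried to bind eth0's IPv6 link-local address before the interface had finished bringing it up (networkd reports "Gained IPv6LL" about a second later, and ntpd, which is "Listening on routing socket on fd #21 for interface updates", binds the address shortly after). One way to watch that hand-off on a live host, assuming iproute2:

    # stream address additions/removals as the kernel announces them
    ip monitor address
    # in another shell: confirm ntpd's UDP/123 listeners once the address lands
    ss -ulpn 'sport = :123'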
Dec 13 01:34:30.873713 extend-filesystems[1940]: Resized partition /dev/nvme0n1p9 Dec 13 01:34:30.875526 update_engine[1951]: I20241213 01:34:30.870502 1951 update_check_scheduler.cc:74] Next update check in 10m53s Dec 13 01:34:30.885098 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:34:30.891347 extend-filesystems[1989]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:34:30.895509 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:34:30.902917 tar[1965]: linux-amd64/helm Dec 13 01:34:30.909907 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:34:30.910549 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:34:30.919258 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:34:30.919763 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:34:31.003553 systemd-logind[1950]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 01:34:31.004008 systemd-logind[1950]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 13 01:34:31.004037 systemd-logind[1950]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:34:31.011918 systemd-logind[1950]: New seat seat0. Dec 13 01:34:31.018842 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:34:31.035917 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:34:31.071367 extend-filesystems[1989]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:34:31.071367 extend-filesystems[1989]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:34:31.071367 extend-filesystems[1989]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:34:31.065989 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:34:31.080905 extend-filesystems[1940]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:34:31.066607 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
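The resize sequence above (resize2fs growing /dev/nvme0n1p9 from 553472 to 1489915 4k blocks while it is mounted on /) is ext4's online-grow path. A minimal manual sketch of the same operation, assuming cloud-utils' growpart is available for the partition-table step:

    # grow partition 9 to fill the disk, then grow the mounted ext4 online
    growpart /dev/nvme0n1 9
    resize2fs /dev/nvme0n1p9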
Dec 13 01:34:31.111730 coreos-metadata[1937]: Dec 13 01:34:31.110 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:34:31.117148 coreos-metadata[1937]: Dec 13 01:34:31.117 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:34:31.119710 coreos-metadata[1937]: Dec 13 01:34:31.119 INFO Fetch successful Dec 13 01:34:31.119710 coreos-metadata[1937]: Dec 13 01:34:31.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:34:31.121677 coreos-metadata[1937]: Dec 13 01:34:31.120 INFO Fetch successful Dec 13 01:34:31.121677 coreos-metadata[1937]: Dec 13 01:34:31.120 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:34:31.128866 coreos-metadata[1937]: Dec 13 01:34:31.128 INFO Fetch successful Dec 13 01:34:31.128968 coreos-metadata[1937]: Dec 13 01:34:31.128 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:34:31.130984 coreos-metadata[1937]: Dec 13 01:34:31.130 INFO Fetch successful Dec 13 01:34:31.131085 coreos-metadata[1937]: Dec 13 01:34:31.131 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:34:31.143155 coreos-metadata[1937]: Dec 13 01:34:31.143 INFO Fetch failed with 404: resource not found Dec 13 01:34:31.143155 coreos-metadata[1937]: Dec 13 01:34:31.143 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:34:31.149410 coreos-metadata[1937]: Dec 13 01:34:31.149 INFO Fetch successful Dec 13 01:34:31.149410 coreos-metadata[1937]: Dec 13 01:34:31.149 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:34:31.153909 coreos-metadata[1937]: Dec 13 01:34:31.153 INFO Fetch successful Dec 13 01:34:31.153909 coreos-metadata[1937]: Dec 13 01:34:31.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:34:31.154815 coreos-metadata[1937]: Dec 13 01:34:31.154 INFO Fetch successful Dec 13 01:34:31.154901 coreos-metadata[1937]: Dec 13 01:34:31.154 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:34:31.158969 bash[2016]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:34:31.162485 coreos-metadata[1937]: Dec 13 01:34:31.160 INFO Fetch successful Dec 13 01:34:31.162485 coreos-metadata[1937]: Dec 13 01:34:31.160 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:34:31.162485 coreos-metadata[1937]: Dec 13 01:34:31.161 INFO Fetch successful Dec 13 01:34:31.163161 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:34:31.173510 systemd[1]: Starting sshkeys.service... Dec 13 01:34:31.197371 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:34:31.197770 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:34:31.199909 dbus-daemon[1938]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1986 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:34:31.211164 systemd[1]: Starting polkit.service - Authorization Manager... 
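The coreos-metadata fetches above follow the EC2 IMDSv2 flow: a PUT to mint a session token, then GETs carrying that token in a header. The same requests by hand (endpoint and paths exactly as in the log; the 21600s TTL is an assumption):

    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2021-01-03/meta-data/instance-id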
Dec 13 01:34:31.221677 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1825) Dec 13 01:34:31.240594 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:34:31.251866 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:34:31.262858 polkitd[2026]: Started polkitd version 121 Dec 13 01:34:31.275707 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:34:31.277573 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:34:31.308567 polkitd[2026]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:34:31.310269 polkitd[2026]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:34:31.318038 polkitd[2026]: Finished loading, compiling and executing 2 rules Dec 13 01:34:31.322922 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:34:31.323145 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:34:31.331977 polkitd[2026]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:34:31.433764 systemd-hostnamed[1986]: Hostname set to (transient) Dec 13 01:34:31.434542 systemd-resolved[1768]: System hostname changed to 'ip-172-31-20-66'. Dec 13 01:34:31.448393 systemd-networkd[1809]: eth0: Gained IPv6LL Dec 13 01:34:31.468580 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:34:31.472145 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:34:31.487270 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:34:31.539090 coreos-metadata[2027]: Dec 13 01:34:31.532 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:34:31.542220 coreos-metadata[2027]: Dec 13 01:34:31.541 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:34:31.543678 coreos-metadata[2027]: Dec 13 01:34:31.543 INFO Fetch successful Dec 13 01:34:31.543678 coreos-metadata[2027]: Dec 13 01:34:31.543 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:34:31.544411 coreos-metadata[2027]: Dec 13 01:34:31.544 INFO Fetch successful Dec 13 01:34:31.787588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:31.796315 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:34:31.944213 locksmithd[1992]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:34:32.100115 sshd_keygen[1980]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:34:32.172013 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:34:32.189095 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:34:32.230157 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:34:32.231182 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:34:32.248081 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:34:32.302006 unknown[2027]: wrote ssh authorized keys file for user: core Dec 13 01:34:32.335561 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:34:32.343241 systemd[1]: Started getty@tty1.service - Getty on tty1. 
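polkitd, started a few entries above, reports loading rules from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d. Rules are small JavaScript snippets; a hypothetical example of the format (the action id is the real systemd-hostnamed one, but this particular rule is illustrative only, not from this host):

    cat <<'EOF' >/etc/polkit-1/rules.d/49-wheel-hostname.rules
    // hypothetical: let members of "wheel" set the hostname without a prompt
    polkit.addRule(function(action, subject) {
        if (action.id == "org.freedesktop.hostname1.set-hostname" &&
            subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
    });
    EOF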
Dec 13 01:34:33.359981 amazon-ssm-agent[2077]: Initializing new seelog logger Dec 13 01:34:33.359981 amazon-ssm-agent[2077]: New Seelog Logger Creation Complete Dec 13 01:34:33.359981 amazon-ssm-agent[2077]: 2024/12/13 01:34:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:34:33.359981 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:34:33.359981 amazon-ssm-agent[2077]: 2024/12/13 01:34:32 processing appconfig overrides Dec 13 01:34:33.359981 amazon-ssm-agent[2077]: 2024/12/13 01:34:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:34:33.359981 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:34:33.359981 amazon-ssm-agent[2077]: 2024/12/13 01:34:33 processing appconfig overrides Dec 13 01:34:33.365010 containerd[1960]: time="2024-12-13T01:34:33.359371545Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:34:32.356152 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:34:33.366567 amazon-ssm-agent[2077]: 2024/12/13 01:34:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:34:33.366567 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:34:33.366567 amazon-ssm-agent[2077]: 2024/12/13 01:34:33 processing appconfig overrides Dec 13 01:34:33.366567 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO Proxy environment variables: Dec 13 01:34:32.359098 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:34:33.369937 amazon-ssm-agent[2077]: 2024/12/13 01:34:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:34:33.369937 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:34:33.370924 amazon-ssm-agent[2077]: 2024/12/13 01:34:33 processing appconfig overrides Dec 13 01:34:33.444731 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:34:33.457794 update-ssh-keys[2166]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:34:33.453429 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:34:33.464322 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO https_proxy: Dec 13 01:34:33.465973 systemd[1]: Finished sshkeys.service. Dec 13 01:34:33.468285 containerd[1960]: time="2024-12-13T01:34:33.466347546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:34:33.468672 containerd[1960]: time="2024-12-13T01:34:33.468525521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:34:33.468672 containerd[1960]: time="2024-12-13T01:34:33.468573953Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:34:33.468672 containerd[1960]: time="2024-12-13T01:34:33.468599514Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:34:33.468831 containerd[1960]: time="2024-12-13T01:34:33.468807484Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Dec 13 01:34:33.468874 containerd[1960]: time="2024-12-13T01:34:33.468840111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.468922991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.468944990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.469477504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.469504568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.469524282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.469537762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.469662616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.470008828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.470218428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.470243139Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:34:33.472746 containerd[1960]: time="2024-12-13T01:34:33.470345687Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:34:33.473447 containerd[1960]: time="2024-12-13T01:34:33.470401979Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:34:33.483389 containerd[1960]: time="2024-12-13T01:34:33.482507595Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:34:33.483389 containerd[1960]: time="2024-12-13T01:34:33.482609303Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:34:33.483389 containerd[1960]: time="2024-12-13T01:34:33.482636392Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:34:33.483389 containerd[1960]: time="2024-12-13T01:34:33.482852180Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Dec 13 01:34:33.483389 containerd[1960]: time="2024-12-13T01:34:33.482886390Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:34:33.483389 containerd[1960]: time="2024-12-13T01:34:33.483154216Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486028316Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486216040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486242736Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486268715Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486288769Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486311432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486332802Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486369593Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486392303Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486413105Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486432761Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486451829Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486481329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.486672 containerd[1960]: time="2024-12-13T01:34:33.486503202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.487321 containerd[1960]: time="2024-12-13T01:34:33.486523371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.487321 containerd[1960]: time="2024-12-13T01:34:33.486541314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.487321 containerd[1960]: time="2024-12-13T01:34:33.486559419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 01:34:33.487321 containerd[1960]: time="2024-12-13T01:34:33.486579454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.487321 containerd[1960]: time="2024-12-13T01:34:33.486598413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.487321 containerd[1960]: time="2024-12-13T01:34:33.486618565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.490703579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.490830628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.490860860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.490883229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.490905486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.491207739Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.491261916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.491285805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.491304489Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.491401181Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.491428258Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.491574814Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:34:33.495052 containerd[1960]: time="2024-12-13T01:34:33.491597035Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:34:33.495640 containerd[1960]: time="2024-12-13T01:34:33.491613018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495640 containerd[1960]: time="2024-12-13T01:34:33.491631887Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:34:33.495640 containerd[1960]: time="2024-12-13T01:34:33.491647181Z" level=info msg="NRI interface is disabled by configuration." 
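The long plugin walk above is containerd 1.7 registering its built-in plugins at startup and skipping the ones whose preconditions fail (aufs, blockfile, btrfs, devmapper, zfs, the tracing processors). Once the daemon is up, the same inventory can be queried with the ctr client that ships with containerd:

    # TYPE/ID/STATUS for every registered plugin, including the skipped ones
    ctr plugins ls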
Dec 13 01:34:33.495640 containerd[1960]: time="2024-12-13T01:34:33.491679925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:34:33.495811 containerd[1960]: time="2024-12-13T01:34:33.492755175Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:34:33.495811 containerd[1960]: time="2024-12-13T01:34:33.493162512Z" level=info msg="Connect containerd service" Dec 13 01:34:33.495811 containerd[1960]: time="2024-12-13T01:34:33.493221855Z" level=info msg="using legacy CRI server" Dec 13 01:34:33.495811 containerd[1960]: time="2024-12-13T01:34:33.493232921Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:34:33.495811 containerd[1960]: time="2024-12-13T01:34:33.493473427Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:34:33.495811 containerd[1960]: time="2024-12-13T01:34:33.494769502Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:34:33.495811 containerd[1960]: time="2024-12-13T01:34:33.494932029Z" level=info msg="Start subscribing containerd event" Dec 13 01:34:33.498157 containerd[1960]: time="2024-12-13T01:34:33.497475617Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:34:33.498157 containerd[1960]: time="2024-12-13T01:34:33.497505187Z" level=info msg="Start recovering state" Dec 13 01:34:33.498157 containerd[1960]: time="2024-12-13T01:34:33.497550803Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:34:33.498157 containerd[1960]: time="2024-12-13T01:34:33.497638288Z" level=info msg="Start event monitor" Dec 13 01:34:33.498157 containerd[1960]: time="2024-12-13T01:34:33.497691179Z" level=info msg="Start snapshots syncer" Dec 13 01:34:33.498157 containerd[1960]: time="2024-12-13T01:34:33.497705266Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:34:33.498157 containerd[1960]: time="2024-12-13T01:34:33.497717855Z" level=info msg="Start streaming server" Dec 13 01:34:33.498157 containerd[1960]: time="2024-12-13T01:34:33.497921434Z" level=info msg="containerd successfully booted in 0.229581s" Dec 13 01:34:33.498044 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:34:33.548671 tar[1965]: linux-amd64/LICENSE Dec 13 01:34:33.548671 tar[1965]: linux-amd64/README.md Dec 13 01:34:33.567696 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO http_proxy: Dec 13 01:34:33.571261 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:34:33.668765 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO no_proxy: Dec 13 01:34:33.732944 ntpd[1942]: Listen normally on 6 eth0 [fe80::42b:5cff:fe44:ca9b%2]:123 Dec 13 01:34:33.735733 ntpd[1942]: 13 Dec 01:34:33 ntpd[1942]: Listen normally on 6 eth0 [fe80::42b:5cff:fe44:ca9b%2]:123 Dec 13 01:34:33.769182 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:34:33.868957 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:34:33.968318 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO Agent will take identity from EC2 Dec 13 01:34:34.069056 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:34:34.168492 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:34:34.201456 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:34:34.201456 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:34:34.201456 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Dec 13 01:34:34.201456 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:34:34.201456 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Dec 13 01:34:34.201921 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO [Registrar] Starting registrar module Dec 13 01:34:34.201921 amazon-ssm-agent[2077]: 2024-12-13 01:34:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:34:34.201921 amazon-ssm-agent[2077]: 2024-12-13 01:34:34 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:34:34.201921 amazon-ssm-agent[2077]: 2024-12-13 01:34:34 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:34:34.201921 amazon-ssm-agent[2077]: 2024-12-13 01:34:34 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:34:34.201921 amazon-ssm-agent[2077]: 2024-12-13 01:34:34 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:34:34.268363 amazon-ssm-agent[2077]: 2024-12-13 01:34:34 INFO [CredentialRefresher] Next credential rotation will be in 31.04165905303333 minutes Dec 13 01:34:35.225081 amazon-ssm-agent[2077]: 2024-12-13 01:34:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:34:35.321474 amazon-ssm-agent[2077]: 2024-12-13 01:34:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2182) started Dec 13 01:34:35.422454 amazon-ssm-agent[2077]: 2024-12-13 01:34:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:34:35.959480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:35.962376 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:34:35.966815 systemd[1]: Startup finished in 905ms (kernel) + 7.702s (initrd) + 10.510s (userspace) = 19.118s. Dec 13 01:34:36.113238 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:34:37.409585 kubelet[2198]: E1213 01:34:37.409529 2198 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:34:37.412168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:34:37.412377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:34:37.413145 systemd[1]: kubelet.service: Consumed 1.005s CPU time. Dec 13 01:34:38.348786 systemd-resolved[1768]: Clock change detected. Flushing caches. Dec 13 01:34:40.957658 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:34:40.968672 systemd[1]: Started sshd@0-172.31.20.66:22-139.178.68.195:34384.service - OpenSSH per-connection server daemon (139.178.68.195:34384). Dec 13 01:34:41.163090 sshd[2211]: Accepted publickey for core from 139.178.68.195 port 34384 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:41.172779 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:41.195820 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:34:41.216997 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:34:41.224016 systemd-logind[1950]: New session 1 of user core. 
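The "Startup finished in 905ms (kernel) + 7.702s (initrd) + 10.510s (userspace) = 19.118s" entry above is the same figure systemd-analyze summarizes; the per-unit breakdown is available on the booted system:

    systemd-analyze            # overall kernel/initrd/userspace split
    systemd-analyze blame      # slowest units first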
Dec 13 01:34:41.242561 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:34:41.249002 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:34:41.266380 (systemd)[2215]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:34:41.448999 systemd[2215]: Queued start job for default target default.target. Dec 13 01:34:41.457998 systemd[2215]: Created slice app.slice - User Application Slice. Dec 13 01:34:41.458042 systemd[2215]: Reached target paths.target - Paths. Dec 13 01:34:41.458062 systemd[2215]: Reached target timers.target - Timers. Dec 13 01:34:41.459513 systemd[2215]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:34:41.474695 systemd[2215]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:34:41.474843 systemd[2215]: Reached target sockets.target - Sockets. Dec 13 01:34:41.474864 systemd[2215]: Reached target basic.target - Basic System. Dec 13 01:34:41.474967 systemd[2215]: Reached target default.target - Main User Target. Dec 13 01:34:41.475726 systemd[2215]: Startup finished in 187ms. Dec 13 01:34:41.476043 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:34:41.483741 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:34:41.644908 systemd[1]: Started sshd@1-172.31.20.66:22-139.178.68.195:34386.service - OpenSSH per-connection server daemon (139.178.68.195:34386). Dec 13 01:34:41.810709 sshd[2226]: Accepted publickey for core from 139.178.68.195 port 34386 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:41.812643 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:41.817607 systemd-logind[1950]: New session 2 of user core. Dec 13 01:34:41.834756 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:34:41.954856 sshd[2226]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:41.958595 systemd[1]: sshd@1-172.31.20.66:22-139.178.68.195:34386.service: Deactivated successfully. Dec 13 01:34:41.960845 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:34:41.962262 systemd-logind[1950]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:34:41.963847 systemd-logind[1950]: Removed session 2. Dec 13 01:34:41.989863 systemd[1]: Started sshd@2-172.31.20.66:22-139.178.68.195:34392.service - OpenSSH per-connection server daemon (139.178.68.195:34392). Dec 13 01:34:42.178283 sshd[2233]: Accepted publickey for core from 139.178.68.195 port 34392 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:42.180157 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:42.186308 systemd-logind[1950]: New session 3 of user core. Dec 13 01:34:42.202766 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:34:42.319949 sshd[2233]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:42.323918 systemd[1]: sshd@2-172.31.20.66:22-139.178.68.195:34392.service: Deactivated successfully. Dec 13 01:34:42.325984 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:34:42.327886 systemd-logind[1950]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:34:42.329177 systemd-logind[1950]: Removed session 3. Dec 13 01:34:42.356252 systemd[1]: Started sshd@3-172.31.20.66:22-139.178.68.195:34404.service - OpenSSH per-connection server daemon (139.178.68.195:34404). 
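Sessions 2 and 3 above each open and close within about a second, which is typical of scripted SSH commands rather than interactive logins. logind's view of this churn can be inspected with:

    loginctl list-sessions
    loginctl user-status core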
Dec 13 01:34:42.511996 sshd[2240]: Accepted publickey for core from 139.178.68.195 port 34404 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:42.513357 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:42.520891 systemd-logind[1950]: New session 4 of user core. Dec 13 01:34:42.523755 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:34:42.646333 sshd[2240]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:42.650670 systemd[1]: sshd@3-172.31.20.66:22-139.178.68.195:34404.service: Deactivated successfully. Dec 13 01:34:42.653065 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:34:42.654409 systemd-logind[1950]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:34:42.656219 systemd-logind[1950]: Removed session 4. Dec 13 01:34:42.685227 systemd[1]: Started sshd@4-172.31.20.66:22-139.178.68.195:34410.service - OpenSSH per-connection server daemon (139.178.68.195:34410). Dec 13 01:34:42.855291 sshd[2247]: Accepted publickey for core from 139.178.68.195 port 34410 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:42.857312 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:42.861846 systemd-logind[1950]: New session 5 of user core. Dec 13 01:34:42.868747 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:34:42.988221 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:34:42.988833 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:34:43.004897 sudo[2250]: pam_unix(sudo:session): session closed for user root Dec 13 01:34:43.028628 sshd[2247]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:43.034354 systemd-logind[1950]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:34:43.036388 systemd[1]: sshd@4-172.31.20.66:22-139.178.68.195:34410.service: Deactivated successfully. Dec 13 01:34:43.039332 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:34:43.040463 systemd-logind[1950]: Removed session 5. Dec 13 01:34:43.061185 systemd[1]: Started sshd@5-172.31.20.66:22-139.178.68.195:34422.service - OpenSSH per-connection server daemon (139.178.68.195:34422). Dec 13 01:34:43.229437 sshd[2255]: Accepted publickey for core from 139.178.68.195 port 34422 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:43.230513 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:43.237344 systemd-logind[1950]: New session 6 of user core. Dec 13 01:34:43.244789 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:34:43.345049 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:34:43.345437 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:34:43.350022 sudo[2259]: pam_unix(sudo:session): session closed for user root Dec 13 01:34:43.357297 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:34:43.357834 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:34:43.377098 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
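The first sudo invocation above flips SELinux to enforcing (/usr/sbin/setenforce 1). The corresponding checks on such a host, with sestatus hedged on the policy tools being installed:

    getenforce           # prints Enforcing / Permissive / Disabled
    sudo setenforce 1    # enforcing until reboot, as in the log
    sestatus             # fuller report, if policycoreutils is present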
Dec 13 01:34:43.382562 auditctl[2262]: No rules Dec 13 01:34:43.384625 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:34:43.384905 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:34:43.395982 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:34:43.426900 augenrules[2280]: No rules Dec 13 01:34:43.428652 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:34:43.430401 sudo[2258]: pam_unix(sudo:session): session closed for user root Dec 13 01:34:43.454029 sshd[2255]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:43.461309 systemd[1]: sshd@5-172.31.20.66:22-139.178.68.195:34422.service: Deactivated successfully. Dec 13 01:34:43.463926 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:34:43.464808 systemd-logind[1950]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:34:43.466007 systemd-logind[1950]: Removed session 6. Dec 13 01:34:43.492851 systemd[1]: Started sshd@6-172.31.20.66:22-139.178.68.195:34434.service - OpenSSH per-connection server daemon (139.178.68.195:34434). Dec 13 01:34:43.664481 sshd[2288]: Accepted publickey for core from 139.178.68.195 port 34434 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:43.667931 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:43.681430 systemd-logind[1950]: New session 7 of user core. Dec 13 01:34:43.689745 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:34:43.802974 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:34:43.803613 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:34:44.407260 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:34:44.418005 (dockerd)[2307]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:34:44.946354 dockerd[2307]: time="2024-12-13T01:34:44.946295524Z" level=info msg="Starting up" Dec 13 01:34:45.104507 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1935288114-merged.mount: Deactivated successfully. Dec 13 01:34:45.148980 dockerd[2307]: time="2024-12-13T01:34:45.148930157Z" level=info msg="Loading containers: start." Dec 13 01:34:45.305552 kernel: Initializing XFRM netlink socket Dec 13 01:34:45.359980 (udev-worker)[2373]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:34:45.435462 systemd-networkd[1809]: docker0: Link UP Dec 13 01:34:45.459571 dockerd[2307]: time="2024-12-13T01:34:45.459507358Z" level=info msg="Loading containers: done." Dec 13 01:34:45.479389 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck828037463-merged.mount: Deactivated successfully. 
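The "(dockerd)... Referenced but unset environment variable" notice means docker.service references DOCKER_OPTS and friends without any value behind them. If a value were needed, the usual mechanism is a systemd drop-in; a sketch, where the --log-level option is a hypothetical example, not taken from this host:

    mkdir -p /etc/systemd/system/docker.service.d
    cat <<'EOF' >/etc/systemd/system/docker.service.d/10-opts.conf
    [Service]
    Environment="DOCKER_OPTS=--log-level=warn"
    EOF
    systemctl daemon-reload && systemctl restart docker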
Dec 13 01:34:45.481163 dockerd[2307]: time="2024-12-13T01:34:45.480546856Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:34:45.481281 dockerd[2307]: time="2024-12-13T01:34:45.481251891Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:34:45.481415 dockerd[2307]: time="2024-12-13T01:34:45.481391936Z" level=info msg="Daemon has completed initialization" Dec 13 01:34:45.516010 dockerd[2307]: time="2024-12-13T01:34:45.515407437Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:34:45.515513 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:34:46.804727 containerd[1960]: time="2024-12-13T01:34:46.804681292Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:34:47.472183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864561606.mount: Deactivated successfully. Dec 13 01:34:48.278602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:34:48.283829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:48.513913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:48.528051 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:34:48.612816 kubelet[2510]: E1213 01:34:48.612769 2510 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:34:48.616907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:34:48.617105 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
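kubelet keeps crash-looping because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init/join, so these failures are expected until provisioning (the install.sh run above) completes. For illustration only, a minimal hand-written KubeletConfiguration of the kind that satisfies this file check (the field values are assumptions, and this alone would not join the node to a cluster):

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF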
Dec 13 01:34:50.090331 containerd[1960]: time="2024-12-13T01:34:50.090272143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:50.091883 containerd[1960]: time="2024-12-13T01:34:50.091830300Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 01:34:50.093936 containerd[1960]: time="2024-12-13T01:34:50.092808374Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:50.102012 containerd[1960]: time="2024-12-13T01:34:50.101960496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:50.103413 containerd[1960]: time="2024-12-13T01:34:50.103362143Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.298636103s" Dec 13 01:34:50.103413 containerd[1960]: time="2024-12-13T01:34:50.103409655Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:34:50.133812 containerd[1960]: time="2024-12-13T01:34:50.133699988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:34:52.953021 containerd[1960]: time="2024-12-13T01:34:52.952968001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:52.955164 containerd[1960]: time="2024-12-13T01:34:52.954828485Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 01:34:52.958377 containerd[1960]: time="2024-12-13T01:34:52.956348783Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:52.959667 containerd[1960]: time="2024-12-13T01:34:52.959629246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:52.961107 containerd[1960]: time="2024-12-13T01:34:52.961070750Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.827324927s" Dec 13 01:34:52.961239 containerd[1960]: time="2024-12-13T01:34:52.961220023Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 13 01:34:53.006474 containerd[1960]: time="2024-12-13T01:34:53.006434374Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:34:55.347354 containerd[1960]: time="2024-12-13T01:34:55.347302766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:55.348700 containerd[1960]: time="2024-12-13T01:34:55.348627074Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 01:34:55.350741 containerd[1960]: time="2024-12-13T01:34:55.350002041Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:55.353643 containerd[1960]: time="2024-12-13T01:34:55.353602730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:55.355108 containerd[1960]: time="2024-12-13T01:34:55.355051953Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 2.348578626s" Dec 13 01:34:55.355398 containerd[1960]: time="2024-12-13T01:34:55.355369727Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:34:55.413798 containerd[1960]: time="2024-12-13T01:34:55.413755082Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:34:56.678885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311050011.mount: Deactivated successfully.
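The pulls above fetch the Kubernetes v1.30.8 control-plane images through containerd's CRI plugin (most likely driven by a "kubeadm config images pull"-style step from the install script run earlier; that attribution is an assumption). An equivalent fetch by hand with containerd's own client:

    # pull into the k8s.io namespace, where the CRI plugin looks for images
    ctr --namespace k8s.io images pull registry.k8s.io/kube-proxy:v1.30.8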
Dec 13 01:34:57.382133 containerd[1960]: time="2024-12-13T01:34:57.382080376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:57.383670 containerd[1960]: time="2024-12-13T01:34:57.383620502Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 01:34:57.386631 containerd[1960]: time="2024-12-13T01:34:57.386062777Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:57.390300 containerd[1960]: time="2024-12-13T01:34:57.390253403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:57.391296 containerd[1960]: time="2024-12-13T01:34:57.391254594Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.97744715s" Dec 13 01:34:57.391391 containerd[1960]: time="2024-12-13T01:34:57.391301799Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:34:57.436219 containerd[1960]: time="2024-12-13T01:34:57.434951521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:34:57.967761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271127743.mount: Deactivated successfully. Dec 13 01:34:58.621386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:34:58.628998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:34:58.968730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:34:58.985963 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:34:59.142190 kubelet[2606]: E1213 01:34:59.142121 2606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:34:59.155733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:34:59.156160 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
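Each kubelet failure above is followed by "Scheduled restart job, restart counter is at N": systemd is applying the unit's Restart= policy and tracking the attempt count. To inspect that policy and the counter on a live system (the NRestarts property needs a reasonably recent systemd, which this image should have):

    systemctl show kubelet.service -p Restart,RestartUSec,NRestarts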
Dec 13 01:34:59.452533 containerd[1960]: time="2024-12-13T01:34:59.452463265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:59.455452 containerd[1960]: time="2024-12-13T01:34:59.455233503Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:34:59.456474 containerd[1960]: time="2024-12-13T01:34:59.456437518Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:59.459544 containerd[1960]: time="2024-12-13T01:34:59.459468433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:59.461094 containerd[1960]: time="2024-12-13T01:34:59.461049492Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.024645922s" Dec 13 01:34:59.461368 containerd[1960]: time="2024-12-13T01:34:59.461088893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:34:59.533292 containerd[1960]: time="2024-12-13T01:34:59.533121566Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:35:00.086777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651007461.mount: Deactivated successfully. 
Dec 13 01:35:00.106452 containerd[1960]: time="2024-12-13T01:35:00.106382213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:00.108775 containerd[1960]: time="2024-12-13T01:35:00.108689864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:35:00.110102 containerd[1960]: time="2024-12-13T01:35:00.110032466Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:00.117408 containerd[1960]: time="2024-12-13T01:35:00.117360410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:00.119187 containerd[1960]: time="2024-12-13T01:35:00.118744060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 584.70393ms" Dec 13 01:35:00.119187 containerd[1960]: time="2024-12-13T01:35:00.118790924Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:35:00.167439 containerd[1960]: time="2024-12-13T01:35:00.167401925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:35:00.875000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350608084.mount: Deactivated successfully. Dec 13 01:35:02.107081 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 01:35:05.113594 containerd[1960]: time="2024-12-13T01:35:05.113538478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:05.115104 containerd[1960]: time="2024-12-13T01:35:05.115052416Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 01:35:05.116546 containerd[1960]: time="2024-12-13T01:35:05.116066696Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:05.119549 containerd[1960]: time="2024-12-13T01:35:05.119193763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:05.121396 containerd[1960]: time="2024-12-13T01:35:05.120683041Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.953241408s" Dec 13 01:35:05.121396 containerd[1960]: time="2024-12-13T01:35:05.120733695Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:35:09.100646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:35:09.113119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:35:09.152847 systemd[1]: Reloading requested from client PID 2743 ('systemctl') (unit session-7.scope)... Dec 13 01:35:09.152867 systemd[1]: Reloading... Dec 13 01:35:09.314670 zram_generator::config[2783]: No configuration found. Dec 13 01:35:09.612132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:35:09.732307 systemd[1]: Reloading finished in 578 ms. Dec 13 01:35:09.807581 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:35:09.807717 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:35:09.808159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:35:09.817088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:35:10.094089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:35:10.098410 (kubelet)[2842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:35:10.197568 kubelet[2842]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:35:10.197568 kubelet[2842]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:35:10.197568 kubelet[2842]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:35:10.197568 kubelet[2842]: I1213 01:35:10.196896 2842 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:35:10.498149 kubelet[2842]: I1213 01:35:10.497967 2842 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:35:10.498149 kubelet[2842]: I1213 01:35:10.498146 2842 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:35:10.498445 kubelet[2842]: I1213 01:35:10.498422 2842 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:35:10.531424 kubelet[2842]: I1213 01:35:10.531391 2842 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:35:10.533492 kubelet[2842]: E1213 01:35:10.533459 2842 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:10.569263 kubelet[2842]: I1213 01:35:10.569225 2842 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:35:10.577227 kubelet[2842]: I1213 01:35:10.576769 2842 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:35:10.577227 kubelet[2842]: I1213 01:35:10.576847 2842 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-66","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:35:10.578093 kubelet[2842]: I1213 01:35:10.578062 2842 topology_manager.go:138] "Creating topology 
manager with none policy" Dec 13 01:35:10.578093 kubelet[2842]: I1213 01:35:10.578097 2842 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:35:10.582154 kubelet[2842]: I1213 01:35:10.582105 2842 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:35:10.585545 kubelet[2842]: I1213 01:35:10.583995 2842 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:35:10.585545 kubelet[2842]: I1213 01:35:10.584028 2842 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:35:10.585545 kubelet[2842]: I1213 01:35:10.584060 2842 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:35:10.585545 kubelet[2842]: I1213 01:35:10.584083 2842 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:35:10.602307 kubelet[2842]: W1213 01:35:10.600957 2842 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:10.602307 kubelet[2842]: E1213 01:35:10.602086 2842 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:10.602307 kubelet[2842]: W1213 01:35:10.602210 2842 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-66&limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:10.602307 kubelet[2842]: E1213 01:35:10.602258 2842 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-66&limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:10.607362 kubelet[2842]: I1213 01:35:10.605475 2842 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:35:10.612527 kubelet[2842]: I1213 01:35:10.610087 2842 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:35:10.612527 kubelet[2842]: W1213 01:35:10.610988 2842 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:35:10.612701 kubelet[2842]: I1213 01:35:10.612685 2842 server.go:1264] "Started kubelet" Dec 13 01:35:10.629873 kubelet[2842]: I1213 01:35:10.629794 2842 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:35:10.640782 kubelet[2842]: I1213 01:35:10.640434 2842 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:35:10.641806 kubelet[2842]: I1213 01:35:10.640400 2842 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:35:10.641806 kubelet[2842]: I1213 01:35:10.641772 2842 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:35:10.646191 kubelet[2842]: E1213 01:35:10.643016 2842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.66:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.66:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-66.181098a99283d3bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-66,UID:ip-172-31-20-66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-66,},FirstTimestamp:2024-12-13 01:35:10.612648895 +0000 UTC m=+0.505909664,LastTimestamp:2024-12-13 01:35:10.612648895 +0000 UTC m=+0.505909664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-66,}" Dec 13 01:35:10.650308 kubelet[2842]: I1213 01:35:10.650122 2842 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:35:10.656233 kubelet[2842]: E1213 01:35:10.654851 2842 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-66\" not found" Dec 13 01:35:10.656233 kubelet[2842]: I1213 01:35:10.654920 2842 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:35:10.661646 kubelet[2842]: I1213 01:35:10.658200 2842 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:35:10.661646 kubelet[2842]: I1213 01:35:10.658299 2842 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:35:10.661646 kubelet[2842]: W1213 01:35:10.659230 2842 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:10.661646 kubelet[2842]: E1213 01:35:10.659291 2842 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:10.666703 kubelet[2842]: E1213 01:35:10.666385 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-66?timeout=10s\": dial tcp 172.31.20.66:6443: connect: connection refused" interval="200ms" Dec 13 01:35:10.669204 kubelet[2842]: I1213 01:35:10.669084 2842 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:35:10.669822 kubelet[2842]: I1213 01:35:10.669454 2842 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:35:10.670738 kubelet[2842]: E1213 01:35:10.670715 2842 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:35:10.672782 kubelet[2842]: I1213 01:35:10.672758 2842 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:35:10.703393 kubelet[2842]: I1213 01:35:10.703349 2842 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:35:10.707080 kubelet[2842]: I1213 01:35:10.707042 2842 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:35:10.707080 kubelet[2842]: I1213 01:35:10.707082 2842 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:35:10.707262 kubelet[2842]: I1213 01:35:10.707105 2842 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:35:10.707262 kubelet[2842]: E1213 01:35:10.707152 2842 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:35:10.710397 kubelet[2842]: W1213 01:35:10.710123 2842 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:10.710397 kubelet[2842]: E1213 01:35:10.710193 2842 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:10.717146 kubelet[2842]: I1213 01:35:10.716889 2842 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:35:10.717146 kubelet[2842]: I1213 01:35:10.716908 2842 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:35:10.717146 kubelet[2842]: I1213 01:35:10.716928 2842 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:35:10.721131 kubelet[2842]: I1213 01:35:10.721094 2842 policy_none.go:49] "None policy: Start" Dec 13 01:35:10.721975 kubelet[2842]: I1213 01:35:10.721952 2842 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:35:10.721975 kubelet[2842]: I1213 01:35:10.721980 2842 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:35:10.737188 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:35:10.749992 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:35:10.757556 kubelet[2842]: I1213 01:35:10.757509 2842 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-66" Dec 13 01:35:10.758026 kubelet[2842]: E1213 01:35:10.758001 2842 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.66:6443/api/v1/nodes\": dial tcp 172.31.20.66:6443: connect: connection refused" node="ip-172-31-20-66" Dec 13 01:35:10.761662 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 13 01:35:10.764009 kubelet[2842]: I1213 01:35:10.763981 2842 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:35:10.764268 kubelet[2842]: I1213 01:35:10.764230 2842 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:35:10.767540 kubelet[2842]: I1213 01:35:10.765784 2842 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:35:10.769369 kubelet[2842]: E1213 01:35:10.769332 2842 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-66\" not found" Dec 13 01:35:10.807420 kubelet[2842]: I1213 01:35:10.807349 2842 topology_manager.go:215] "Topology Admit Handler" podUID="a979a8d35a2f99582b98c32cd27c6ff3" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:10.808955 kubelet[2842]: I1213 01:35:10.808920 2842 topology_manager.go:215] "Topology Admit Handler" podUID="6de13d3b069536bcc80c7d3c0241be5a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-66" Dec 13 01:35:10.810231 kubelet[2842]: I1213 01:35:10.810206 2842 topology_manager.go:215] "Topology Admit Handler" podUID="7d33ce58a806bdc6b27ff1c729af0d65" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-66" Dec 13 01:35:10.826765 systemd[1]: Created slice kubepods-burstable-poda979a8d35a2f99582b98c32cd27c6ff3.slice - libcontainer container kubepods-burstable-poda979a8d35a2f99582b98c32cd27c6ff3.slice. Dec 13 01:35:10.841855 systemd[1]: Created slice kubepods-burstable-pod6de13d3b069536bcc80c7d3c0241be5a.slice - libcontainer container kubepods-burstable-pod6de13d3b069536bcc80c7d3c0241be5a.slice. Dec 13 01:35:10.854419 systemd[1]: Created slice kubepods-burstable-pod7d33ce58a806bdc6b27ff1c729af0d65.slice - libcontainer container kubepods-burstable-pod7d33ce58a806bdc6b27ff1c729af0d65.slice. 
Dec 13 01:35:10.867230 kubelet[2842]: E1213 01:35:10.867173 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-66?timeout=10s\": dial tcp 172.31.20.66:6443: connect: connection refused" interval="400ms" Dec 13 01:35:10.960586 kubelet[2842]: I1213 01:35:10.960187 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d33ce58a806bdc6b27ff1c729af0d65-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-66\" (UID: \"7d33ce58a806bdc6b27ff1c729af0d65\") " pod="kube-system/kube-apiserver-ip-172-31-20-66" Dec 13 01:35:10.960586 kubelet[2842]: I1213 01:35:10.960232 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: \"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:10.960586 kubelet[2842]: I1213 01:35:10.960260 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: \"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:10.960586 kubelet[2842]: I1213 01:35:10.960341 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d33ce58a806bdc6b27ff1c729af0d65-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-66\" (UID: \"7d33ce58a806bdc6b27ff1c729af0d65\") " pod="kube-system/kube-apiserver-ip-172-31-20-66" Dec 13 01:35:10.960586 kubelet[2842]: I1213 01:35:10.960369 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6de13d3b069536bcc80c7d3c0241be5a-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-66\" (UID: \"6de13d3b069536bcc80c7d3c0241be5a\") " pod="kube-system/kube-scheduler-ip-172-31-20-66" Dec 13 01:35:10.960895 kubelet[2842]: I1213 01:35:10.960392 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d33ce58a806bdc6b27ff1c729af0d65-ca-certs\") pod \"kube-apiserver-ip-172-31-20-66\" (UID: \"7d33ce58a806bdc6b27ff1c729af0d65\") " pod="kube-system/kube-apiserver-ip-172-31-20-66" Dec 13 01:35:10.960895 kubelet[2842]: I1213 01:35:10.960412 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: \"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:10.960895 kubelet[2842]: I1213 01:35:10.960435 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: 
\"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:10.960895 kubelet[2842]: I1213 01:35:10.960458 2842 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: \"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:10.961084 kubelet[2842]: I1213 01:35:10.961064 2842 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-66" Dec 13 01:35:10.961630 kubelet[2842]: E1213 01:35:10.961595 2842 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.66:6443/api/v1/nodes\": dial tcp 172.31.20.66:6443: connect: connection refused" node="ip-172-31-20-66" Dec 13 01:35:11.141132 containerd[1960]: time="2024-12-13T01:35:11.140952372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-66,Uid:a979a8d35a2f99582b98c32cd27c6ff3,Namespace:kube-system,Attempt:0,}" Dec 13 01:35:11.157367 containerd[1960]: time="2024-12-13T01:35:11.157305289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-66,Uid:6de13d3b069536bcc80c7d3c0241be5a,Namespace:kube-system,Attempt:0,}" Dec 13 01:35:11.159245 containerd[1960]: time="2024-12-13T01:35:11.159202712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-66,Uid:7d33ce58a806bdc6b27ff1c729af0d65,Namespace:kube-system,Attempt:0,}" Dec 13 01:35:11.269595 kubelet[2842]: E1213 01:35:11.269543 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-66?timeout=10s\": dial tcp 172.31.20.66:6443: connect: connection refused" interval="800ms" Dec 13 01:35:11.363503 kubelet[2842]: I1213 01:35:11.363469 2842 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-66" Dec 13 01:35:11.363921 kubelet[2842]: E1213 01:35:11.363888 2842 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.66:6443/api/v1/nodes\": dial tcp 172.31.20.66:6443: connect: connection refused" node="ip-172-31-20-66" Dec 13 01:35:11.607972 kubelet[2842]: W1213 01:35:11.607739 2842 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:11.607972 kubelet[2842]: E1213 01:35:11.607866 2842 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:11.679035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459912315.mount: Deactivated successfully. 
Dec 13 01:35:11.700294 kubelet[2842]: W1213 01:35:11.699500 2842 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:11.700294 kubelet[2842]: E1213 01:35:11.699606 2842 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:11.700499 containerd[1960]: time="2024-12-13T01:35:11.699578823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:35:11.701983 containerd[1960]: time="2024-12-13T01:35:11.701927242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:35:11.704164 containerd[1960]: time="2024-12-13T01:35:11.704056710Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:35:11.707401 containerd[1960]: time="2024-12-13T01:35:11.707351892Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:35:11.712170 containerd[1960]: time="2024-12-13T01:35:11.710395019Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:35:11.712170 containerd[1960]: time="2024-12-13T01:35:11.712006691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:35:11.714900 containerd[1960]: time="2024-12-13T01:35:11.714622706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:35:11.719101 containerd[1960]: time="2024-12-13T01:35:11.719039615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:35:11.724188 containerd[1960]: time="2024-12-13T01:35:11.723501910Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 564.216541ms" Dec 13 01:35:11.727533 containerd[1960]: time="2024-12-13T01:35:11.727077341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 585.899077ms" Dec 13 01:35:11.729755 containerd[1960]: time="2024-12-13T01:35:11.729713827Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 572.31453ms" Dec 13 01:35:11.763353 kubelet[2842]: W1213 01:35:11.763286 2842 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-66&limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:11.763353 kubelet[2842]: E1213 01:35:11.763354 2842 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-66&limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:12.068441 containerd[1960]: time="2024-12-13T01:35:12.068235407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:35:12.068850 containerd[1960]: time="2024-12-13T01:35:12.068310392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:35:12.068850 containerd[1960]: time="2024-12-13T01:35:12.068508609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:12.068850 containerd[1960]: time="2024-12-13T01:35:12.068666682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:12.074206 kubelet[2842]: E1213 01:35:12.074104 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-66?timeout=10s\": dial tcp 172.31.20.66:6443: connect: connection refused" interval="1.6s" Dec 13 01:35:12.095948 containerd[1960]: time="2024-12-13T01:35:12.095154913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:35:12.095948 containerd[1960]: time="2024-12-13T01:35:12.095400765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:35:12.095948 containerd[1960]: time="2024-12-13T01:35:12.095420002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:12.095948 containerd[1960]: time="2024-12-13T01:35:12.095575503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:12.097368 containerd[1960]: time="2024-12-13T01:35:12.096639476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:35:12.097368 containerd[1960]: time="2024-12-13T01:35:12.096702333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:35:12.097368 containerd[1960]: time="2024-12-13T01:35:12.096719700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:12.097368 containerd[1960]: time="2024-12-13T01:35:12.096960602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:12.141268 systemd[1]: Started cri-containerd-07959f609571dc624649521af595143595e32a8b07713a0a6a179101f0c3793b.scope - libcontainer container 07959f609571dc624649521af595143595e32a8b07713a0a6a179101f0c3793b. Dec 13 01:35:12.158873 systemd[1]: Started cri-containerd-adb61a92db1047e1e6a7315eb932b50f144292d70bf4f875d1f40a84dd76baf4.scope - libcontainer container adb61a92db1047e1e6a7315eb932b50f144292d70bf4f875d1f40a84dd76baf4. Dec 13 01:35:12.193644 kubelet[2842]: W1213 01:35:12.184302 2842 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:12.193644 kubelet[2842]: E1213 01:35:12.184743 2842 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:12.194286 kubelet[2842]: I1213 01:35:12.194098 2842 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-66" Dec 13 01:35:12.197550 kubelet[2842]: E1213 01:35:12.195801 2842 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.66:6443/api/v1/nodes\": dial tcp 172.31.20.66:6443: connect: connection refused" node="ip-172-31-20-66" Dec 13 01:35:12.218662 systemd[1]: Started cri-containerd-c7c3415b6ae99650322ef0da3bd4517eac5c4ab3bf4e6cb37c5fbf0aec14760e.scope - libcontainer container c7c3415b6ae99650322ef0da3bd4517eac5c4ab3bf4e6cb37c5fbf0aec14760e. 
Dec 13 01:35:12.297201 containerd[1960]: time="2024-12-13T01:35:12.297152832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-66,Uid:7d33ce58a806bdc6b27ff1c729af0d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"07959f609571dc624649521af595143595e32a8b07713a0a6a179101f0c3793b\"" Dec 13 01:35:12.312159 containerd[1960]: time="2024-12-13T01:35:12.312098727Z" level=info msg="CreateContainer within sandbox \"07959f609571dc624649521af595143595e32a8b07713a0a6a179101f0c3793b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:35:12.369653 containerd[1960]: time="2024-12-13T01:35:12.369586464Z" level=info msg="CreateContainer within sandbox \"07959f609571dc624649521af595143595e32a8b07713a0a6a179101f0c3793b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"788795347f877bb611cfc4d1f2175250b4feabf2e785134b08a0dc02a466ff7a\"" Dec 13 01:35:12.378979 containerd[1960]: time="2024-12-13T01:35:12.378826106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-66,Uid:a979a8d35a2f99582b98c32cd27c6ff3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7c3415b6ae99650322ef0da3bd4517eac5c4ab3bf4e6cb37c5fbf0aec14760e\"" Dec 13 01:35:12.380268 containerd[1960]: time="2024-12-13T01:35:12.380228031Z" level=info msg="StartContainer for \"788795347f877bb611cfc4d1f2175250b4feabf2e785134b08a0dc02a466ff7a\"" Dec 13 01:35:12.383374 containerd[1960]: time="2024-12-13T01:35:12.383339552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-66,Uid:6de13d3b069536bcc80c7d3c0241be5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"adb61a92db1047e1e6a7315eb932b50f144292d70bf4f875d1f40a84dd76baf4\"" Dec 13 01:35:12.385491 containerd[1960]: time="2024-12-13T01:35:12.385443530Z" level=info msg="CreateContainer within sandbox \"c7c3415b6ae99650322ef0da3bd4517eac5c4ab3bf4e6cb37c5fbf0aec14760e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:35:12.409227 containerd[1960]: time="2024-12-13T01:35:12.409132434Z" level=info msg="CreateContainer within sandbox \"adb61a92db1047e1e6a7315eb932b50f144292d70bf4f875d1f40a84dd76baf4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:35:12.435056 containerd[1960]: time="2024-12-13T01:35:12.434669447Z" level=info msg="CreateContainer within sandbox \"c7c3415b6ae99650322ef0da3bd4517eac5c4ab3bf4e6cb37c5fbf0aec14760e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad\"" Dec 13 01:35:12.439948 systemd[1]: Started cri-containerd-788795347f877bb611cfc4d1f2175250b4feabf2e785134b08a0dc02a466ff7a.scope - libcontainer container 788795347f877bb611cfc4d1f2175250b4feabf2e785134b08a0dc02a466ff7a. 
Dec 13 01:35:12.444062 containerd[1960]: time="2024-12-13T01:35:12.443783237Z" level=info msg="StartContainer for \"27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad\"" Dec 13 01:35:12.477063 containerd[1960]: time="2024-12-13T01:35:12.476983747Z" level=info msg="CreateContainer within sandbox \"adb61a92db1047e1e6a7315eb932b50f144292d70bf4f875d1f40a84dd76baf4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35\"" Dec 13 01:35:12.480309 containerd[1960]: time="2024-12-13T01:35:12.480197452Z" level=info msg="StartContainer for \"9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35\"" Dec 13 01:35:12.536145 systemd[1]: Started cri-containerd-27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad.scope - libcontainer container 27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad. Dec 13 01:35:12.552722 kubelet[2842]: E1213 01:35:12.552691 2842 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.66:6443: connect: connection refused Dec 13 01:35:12.570421 containerd[1960]: time="2024-12-13T01:35:12.569577806Z" level=info msg="StartContainer for \"788795347f877bb611cfc4d1f2175250b4feabf2e785134b08a0dc02a466ff7a\" returns successfully" Dec 13 01:35:12.586734 systemd[1]: Started cri-containerd-9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35.scope - libcontainer container 9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35. Dec 13 01:35:12.667640 containerd[1960]: time="2024-12-13T01:35:12.667535884Z" level=info msg="StartContainer for \"27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad\" returns successfully" Dec 13 01:35:12.726957 containerd[1960]: time="2024-12-13T01:35:12.726604467Z" level=info msg="StartContainer for \"9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35\" returns successfully" Dec 13 01:35:13.798219 kubelet[2842]: I1213 01:35:13.798184 2842 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-66" Dec 13 01:35:15.774907 kubelet[2842]: E1213 01:35:15.774836 2842 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-66\" not found" node="ip-172-31-20-66" Dec 13 01:35:15.932399 kubelet[2842]: I1213 01:35:15.932350 2842 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-66" Dec 13 01:35:16.466866 update_engine[1951]: I20241213 01:35:16.466715 1951 update_attempter.cc:509] Updating boot flags... Dec 13 01:35:16.581675 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3132) Dec 13 01:35:16.599982 kubelet[2842]: I1213 01:35:16.599717 2842 apiserver.go:52] "Watching apiserver" Dec 13 01:35:16.658874 kubelet[2842]: I1213 01:35:16.658828 2842 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:35:16.962552 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3135) Dec 13 01:35:18.579921 systemd[1]: Reloading requested from client PID 3301 ('systemctl') (unit session-7.scope)... Dec 13 01:35:18.579945 systemd[1]: Reloading... Dec 13 01:35:18.760680 zram_generator::config[3344]: No configuration found. 
Dec 13 01:35:19.043726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:35:19.241439 systemd[1]: Reloading finished in 660 ms. Dec 13 01:35:19.300224 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:35:19.316281 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:35:19.316684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:35:19.327890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:35:19.705546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:35:19.721117 (kubelet)[3398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:35:19.827489 kubelet[3398]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:35:19.827489 kubelet[3398]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:35:19.827489 kubelet[3398]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:35:19.827975 kubelet[3398]: I1213 01:35:19.827589 3398 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:35:19.836102 kubelet[3398]: I1213 01:35:19.836069 3398 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:35:19.836102 kubelet[3398]: I1213 01:35:19.836099 3398 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:35:19.838132 kubelet[3398]: I1213 01:35:19.837107 3398 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:35:19.841930 kubelet[3398]: I1213 01:35:19.841657 3398 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:35:19.844218 kubelet[3398]: I1213 01:35:19.844012 3398 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:35:19.861939 kubelet[3398]: I1213 01:35:19.861835 3398 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:35:19.863393 kubelet[3398]: I1213 01:35:19.862614 3398 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:35:19.863393 kubelet[3398]: I1213 01:35:19.862660 3398 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-66","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:35:19.863393 kubelet[3398]: I1213 01:35:19.863277 3398 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:35:19.863393 kubelet[3398]: I1213 01:35:19.863301 3398 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:35:19.872915 kubelet[3398]: I1213 01:35:19.871407 3398 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:35:19.881595 kubelet[3398]: I1213 01:35:19.876632 3398 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:35:19.881595 kubelet[3398]: I1213 01:35:19.876927 3398 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:35:19.881595 kubelet[3398]: I1213 01:35:19.877409 3398 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:35:19.883259 kubelet[3398]: I1213 01:35:19.883187 3398 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:35:19.898489 kubelet[3398]: I1213 01:35:19.894364 3398 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:35:19.902369 kubelet[3398]: I1213 01:35:19.901648 3398 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:35:19.912764 kubelet[3398]: I1213 01:35:19.904296 3398 server.go:1264] "Started kubelet" Dec 13 01:35:19.922602 sudo[3412]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:35:19.923616 sudo[3412]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:35:19.932606 kubelet[3398]: I1213 01:35:19.932421 3398 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Dec 13 01:35:19.948590 kubelet[3398]: I1213 01:35:19.944259 3398 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:35:19.950444 kubelet[3398]: I1213 01:35:19.949496 3398 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:35:19.953412 kubelet[3398]: I1213 01:35:19.952931 3398 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:35:19.953412 kubelet[3398]: I1213 01:35:19.953184 3398 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:35:19.959479 kubelet[3398]: I1213 01:35:19.958258 3398 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:35:19.961969 kubelet[3398]: I1213 01:35:19.959957 3398 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:35:19.961969 kubelet[3398]: I1213 01:35:19.960120 3398 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:35:19.977036 kubelet[3398]: I1213 01:35:19.975490 3398 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:35:19.980582 kubelet[3398]: I1213 01:35:19.978392 3398 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:35:19.984221 kubelet[3398]: I1213 01:35:19.980843 3398 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:35:19.984221 kubelet[3398]: I1213 01:35:19.980855 3398 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:35:20.000056 kubelet[3398]: I1213 01:35:19.999367 3398 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:35:20.000056 kubelet[3398]: I1213 01:35:19.999417 3398 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:35:20.000056 kubelet[3398]: E1213 01:35:19.999508 3398 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:35:20.005202 kubelet[3398]: I1213 01:35:20.004972 3398 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:35:20.005343 kubelet[3398]: E1213 01:35:20.005222 3398 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:35:20.107857 kubelet[3398]: E1213 01:35:20.107578 3398 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:35:20.109914 kubelet[3398]: I1213 01:35:20.109456 3398 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-66" Dec 13 01:35:20.164972 kubelet[3398]: I1213 01:35:20.164940 3398 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-66" Dec 13 01:35:20.165199 kubelet[3398]: I1213 01:35:20.165122 3398 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-66" Dec 13 01:35:20.205756 kubelet[3398]: I1213 01:35:20.205729 3398 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:35:20.205756 kubelet[3398]: I1213 01:35:20.205751 3398 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:35:20.205756 kubelet[3398]: I1213 01:35:20.205774 3398 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:35:20.212593 kubelet[3398]: I1213 01:35:20.212037 3398 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:35:20.212593 kubelet[3398]: I1213 01:35:20.212267 3398 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:35:20.212593 kubelet[3398]: I1213 01:35:20.212433 3398 policy_none.go:49] "None policy: Start" Dec 13 01:35:20.217712 kubelet[3398]: I1213 01:35:20.217687 3398 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:35:20.217712 kubelet[3398]: I1213 01:35:20.217723 3398 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:35:20.219733 kubelet[3398]: I1213 01:35:20.218640 3398 state_mem.go:75] "Updated machine memory state" Dec 13 01:35:20.241295 kubelet[3398]: I1213 01:35:20.240716 3398 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:35:20.241295 kubelet[3398]: I1213 01:35:20.240945 3398 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:35:20.245885 kubelet[3398]: I1213 01:35:20.244300 3398 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:35:20.309056 kubelet[3398]: I1213 01:35:20.308360 3398 topology_manager.go:215] "Topology Admit Handler" podUID="6de13d3b069536bcc80c7d3c0241be5a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-66" Dec 13 01:35:20.309056 kubelet[3398]: I1213 01:35:20.308557 3398 topology_manager.go:215] "Topology Admit Handler" podUID="7d33ce58a806bdc6b27ff1c729af0d65" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-66" Dec 13 01:35:20.309056 kubelet[3398]: I1213 01:35:20.308607 3398 topology_manager.go:215] "Topology Admit Handler" podUID="a979a8d35a2f99582b98c32cd27c6ff3" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:20.364252 kubelet[3398]: I1213 01:35:20.364213 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6de13d3b069536bcc80c7d3c0241be5a-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-66\" (UID: \"6de13d3b069536bcc80c7d3c0241be5a\") " pod="kube-system/kube-scheduler-ip-172-31-20-66" Dec 13 01:35:20.466492 kubelet[3398]: I1213 01:35:20.465628 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7d33ce58a806bdc6b27ff1c729af0d65-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-66\" (UID: \"7d33ce58a806bdc6b27ff1c729af0d65\") " pod="kube-system/kube-apiserver-ip-172-31-20-66" Dec 13 01:35:20.466492 kubelet[3398]: I1213 01:35:20.465689 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: \"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:20.466492 kubelet[3398]: I1213 01:35:20.465719 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: \"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:20.466492 kubelet[3398]: I1213 01:35:20.465744 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: \"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:20.466492 kubelet[3398]: I1213 01:35:20.465795 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d33ce58a806bdc6b27ff1c729af0d65-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-66\" (UID: \"7d33ce58a806bdc6b27ff1c729af0d65\") " pod="kube-system/kube-apiserver-ip-172-31-20-66" Dec 13 01:35:20.466807 kubelet[3398]: I1213 01:35:20.465819 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: \"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:20.466807 kubelet[3398]: I1213 01:35:20.465843 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d33ce58a806bdc6b27ff1c729af0d65-ca-certs\") pod \"kube-apiserver-ip-172-31-20-66\" (UID: \"7d33ce58a806bdc6b27ff1c729af0d65\") " pod="kube-system/kube-apiserver-ip-172-31-20-66" Dec 13 01:35:20.466807 kubelet[3398]: I1213 01:35:20.465866 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a979a8d35a2f99582b98c32cd27c6ff3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-66\" (UID: \"a979a8d35a2f99582b98c32cd27c6ff3\") " pod="kube-system/kube-controller-manager-ip-172-31-20-66" Dec 13 01:35:20.892244 kubelet[3398]: I1213 01:35:20.891784 3398 apiserver.go:52] "Watching apiserver" Dec 13 01:35:20.960649 kubelet[3398]: I1213 01:35:20.960447 3398 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:35:21.005944 sudo[3412]: pam_unix(sudo:session): session closed for user root Dec 13 01:35:21.069022 kubelet[3398]: E1213 01:35:21.068313 3398 kubelet.go:1928] "Failed creating a 
mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-66\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-66" Dec 13 01:35:21.102944 kubelet[3398]: I1213 01:35:21.102872 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-66" podStartSLOduration=1.102845015 podStartE2EDuration="1.102845015s" podCreationTimestamp="2024-12-13 01:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:35:21.102211884 +0000 UTC m=+1.370335336" watchObservedRunningTime="2024-12-13 01:35:21.102845015 +0000 UTC m=+1.370968470" Dec 13 01:35:21.157196 kubelet[3398]: I1213 01:35:21.156651 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-66" podStartSLOduration=1.156627832 podStartE2EDuration="1.156627832s" podCreationTimestamp="2024-12-13 01:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:35:21.14025584 +0000 UTC m=+1.408379295" watchObservedRunningTime="2024-12-13 01:35:21.156627832 +0000 UTC m=+1.424751287" Dec 13 01:35:21.183446 kubelet[3398]: I1213 01:35:21.183283 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-66" podStartSLOduration=1.183260669 podStartE2EDuration="1.183260669s" podCreationTimestamp="2024-12-13 01:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:35:21.158340493 +0000 UTC m=+1.426463930" watchObservedRunningTime="2024-12-13 01:35:21.183260669 +0000 UTC m=+1.451384125" Dec 13 01:35:23.009131 sudo[2291]: pam_unix(sudo:session): session closed for user root Dec 13 01:35:23.036918 sshd[2288]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:23.043391 systemd[1]: sshd@6-172.31.20.66:22-139.178.68.195:34434.service: Deactivated successfully. Dec 13 01:35:23.048766 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:35:23.049066 systemd[1]: session-7.scope: Consumed 5.100s CPU time, 186.5M memory peak, 0B memory swap peak. Dec 13 01:35:23.050188 systemd-logind[1950]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:35:23.051633 systemd-logind[1950]: Removed session 7. Dec 13 01:35:32.186350 kubelet[3398]: I1213 01:35:32.186158 3398 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:35:32.187003 containerd[1960]: time="2024-12-13T01:35:32.186753365Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:35:32.187332 kubelet[3398]: I1213 01:35:32.187112 3398 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:35:32.585806 kubelet[3398]: I1213 01:35:32.583636 3398 topology_manager.go:215] "Topology Admit Handler" podUID="f7724876-14f0-4302-8d3a-59d48eb44d14" podNamespace="kube-system" podName="kube-proxy-vgb47" Dec 13 01:35:32.605175 kubelet[3398]: I1213 01:35:32.603999 3398 topology_manager.go:215] "Topology Admit Handler" podUID="adcc1c57-6737-4a79-aac1-c9d50d236003" podNamespace="kube-system" podName="cilium-ndjt4" Dec 13 01:35:32.629807 systemd[1]: Created slice kubepods-besteffort-podf7724876_14f0_4302_8d3a_59d48eb44d14.slice - libcontainer container kubepods-besteffort-podf7724876_14f0_4302_8d3a_59d48eb44d14.slice. Dec 13 01:35:32.645227 kubelet[3398]: W1213 01:35:32.645003 3398 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-20-66" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-66' and this object Dec 13 01:35:32.645227 kubelet[3398]: E1213 01:35:32.645051 3398 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-20-66" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-66' and this object Dec 13 01:35:32.645227 kubelet[3398]: W1213 01:35:32.645106 3398 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-20-66" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-66' and this object Dec 13 01:35:32.645227 kubelet[3398]: E1213 01:35:32.645120 3398 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-20-66" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-66' and this object Dec 13 01:35:32.645227 kubelet[3398]: W1213 01:35:32.645165 3398 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-20-66" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-66' and this object Dec 13 01:35:32.645558 kubelet[3398]: E1213 01:35:32.645177 3398 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-20-66" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-66' and this object Dec 13 01:35:32.653321 systemd[1]: Created slice kubepods-burstable-podadcc1c57_6737_4a79_aac1_c9d50d236003.slice - libcontainer container kubepods-burstable-podadcc1c57_6737_4a79_aac1_c9d50d236003.slice. 
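The reflector warnings above ("no relationship found between node 'ip-172-31-20-66' and this object") are the node authorizer at work: a kubelet may only read Secrets and ConfigMaps referenced by pods already bound to its node, so the list fails until the cilium-ndjt4 pod's volume mounts establish that relationship. A hedged sketch of the equivalent read using the Python client (the package and call names are the standard kubernetes client; the setup shown is illustrative and not taken from this host):

```python
from kubernetes import client, config

# Illustrative: with node credentials this GET is denied until a pod on the
# node references the ConfigMap; with admin credentials it succeeds.
config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()
try:
    cm = v1.read_namespaced_config_map("cilium-config", "kube-system")
    print(sorted(cm.data or {}))
except client.exceptions.ApiException as e:
    print(e.status, e.reason)  # 403 Forbidden mirrors the reflector warning above
```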
Dec 13 01:35:32.766604 kubelet[3398]: I1213 01:35:32.764224 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cni-path\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.766604 kubelet[3398]: I1213 01:35:32.764685 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/adcc1c57-6737-4a79-aac1-c9d50d236003-clustermesh-secrets\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.766604 kubelet[3398]: I1213 01:35:32.764758 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-bpf-maps\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.766604 kubelet[3398]: I1213 01:35:32.764844 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-config-path\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.766604 kubelet[3398]: I1213 01:35:32.764869 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-hubble-tls\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.766604 kubelet[3398]: I1213 01:35:32.764897 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-hostproc\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.767041 kubelet[3398]: I1213 01:35:32.764923 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-run\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.767041 kubelet[3398]: I1213 01:35:32.764945 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-lib-modules\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.767041 kubelet[3398]: I1213 01:35:32.764970 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-host-proc-sys-kernel\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.767041 kubelet[3398]: I1213 01:35:32.764991 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-xtables-lock\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.767041 kubelet[3398]: I1213 01:35:32.765042 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7724876-14f0-4302-8d3a-59d48eb44d14-kube-proxy\") pod \"kube-proxy-vgb47\" (UID: \"f7724876-14f0-4302-8d3a-59d48eb44d14\") " pod="kube-system/kube-proxy-vgb47"
Dec 13 01:35:32.767041 kubelet[3398]: I1213 01:35:32.765066 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7724876-14f0-4302-8d3a-59d48eb44d14-lib-modules\") pod \"kube-proxy-vgb47\" (UID: \"f7724876-14f0-4302-8d3a-59d48eb44d14\") " pod="kube-system/kube-proxy-vgb47"
Dec 13 01:35:32.767786 kubelet[3398]: I1213 01:35:32.765182 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-cgroup\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.767786 kubelet[3398]: I1213 01:35:32.765210 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-host-proc-sys-net\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.767786 kubelet[3398]: I1213 01:35:32.765236 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7724876-14f0-4302-8d3a-59d48eb44d14-xtables-lock\") pod \"kube-proxy-vgb47\" (UID: \"f7724876-14f0-4302-8d3a-59d48eb44d14\") " pod="kube-system/kube-proxy-vgb47"
Dec 13 01:35:32.767786 kubelet[3398]: I1213 01:35:32.765261 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmbq\" (UniqueName: \"kubernetes.io/projected/f7724876-14f0-4302-8d3a-59d48eb44d14-kube-api-access-bkmbq\") pod \"kube-proxy-vgb47\" (UID: \"f7724876-14f0-4302-8d3a-59d48eb44d14\") " pod="kube-system/kube-proxy-vgb47"
Dec 13 01:35:32.767786 kubelet[3398]: I1213 01:35:32.765287 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-etc-cni-netd\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.767999 kubelet[3398]: I1213 01:35:32.765313 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29c9\" (UniqueName: \"kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-kube-api-access-d29c9\") pod \"cilium-ndjt4\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " pod="kube-system/cilium-ndjt4"
Dec 13 01:35:32.895086 kubelet[3398]: E1213 01:35:32.894897 3398 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 01:35:32.895086 kubelet[3398]: E1213 01:35:32.895088 3398 projected.go:200] Error preparing data for projected volume kube-api-access-d29c9 for pod kube-system/cilium-ndjt4: configmap "kube-root-ca.crt" not found
Dec 13 01:35:32.898446 kubelet[3398]: E1213 01:35:32.895198 3398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-kube-api-access-d29c9 podName:adcc1c57-6737-4a79-aac1-c9d50d236003 nodeName:}" failed. No retries permitted until 2024-12-13 01:35:33.39516659 +0000 UTC m=+13.663290036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d29c9" (UniqueName: "kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-kube-api-access-d29c9") pod "cilium-ndjt4" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003") : configmap "kube-root-ca.crt" not found
Dec 13 01:35:32.901738 kubelet[3398]: E1213 01:35:32.901703 3398 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 01:35:32.901738 kubelet[3398]: E1213 01:35:32.901737 3398 projected.go:200] Error preparing data for projected volume kube-api-access-bkmbq for pod kube-system/kube-proxy-vgb47: configmap "kube-root-ca.crt" not found
Dec 13 01:35:32.902116 kubelet[3398]: E1213 01:35:32.901887 3398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f7724876-14f0-4302-8d3a-59d48eb44d14-kube-api-access-bkmbq podName:f7724876-14f0-4302-8d3a-59d48eb44d14 nodeName:}" failed. No retries permitted until 2024-12-13 01:35:33.401860584 +0000 UTC m=+13.669984021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bkmbq" (UniqueName: "kubernetes.io/projected/f7724876-14f0-4302-8d3a-59d48eb44d14-kube-api-access-bkmbq") pod "kube-proxy-vgb47" (UID: "f7724876-14f0-4302-8d3a-59d48eb44d14") : configmap "kube-root-ca.crt" not found
Dec 13 01:35:33.313052 kubelet[3398]: I1213 01:35:33.312314 3398 topology_manager.go:215] "Topology Admit Handler" podUID="18605ec1-8de2-4550-af1f-ab174d5ce203" podNamespace="kube-system" podName="cilium-operator-599987898-wn7h6"
Dec 13 01:35:33.324719 systemd[1]: Created slice kubepods-besteffort-pod18605ec1_8de2_4550_af1f_ab174d5ce203.slice - libcontainer container kubepods-besteffort-pod18605ec1_8de2_4550_af1f_ab174d5ce203.slice.
Dec 13 01:35:33.472170 kubelet[3398]: I1213 01:35:33.472111 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7htv\" (UniqueName: \"kubernetes.io/projected/18605ec1-8de2-4550-af1f-ab174d5ce203-kube-api-access-s7htv\") pod \"cilium-operator-599987898-wn7h6\" (UID: \"18605ec1-8de2-4550-af1f-ab174d5ce203\") " pod="kube-system/cilium-operator-599987898-wn7h6"
Dec 13 01:35:33.472334 kubelet[3398]: I1213 01:35:33.472235 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18605ec1-8de2-4550-af1f-ab174d5ce203-cilium-config-path\") pod \"cilium-operator-599987898-wn7h6\" (UID: \"18605ec1-8de2-4550-af1f-ab174d5ce203\") " pod="kube-system/cilium-operator-599987898-wn7h6"
Dec 13 01:35:33.551287 containerd[1960]: time="2024-12-13T01:35:33.550996998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vgb47,Uid:f7724876-14f0-4302-8d3a-59d48eb44d14,Namespace:kube-system,Attempt:0,}"
Dec 13 01:35:33.610861 containerd[1960]: time="2024-12-13T01:35:33.610542048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:35:33.610861 containerd[1960]: time="2024-12-13T01:35:33.610641828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:35:33.610861 containerd[1960]: time="2024-12-13T01:35:33.610673222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:33.611608 containerd[1960]: time="2024-12-13T01:35:33.610799198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:33.634880 systemd[1]: Started cri-containerd-6a3dc21c1de12316adb054173f0d168152b93cd081e6c3af1da8956d780e7307.scope - libcontainer container 6a3dc21c1de12316adb054173f0d168152b93cd081e6c3af1da8956d780e7307.
Dec 13 01:35:33.675021 containerd[1960]: time="2024-12-13T01:35:33.674488008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vgb47,Uid:f7724876-14f0-4302-8d3a-59d48eb44d14,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a3dc21c1de12316adb054173f0d168152b93cd081e6c3af1da8956d780e7307\""
Dec 13 01:35:33.681183 containerd[1960]: time="2024-12-13T01:35:33.681131957Z" level=info msg="CreateContainer within sandbox \"6a3dc21c1de12316adb054173f0d168152b93cd081e6c3af1da8956d780e7307\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:35:33.713030 containerd[1960]: time="2024-12-13T01:35:33.712984195Z" level=info msg="CreateContainer within sandbox \"6a3dc21c1de12316adb054173f0d168152b93cd081e6c3af1da8956d780e7307\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"763773bebb360d8822835580403e1e3ebd43aa0aa8dd01bcb52d83f5d931a92d\""
Dec 13 01:35:33.713719 containerd[1960]: time="2024-12-13T01:35:33.713683597Z" level=info msg="StartContainer for \"763773bebb360d8822835580403e1e3ebd43aa0aa8dd01bcb52d83f5d931a92d\""
Dec 13 01:35:33.751713 systemd[1]: Started cri-containerd-763773bebb360d8822835580403e1e3ebd43aa0aa8dd01bcb52d83f5d931a92d.scope - libcontainer container 763773bebb360d8822835580403e1e3ebd43aa0aa8dd01bcb52d83f5d931a92d.
Dec 13 01:35:33.800367 containerd[1960]: time="2024-12-13T01:35:33.800177158Z" level=info msg="StartContainer for \"763773bebb360d8822835580403e1e3ebd43aa0aa8dd01bcb52d83f5d931a92d\" returns successfully"
Dec 13 01:35:33.870216 kubelet[3398]: E1213 01:35:33.870180 3398 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:35:33.870363 kubelet[3398]: E1213 01:35:33.870296 3398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-config-path podName:adcc1c57-6737-4a79-aac1-c9d50d236003 nodeName:}" failed. No retries permitted until 2024-12-13 01:35:34.370275042 +0000 UTC m=+14.638398491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-config-path") pod "cilium-ndjt4" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:35:33.874567 kubelet[3398]: E1213 01:35:33.874536 3398 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Dec 13 01:35:33.874704 kubelet[3398]: E1213 01:35:33.874632 3398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adcc1c57-6737-4a79-aac1-c9d50d236003-clustermesh-secrets podName:adcc1c57-6737-4a79-aac1-c9d50d236003 nodeName:}" failed. No retries permitted until 2024-12-13 01:35:34.374611549 +0000 UTC m=+14.642734989 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/adcc1c57-6737-4a79-aac1-c9d50d236003-clustermesh-secrets") pod "cilium-ndjt4" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003") : failed to sync secret cache: timed out waiting for the condition
Dec 13 01:35:34.126557 kubelet[3398]: I1213 01:35:34.126332 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vgb47" podStartSLOduration=2.126294626 podStartE2EDuration="2.126294626s" podCreationTimestamp="2024-12-13 01:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:35:34.126098611 +0000 UTC m=+14.394222066" watchObservedRunningTime="2024-12-13 01:35:34.126294626 +0000 UTC m=+14.394418081"
Dec 13 01:35:34.231930 containerd[1960]: time="2024-12-13T01:35:34.231860726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wn7h6,Uid:18605ec1-8de2-4550-af1f-ab174d5ce203,Namespace:kube-system,Attempt:0,}"
Dec 13 01:35:34.304077 containerd[1960]: time="2024-12-13T01:35:34.303965224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:35:34.304077 containerd[1960]: time="2024-12-13T01:35:34.304046173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:35:34.304851 containerd[1960]: time="2024-12-13T01:35:34.304062742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:34.307830 containerd[1960]: time="2024-12-13T01:35:34.307771232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:35:34.331719 systemd[1]: Started cri-containerd-02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973.scope - libcontainer container 02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973.
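Each MountVolume.SetUp failure above schedules a retry with durationBeforeRetry 500ms, and kubelet's nested pending operations back off exponentially from that base. A minimal sketch of such a policy, assuming a doubling factor and a cap (the 500ms base matches the log; the factor and cap are assumptions, not values read from this system):

```python
INITIAL_BACKOFF = 0.5   # 500ms, as in "durationBeforeRetry 500ms" above
FACTOR = 2.0            # assumed doubling per consecutive failure
MAX_BACKOFF = 120.0     # assumed cap; illustrative only

def next_backoff(previous: float | None) -> float:
    """Return the delay before the next volume-mount retry attempt."""
    if previous is None:
        return INITIAL_BACKOFF
    return min(previous * FACTOR, MAX_BACKOFF)
```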
Dec 13 01:35:34.422492 containerd[1960]: time="2024-12-13T01:35:34.422233382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wn7h6,Uid:18605ec1-8de2-4550-af1f-ab174d5ce203,Namespace:kube-system,Attempt:0,} returns sandbox id \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\"" Dec 13 01:35:34.433356 containerd[1960]: time="2024-12-13T01:35:34.433317970Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:35:34.462750 containerd[1960]: time="2024-12-13T01:35:34.462693768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ndjt4,Uid:adcc1c57-6737-4a79-aac1-c9d50d236003,Namespace:kube-system,Attempt:0,}" Dec 13 01:35:34.558692 containerd[1960]: time="2024-12-13T01:35:34.558601110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:35:34.559613 containerd[1960]: time="2024-12-13T01:35:34.559410631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:35:34.559613 containerd[1960]: time="2024-12-13T01:35:34.559429861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:34.559613 containerd[1960]: time="2024-12-13T01:35:34.559513254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:35:34.606969 systemd[1]: Started cri-containerd-193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7.scope - libcontainer container 193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7. Dec 13 01:35:34.671558 containerd[1960]: time="2024-12-13T01:35:34.671427090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ndjt4,Uid:adcc1c57-6737-4a79-aac1-c9d50d236003,Namespace:kube-system,Attempt:0,} returns sandbox id \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\"" Dec 13 01:35:37.997356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount613616943.mount: Deactivated successfully. 
Dec 13 01:35:39.023759 containerd[1960]: time="2024-12-13T01:35:39.023680498Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:39.025811 containerd[1960]: time="2024-12-13T01:35:39.025725039Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907189" Dec 13 01:35:39.027651 containerd[1960]: time="2024-12-13T01:35:39.027263844Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:39.029196 containerd[1960]: time="2024-12-13T01:35:39.029150544Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.59571095s" Dec 13 01:35:39.029427 containerd[1960]: time="2024-12-13T01:35:39.029365847Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:35:39.040325 containerd[1960]: time="2024-12-13T01:35:39.039508565Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:35:39.060118 containerd[1960]: time="2024-12-13T01:35:39.059767330Z" level=info msg="CreateContainer within sandbox \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:35:39.080656 containerd[1960]: time="2024-12-13T01:35:39.080607996Z" level=info msg="CreateContainer within sandbox \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\"" Dec 13 01:35:39.087779 containerd[1960]: time="2024-12-13T01:35:39.087702137Z" level=info msg="StartContainer for \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\"" Dec 13 01:35:39.144654 systemd[1]: run-containerd-runc-k8s.io-38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a-runc.X5CQIm.mount: Deactivated successfully. Dec 13 01:35:39.156723 systemd[1]: Started cri-containerd-38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a.scope - libcontainer container 38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a. Dec 13 01:35:39.199912 containerd[1960]: time="2024-12-13T01:35:39.199779613Z" level=info msg="StartContainer for \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\" returns successfully" Dec 13 01:35:45.741100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476680964.mount: Deactivated successfully. 
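Note the empty repo tag in the Pulled message above: because the image was requested as name:tag@digest, the digest alone pins the content and containerd records only the digest reference. A small sketch of splitting such a reference (pure string handling; the reference itself is copied from the log):

```python
ref = ("quay.io/cilium/operator-generic:v1.12.5"
       "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")

name_tag, digest = ref.split("@", 1)
name, tag = name_tag.rsplit(":", 1)
print(name)    # quay.io/cilium/operator-generic
print(tag)     # v1.12.5, ignored for content addressing once a digest is present
print(digest)  # the sha256 digest that actually selects the image
```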
Dec 13 01:35:49.257026 containerd[1960]: time="2024-12-13T01:35:49.256963169Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:49.259350 containerd[1960]: time="2024-12-13T01:35:49.259294168Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735339" Dec 13 01:35:49.261509 containerd[1960]: time="2024-12-13T01:35:49.261453370Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:35:49.263431 containerd[1960]: time="2024-12-13T01:35:49.263279536Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.22370628s" Dec 13 01:35:49.263431 containerd[1960]: time="2024-12-13T01:35:49.263325802Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:35:49.279323 containerd[1960]: time="2024-12-13T01:35:49.279278055Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:35:49.359794 containerd[1960]: time="2024-12-13T01:35:49.359741744Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\"" Dec 13 01:35:49.361602 containerd[1960]: time="2024-12-13T01:35:49.360617653Z" level=info msg="StartContainer for \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\"" Dec 13 01:35:49.552731 systemd[1]: Started cri-containerd-54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a.scope - libcontainer container 54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a. Dec 13 01:35:49.600134 containerd[1960]: time="2024-12-13T01:35:49.600088652Z" level=info msg="StartContainer for \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\" returns successfully" Dec 13 01:35:49.631247 systemd[1]: cri-containerd-54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a.scope: Deactivated successfully. Dec 13 01:35:49.738197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a-rootfs.mount: Deactivated successfully. 
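From the cilium image entries above one can back out an effective pull rate: 166,735,339 bytes arrived in 10.22370628s. A quick check (both numbers copied from the log; "bytes read" is the transferred amount, which approximates but need not equal the reported image size of 166,719,855 bytes):

```python
bytes_read = 166_735_339      # from "active requests=0, bytes read=166735339"
pull_seconds = 10.22370628    # from "in 10.22370628s"

rate = bytes_read / pull_seconds
print(f"{rate / 1e6:.1f} MB/s")  # ~16.3 MB/s effective transfer rate
```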
Dec 13 01:35:49.868794 containerd[1960]: time="2024-12-13T01:35:49.865818333Z" level=info msg="shim disconnected" id=54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a namespace=k8s.io Dec 13 01:35:49.868794 containerd[1960]: time="2024-12-13T01:35:49.868115911Z" level=warning msg="cleaning up after shim disconnected" id=54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a namespace=k8s.io Dec 13 01:35:49.868794 containerd[1960]: time="2024-12-13T01:35:49.868136514Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:35:50.213737 containerd[1960]: time="2024-12-13T01:35:50.213624710Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:35:50.232434 kubelet[3398]: I1213 01:35:50.227654 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-wn7h6" podStartSLOduration=12.616384909 podStartE2EDuration="17.223242775s" podCreationTimestamp="2024-12-13 01:35:33 +0000 UTC" firstStartedPulling="2024-12-13 01:35:34.424867374 +0000 UTC m=+14.692990820" lastFinishedPulling="2024-12-13 01:35:39.031725234 +0000 UTC m=+19.299848686" observedRunningTime="2024-12-13 01:35:40.382142971 +0000 UTC m=+20.650266407" watchObservedRunningTime="2024-12-13 01:35:50.223242775 +0000 UTC m=+30.491366231" Dec 13 01:35:50.258803 containerd[1960]: time="2024-12-13T01:35:50.258761021Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\"" Dec 13 01:35:50.261381 containerd[1960]: time="2024-12-13T01:35:50.260031031Z" level=info msg="StartContainer for \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\"" Dec 13 01:35:50.304857 systemd[1]: Started cri-containerd-c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3.scope - libcontainer container c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3. Dec 13 01:35:50.404924 containerd[1960]: time="2024-12-13T01:35:50.404587676Z" level=info msg="StartContainer for \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\" returns successfully" Dec 13 01:35:50.421000 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:35:50.421465 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:35:50.422452 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:35:50.432356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:35:50.433009 systemd[1]: cri-containerd-c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3.scope: Deactivated successfully. Dec 13 01:35:50.517942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3-rootfs.mount: Deactivated successfully. Dec 13 01:35:50.533605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
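The systemd-sysctl stop/start sandwiched around apply-sysctl-overwrites above is expected: that init step rewrites kernel parameters under /proc/sys before the agent starts, and the host re-applies its own settings afterwards. A hedged sketch of such an overwrite (the helper and the example key are illustrative; the exact keys Cilium touches vary by version):

```python
from pathlib import Path

def set_sysctl(key: str, value: str) -> None:
    """Write one sysctl the way an init step might, by path under /proc/sys."""
    Path("/proc/sys", *key.split(".")).write_text(value + "\n")

# Illustrative call; requires privileges and a writable /proc/sys:
# set_sysctl("net.ipv4.conf.all.rp_filter", "0")
```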
Dec 13 01:35:50.541324 containerd[1960]: time="2024-12-13T01:35:50.541229067Z" level=info msg="shim disconnected" id=c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3 namespace=k8s.io Dec 13 01:35:50.541512 containerd[1960]: time="2024-12-13T01:35:50.541324640Z" level=warning msg="cleaning up after shim disconnected" id=c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3 namespace=k8s.io Dec 13 01:35:50.541512 containerd[1960]: time="2024-12-13T01:35:50.541337928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:35:51.198366 containerd[1960]: time="2024-12-13T01:35:51.198292868Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:35:51.265896 containerd[1960]: time="2024-12-13T01:35:51.265850176Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\"" Dec 13 01:35:51.268556 containerd[1960]: time="2024-12-13T01:35:51.266983604Z" level=info msg="StartContainer for \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\"" Dec 13 01:35:51.308280 systemd[1]: Started cri-containerd-3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12.scope - libcontainer container 3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12. Dec 13 01:35:51.373733 containerd[1960]: time="2024-12-13T01:35:51.373682552Z" level=info msg="StartContainer for \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\" returns successfully" Dec 13 01:35:51.380805 systemd[1]: cri-containerd-3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12.scope: Deactivated successfully. Dec 13 01:35:51.413574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12-rootfs.mount: Deactivated successfully. 
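mount-bpf-fs, the init step above, ensures a BPF filesystem is mounted (conventionally at /sys/fs/bpf) so the agent's maps survive container restarts. A minimal way to verify that precondition from userspace (the mountpoint is an assumption; the check keys on the filesystem type in /proc/mounts):

```python
def bpffs_mounted(expected_mountpoint: str = "/sys/fs/bpf") -> bool:
    """Scan /proc/mounts for a 'bpf' filesystem entry at the expected path."""
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype, *_ = line.split()
            if fstype == "bpf" and mountpoint == expected_mountpoint:
                return True
    return False

print(bpffs_mounted())
```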
Dec 13 01:35:51.427900 containerd[1960]: time="2024-12-13T01:35:51.427733271Z" level=info msg="shim disconnected" id=3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12 namespace=k8s.io Dec 13 01:35:51.427900 containerd[1960]: time="2024-12-13T01:35:51.427854876Z" level=warning msg="cleaning up after shim disconnected" id=3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12 namespace=k8s.io Dec 13 01:35:51.427900 containerd[1960]: time="2024-12-13T01:35:51.427870469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:35:52.204315 containerd[1960]: time="2024-12-13T01:35:52.204274410Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:35:52.237918 containerd[1960]: time="2024-12-13T01:35:52.237866022Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\"" Dec 13 01:35:52.240076 containerd[1960]: time="2024-12-13T01:35:52.239992514Z" level=info msg="StartContainer for \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\"" Dec 13 01:35:52.309061 systemd[1]: Started cri-containerd-0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1.scope - libcontainer container 0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1. Dec 13 01:35:52.354166 systemd[1]: cri-containerd-0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1.scope: Deactivated successfully. Dec 13 01:35:52.367228 containerd[1960]: time="2024-12-13T01:35:52.367186464Z" level=info msg="StartContainer for \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\" returns successfully" Dec 13 01:35:52.411160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1-rootfs.mount: Deactivated successfully. 
Dec 13 01:35:52.423737 containerd[1960]: time="2024-12-13T01:35:52.423652634Z" level=info msg="shim disconnected" id=0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1 namespace=k8s.io Dec 13 01:35:52.423737 containerd[1960]: time="2024-12-13T01:35:52.423732337Z" level=warning msg="cleaning up after shim disconnected" id=0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1 namespace=k8s.io Dec 13 01:35:52.423737 containerd[1960]: time="2024-12-13T01:35:52.423744513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:35:53.218845 containerd[1960]: time="2024-12-13T01:35:53.218455759Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:35:53.264610 containerd[1960]: time="2024-12-13T01:35:53.264562906Z" level=info msg="CreateContainer within sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\"" Dec 13 01:35:53.265200 containerd[1960]: time="2024-12-13T01:35:53.265089543Z" level=info msg="StartContainer for \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\"" Dec 13 01:35:53.317719 systemd[1]: Started cri-containerd-b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42.scope - libcontainer container b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42. Dec 13 01:35:53.378798 containerd[1960]: time="2024-12-13T01:35:53.378001522Z" level=info msg="StartContainer for \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\" returns successfully" Dec 13 01:35:53.641704 kubelet[3398]: I1213 01:35:53.641335 3398 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:35:53.687087 kubelet[3398]: I1213 01:35:53.687044 3398 topology_manager.go:215] "Topology Admit Handler" podUID="cea7da0d-8fb7-49de-92fe-da73e18b3345" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wm4xz" Dec 13 01:35:53.687327 kubelet[3398]: I1213 01:35:53.687307 3398 topology_manager.go:215] "Topology Admit Handler" podUID="f1292415-0ebc-45d1-8942-e783ae8372c5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h5cqg" Dec 13 01:35:53.717992 systemd[1]: Created slice kubepods-burstable-podf1292415_0ebc_45d1_8942_e783ae8372c5.slice - libcontainer container kubepods-burstable-podf1292415_0ebc_45d1_8942_e783ae8372c5.slice. Dec 13 01:35:53.732973 systemd[1]: Created slice kubepods-burstable-podcea7da0d_8fb7_49de_92fe_da73e18b3345.slice - libcontainer container kubepods-burstable-podcea7da0d_8fb7_49de_92fe_da73e18b3345.slice. 
Dec 13 01:35:53.758403 kubelet[3398]: I1213 01:35:53.758367 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25l6n\" (UniqueName: \"kubernetes.io/projected/f1292415-0ebc-45d1-8942-e783ae8372c5-kube-api-access-25l6n\") pod \"coredns-7db6d8ff4d-h5cqg\" (UID: \"f1292415-0ebc-45d1-8942-e783ae8372c5\") " pod="kube-system/coredns-7db6d8ff4d-h5cqg" Dec 13 01:35:53.758941 kubelet[3398]: I1213 01:35:53.758741 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cea7da0d-8fb7-49de-92fe-da73e18b3345-config-volume\") pod \"coredns-7db6d8ff4d-wm4xz\" (UID: \"cea7da0d-8fb7-49de-92fe-da73e18b3345\") " pod="kube-system/coredns-7db6d8ff4d-wm4xz" Dec 13 01:35:53.758941 kubelet[3398]: I1213 01:35:53.758834 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1292415-0ebc-45d1-8942-e783ae8372c5-config-volume\") pod \"coredns-7db6d8ff4d-h5cqg\" (UID: \"f1292415-0ebc-45d1-8942-e783ae8372c5\") " pod="kube-system/coredns-7db6d8ff4d-h5cqg" Dec 13 01:35:53.758941 kubelet[3398]: I1213 01:35:53.758883 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ps9\" (UniqueName: \"kubernetes.io/projected/cea7da0d-8fb7-49de-92fe-da73e18b3345-kube-api-access-72ps9\") pod \"coredns-7db6d8ff4d-wm4xz\" (UID: \"cea7da0d-8fb7-49de-92fe-da73e18b3345\") " pod="kube-system/coredns-7db6d8ff4d-wm4xz" Dec 13 01:35:54.030577 containerd[1960]: time="2024-12-13T01:35:54.029766574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h5cqg,Uid:f1292415-0ebc-45d1-8942-e783ae8372c5,Namespace:kube-system,Attempt:0,}" Dec 13 01:35:54.042789 containerd[1960]: time="2024-12-13T01:35:54.042600699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wm4xz,Uid:cea7da0d-8fb7-49de-92fe-da73e18b3345,Namespace:kube-system,Attempt:0,}" Dec 13 01:35:54.288875 kubelet[3398]: I1213 01:35:54.288060 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ndjt4" podStartSLOduration=7.698817426 podStartE2EDuration="22.288037418s" podCreationTimestamp="2024-12-13 01:35:32 +0000 UTC" firstStartedPulling="2024-12-13 01:35:34.675363419 +0000 UTC m=+14.943486860" lastFinishedPulling="2024-12-13 01:35:49.264583406 +0000 UTC m=+29.532706852" observedRunningTime="2024-12-13 01:35:54.286182131 +0000 UTC m=+34.554305684" watchObservedRunningTime="2024-12-13 01:35:54.288037418 +0000 UTC m=+34.556160878" Dec 13 01:35:56.127322 systemd-networkd[1809]: cilium_host: Link UP Dec 13 01:35:56.127663 systemd-networkd[1809]: cilium_net: Link UP Dec 13 01:35:56.127893 systemd-networkd[1809]: cilium_net: Gained carrier Dec 13 01:35:56.128172 (udev-worker)[4215]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:35:56.129710 (udev-worker)[4182]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:35:56.131123 systemd-networkd[1809]: cilium_host: Gained carrier Dec 13 01:35:56.308948 (udev-worker)[4241]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:35:56.321354 systemd-networkd[1809]: cilium_vxlan: Link UP Dec 13 01:35:56.321365 systemd-networkd[1809]: cilium_vxlan: Gained carrier Dec 13 01:35:56.519988 systemd-networkd[1809]: cilium_host: Gained IPv6LL Dec 13 01:35:56.977657 kernel: NET: Registered PF_ALG protocol family Dec 13 01:35:57.049204 systemd-networkd[1809]: cilium_net: Gained IPv6LL Dec 13 01:35:57.559680 systemd-networkd[1809]: cilium_vxlan: Gained IPv6LL Dec 13 01:35:58.178312 systemd[1]: Started sshd@7-172.31.20.66:22-139.178.68.195:32824.service - OpenSSH per-connection server daemon (139.178.68.195:32824). Dec 13 01:35:58.255175 (udev-worker)[4242]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:35:58.256593 systemd-networkd[1809]: lxc_health: Link UP Dec 13 01:35:58.298570 systemd-networkd[1809]: lxc_health: Gained carrier Dec 13 01:35:58.496198 sshd[4536]: Accepted publickey for core from 139.178.68.195 port 32824 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:35:58.500490 sshd[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:35:58.535347 systemd-logind[1950]: New session 8 of user core. Dec 13 01:35:58.542791 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:35:58.748831 systemd-networkd[1809]: lxc039ac2343501: Link UP Dec 13 01:35:58.756831 kernel: eth0: renamed from tmpebc34 Dec 13 01:35:58.763460 systemd-networkd[1809]: lxc039ac2343501: Gained carrier Dec 13 01:35:59.189481 systemd-networkd[1809]: lxc2eea0237276d: Link UP Dec 13 01:35:59.201651 kernel: eth0: renamed from tmpa7466 Dec 13 01:35:59.212247 systemd-networkd[1809]: lxc2eea0237276d: Gained carrier Dec 13 01:35:59.866982 systemd-networkd[1809]: lxc039ac2343501: Gained IPv6LL Dec 13 01:35:59.977945 sshd[4536]: pam_unix(sshd:session): session closed for user core Dec 13 01:35:59.988228 systemd[1]: sshd@7-172.31.20.66:22-139.178.68.195:32824.service: Deactivated successfully. Dec 13 01:35:59.992306 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:35:59.998243 systemd-logind[1950]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:36:00.052559 systemd-logind[1950]: Removed session 8. 
Dec 13 01:36:00.250684 systemd-networkd[1809]: lxc_health: Gained IPv6LL
Dec 13 01:36:01.015677 systemd-networkd[1809]: lxc2eea0237276d: Gained IPv6LL
Dec 13 01:36:03.348850 ntpd[1942]: Listen normally on 7 cilium_host 192.168.0.204:123
Dec 13 01:36:03.351075 ntpd[1942]: 13 Dec 01:36:03 ntpd[1942]: Listen normally on 7 cilium_host 192.168.0.204:123
Dec 13 01:36:03.351075 ntpd[1942]: 13 Dec 01:36:03 ntpd[1942]: Listen normally on 8 cilium_net [fe80::a006:81ff:fe18:b207%4]:123
Dec 13 01:36:03.351075 ntpd[1942]: 13 Dec 01:36:03 ntpd[1942]: Listen normally on 9 cilium_host [fe80::8c0e:feff:fe71:d110%5]:123
Dec 13 01:36:03.351075 ntpd[1942]: 13 Dec 01:36:03 ntpd[1942]: Listen normally on 10 cilium_vxlan [fe80::7c29:d3ff:fe23:d271%6]:123
Dec 13 01:36:03.351075 ntpd[1942]: 13 Dec 01:36:03 ntpd[1942]: Listen normally on 11 lxc_health [fe80::74ba:ecff:fe02:fb21%8]:123
Dec 13 01:36:03.351075 ntpd[1942]: 13 Dec 01:36:03 ntpd[1942]: Listen normally on 12 lxc039ac2343501 [fe80::68ab:3cff:fe95:e5ff%10]:123
Dec 13 01:36:03.351075 ntpd[1942]: 13 Dec 01:36:03 ntpd[1942]: Listen normally on 13 lxc2eea0237276d [fe80::340b:a1ff:fea5:3fe8%12]:123
Dec 13 01:36:03.348947 ntpd[1942]: Listen normally on 8 cilium_net [fe80::a006:81ff:fe18:b207%4]:123
Dec 13 01:36:03.349003 ntpd[1942]: Listen normally on 9 cilium_host [fe80::8c0e:feff:fe71:d110%5]:123
Dec 13 01:36:03.349045 ntpd[1942]: Listen normally on 10 cilium_vxlan [fe80::7c29:d3ff:fe23:d271%6]:123
Dec 13 01:36:03.349085 ntpd[1942]: Listen normally on 11 lxc_health [fe80::74ba:ecff:fe02:fb21%8]:123
Dec 13 01:36:03.349129 ntpd[1942]: Listen normally on 12 lxc039ac2343501 [fe80::68ab:3cff:fe95:e5ff%10]:123
Dec 13 01:36:03.349171 ntpd[1942]: Listen normally on 13 lxc2eea0237276d [fe80::340b:a1ff:fea5:3fe8%12]:123
Dec 13 01:36:05.021072 systemd[1]: Started sshd@8-172.31.20.66:22-139.178.68.195:32840.service - OpenSSH per-connection server daemon (139.178.68.195:32840).
Dec 13 01:36:05.215107 sshd[4601]: Accepted publickey for core from 139.178.68.195 port 32840 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:36:05.218325 sshd[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:05.227863 systemd-logind[1950]: New session 9 of user core.
Dec 13 01:36:05.232329 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:36:05.555452 sshd[4601]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:05.565874 systemd[1]: sshd@8-172.31.20.66:22-139.178.68.195:32840.service: Deactivated successfully.
Dec 13 01:36:05.570301 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:36:05.572896 systemd-logind[1950]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:36:05.575847 systemd-logind[1950]: Removed session 9.
Dec 13 01:36:06.863925 containerd[1960]: time="2024-12-13T01:36:06.863087358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:36:06.863925 containerd[1960]: time="2024-12-13T01:36:06.863750986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:36:06.866452 containerd[1960]: time="2024-12-13T01:36:06.864162453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:06.870493 containerd[1960]: time="2024-12-13T01:36:06.869562738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:06.898148 containerd[1960]: time="2024-12-13T01:36:06.897444178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:36:06.899219 containerd[1960]: time="2024-12-13T01:36:06.898920480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:36:06.899219 containerd[1960]: time="2024-12-13T01:36:06.899170264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:06.900545 containerd[1960]: time="2024-12-13T01:36:06.900279658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:36:06.963741 systemd[1]: Started cri-containerd-ebc34b7135479bcf170b740c2611cdf7e255583612f6bebcad84407112fc3d0c.scope - libcontainer container ebc34b7135479bcf170b740c2611cdf7e255583612f6bebcad84407112fc3d0c.
Dec 13 01:36:06.976066 systemd[1]: Started cri-containerd-a7466d010c71fc2945d7a2741196beefffa6a969548dfe30822166d3dacdfdaa.scope - libcontainer container a7466d010c71fc2945d7a2741196beefffa6a969548dfe30822166d3dacdfdaa.
Dec 13 01:36:07.206444 containerd[1960]: time="2024-12-13T01:36:07.206397835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h5cqg,Uid:f1292415-0ebc-45d1-8942-e783ae8372c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7466d010c71fc2945d7a2741196beefffa6a969548dfe30822166d3dacdfdaa\""
Dec 13 01:36:07.207472 containerd[1960]: time="2024-12-13T01:36:07.207370758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wm4xz,Uid:cea7da0d-8fb7-49de-92fe-da73e18b3345,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebc34b7135479bcf170b740c2611cdf7e255583612f6bebcad84407112fc3d0c\""
Dec 13 01:36:07.218215 containerd[1960]: time="2024-12-13T01:36:07.218160724Z" level=info msg="CreateContainer within sandbox \"ebc34b7135479bcf170b740c2611cdf7e255583612f6bebcad84407112fc3d0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:36:07.221761 containerd[1960]: time="2024-12-13T01:36:07.221612126Z" level=info msg="CreateContainer within sandbox \"a7466d010c71fc2945d7a2741196beefffa6a969548dfe30822166d3dacdfdaa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:36:07.277723 containerd[1960]: time="2024-12-13T01:36:07.277651942Z" level=info msg="CreateContainer within sandbox \"ebc34b7135479bcf170b740c2611cdf7e255583612f6bebcad84407112fc3d0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6920f54046d8d89f5e460d5ed4cf037b8cb5f53873de3879eefb69dbc5f49b69\""
Dec 13 01:36:07.284339 containerd[1960]: time="2024-12-13T01:36:07.280195629Z" level=info msg="StartContainer for \"6920f54046d8d89f5e460d5ed4cf037b8cb5f53873de3879eefb69dbc5f49b69\""
Dec 13 01:36:07.287944 containerd[1960]: time="2024-12-13T01:36:07.287896953Z" level=info msg="CreateContainer within sandbox \"a7466d010c71fc2945d7a2741196beefffa6a969548dfe30822166d3dacdfdaa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66dc59e2f06b660d10ba934f622128b81f2743623c9958730bcbc396e0f28342\""
Dec 13 01:36:07.291539 containerd[1960]: time="2024-12-13T01:36:07.291423010Z" level=info msg="StartContainer for \"66dc59e2f06b660d10ba934f622128b81f2743623c9958730bcbc396e0f28342\""
Dec 13 01:36:07.333796 systemd[1]: Started cri-containerd-6920f54046d8d89f5e460d5ed4cf037b8cb5f53873de3879eefb69dbc5f49b69.scope - libcontainer container 6920f54046d8d89f5e460d5ed4cf037b8cb5f53873de3879eefb69dbc5f49b69.
Dec 13 01:36:07.359110 systemd[1]: Started cri-containerd-66dc59e2f06b660d10ba934f622128b81f2743623c9958730bcbc396e0f28342.scope - libcontainer container 66dc59e2f06b660d10ba934f622128b81f2743623c9958730bcbc396e0f28342.
Dec 13 01:36:07.436869 containerd[1960]: time="2024-12-13T01:36:07.436816637Z" level=info msg="StartContainer for \"66dc59e2f06b660d10ba934f622128b81f2743623c9958730bcbc396e0f28342\" returns successfully"
Dec 13 01:36:07.438565 containerd[1960]: time="2024-12-13T01:36:07.438021472Z" level=info msg="StartContainer for \"6920f54046d8d89f5e460d5ed4cf037b8cb5f53873de3879eefb69dbc5f49b69\" returns successfully"
Dec 13 01:36:07.887960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484326611.mount: Deactivated successfully.
Dec 13 01:36:08.408977 kubelet[3398]: I1213 01:36:08.408875 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wm4xz" podStartSLOduration=35.408852963 podStartE2EDuration="35.408852963s" podCreationTimestamp="2024-12-13 01:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:08.40839572 +0000 UTC m=+48.676519175" watchObservedRunningTime="2024-12-13 01:36:08.408852963 +0000 UTC m=+48.676976418"
Dec 13 01:36:08.452920 kubelet[3398]: I1213 01:36:08.452843 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h5cqg" podStartSLOduration=35.452801267 podStartE2EDuration="35.452801267s" podCreationTimestamp="2024-12-13 01:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:36:08.449810074 +0000 UTC m=+48.717933526" watchObservedRunningTime="2024-12-13 01:36:08.452801267 +0000 UTC m=+48.720924722"
Dec 13 01:36:10.593920 systemd[1]: Started sshd@9-172.31.20.66:22-139.178.68.195:40420.service - OpenSSH per-connection server daemon (139.178.68.195:40420).
Dec 13 01:36:10.796653 sshd[4788]: Accepted publickey for core from 139.178.68.195 port 40420 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:36:10.797802 sshd[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:10.804226 systemd-logind[1950]: New session 10 of user core.
Dec 13 01:36:10.814001 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:36:11.041431 sshd[4788]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:11.045159 systemd[1]: sshd@9-172.31.20.66:22-139.178.68.195:40420.service: Deactivated successfully.
Dec 13 01:36:11.047465 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:36:11.049297 systemd-logind[1950]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:36:11.051553 systemd-logind[1950]: Removed session 10.
Dec 13 01:36:16.089499 systemd[1]: Started sshd@10-172.31.20.66:22-139.178.68.195:35958.service - OpenSSH per-connection server daemon (139.178.68.195:35958).
Dec 13 01:36:16.265070 sshd[4803]: Accepted publickey for core from 139.178.68.195 port 35958 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:16.269077 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:16.275159 systemd-logind[1950]: New session 11 of user core. Dec 13 01:36:16.281724 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:36:16.496806 sshd[4803]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:16.501301 systemd[1]: sshd@10-172.31.20.66:22-139.178.68.195:35958.service: Deactivated successfully. Dec 13 01:36:16.503908 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:36:16.505943 systemd-logind[1950]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:36:16.507652 systemd-logind[1950]: Removed session 11. Dec 13 01:36:21.532867 systemd[1]: Started sshd@11-172.31.20.66:22-139.178.68.195:35964.service - OpenSSH per-connection server daemon (139.178.68.195:35964). Dec 13 01:36:21.699668 sshd[4819]: Accepted publickey for core from 139.178.68.195 port 35964 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:21.700549 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:21.705354 systemd-logind[1950]: New session 12 of user core. Dec 13 01:36:21.711781 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:36:21.927447 sshd[4819]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:21.932317 systemd[1]: sshd@11-172.31.20.66:22-139.178.68.195:35964.service: Deactivated successfully. Dec 13 01:36:21.934581 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:36:21.935588 systemd-logind[1950]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:36:21.937096 systemd-logind[1950]: Removed session 12. Dec 13 01:36:21.961956 systemd[1]: Started sshd@12-172.31.20.66:22-139.178.68.195:35966.service - OpenSSH per-connection server daemon (139.178.68.195:35966). Dec 13 01:36:22.117714 sshd[4833]: Accepted publickey for core from 139.178.68.195 port 35966 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:22.122073 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:22.131807 systemd-logind[1950]: New session 13 of user core. Dec 13 01:36:22.137968 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:36:22.435596 sshd[4833]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:22.445643 systemd-logind[1950]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:36:22.447108 systemd[1]: sshd@12-172.31.20.66:22-139.178.68.195:35966.service: Deactivated successfully. Dec 13 01:36:22.451646 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:36:22.464885 systemd-logind[1950]: Removed session 13. Dec 13 01:36:22.472012 systemd[1]: Started sshd@13-172.31.20.66:22-139.178.68.195:35978.service - OpenSSH per-connection server daemon (139.178.68.195:35978). Dec 13 01:36:22.638743 sshd[4844]: Accepted publickey for core from 139.178.68.195 port 35978 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:22.640379 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:22.646611 systemd-logind[1950]: New session 14 of user core. Dec 13 01:36:22.651726 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 01:36:22.922318 sshd[4844]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:22.927926 systemd[1]: sshd@13-172.31.20.66:22-139.178.68.195:35978.service: Deactivated successfully. Dec 13 01:36:22.931104 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:36:22.932422 systemd-logind[1950]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:36:22.947577 systemd-logind[1950]: Removed session 14. Dec 13 01:36:27.968998 systemd[1]: Started sshd@14-172.31.20.66:22-139.178.68.195:33586.service - OpenSSH per-connection server daemon (139.178.68.195:33586). Dec 13 01:36:28.147438 sshd[4858]: Accepted publickey for core from 139.178.68.195 port 33586 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:28.150843 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:28.158777 systemd-logind[1950]: New session 15 of user core. Dec 13 01:36:28.165740 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:36:28.408694 sshd[4858]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:28.415641 systemd[1]: sshd@14-172.31.20.66:22-139.178.68.195:33586.service: Deactivated successfully. Dec 13 01:36:28.419915 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:36:28.423111 systemd-logind[1950]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:36:28.425851 systemd-logind[1950]: Removed session 15. Dec 13 01:36:33.447950 systemd[1]: Started sshd@15-172.31.20.66:22-139.178.68.195:33598.service - OpenSSH per-connection server daemon (139.178.68.195:33598). Dec 13 01:36:33.617081 sshd[4871]: Accepted publickey for core from 139.178.68.195 port 33598 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:33.619273 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:33.627664 systemd-logind[1950]: New session 16 of user core. Dec 13 01:36:33.635157 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:36:33.869361 sshd[4871]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:33.874787 systemd-logind[1950]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:36:33.876202 systemd[1]: sshd@15-172.31.20.66:22-139.178.68.195:33598.service: Deactivated successfully. Dec 13 01:36:33.879559 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:36:33.881168 systemd-logind[1950]: Removed session 16. Dec 13 01:36:38.909259 systemd[1]: Started sshd@16-172.31.20.66:22-139.178.68.195:57260.service - OpenSSH per-connection server daemon (139.178.68.195:57260). Dec 13 01:36:39.101232 sshd[4888]: Accepted publickey for core from 139.178.68.195 port 57260 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:39.101972 sshd[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:39.107438 systemd-logind[1950]: New session 17 of user core. Dec 13 01:36:39.111904 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:36:39.362400 sshd[4888]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:39.369032 systemd-logind[1950]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:36:39.370276 systemd[1]: sshd@16-172.31.20.66:22-139.178.68.195:57260.service: Deactivated successfully. Dec 13 01:36:39.378775 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:36:39.380848 systemd-logind[1950]: Removed session 17. 
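Each Accepted publickey / session opened / session closed triplet above is one short-lived SSH connection: systemd spawns a per-connection sshd@N-... service and wraps the login in a session-N.scope, which is why every logout is followed by a scope deactivation. The client side of that exchange, sketched with golang.org/x/crypto/ssh; the user and address mirror the log, while the key path and command are assumptions (the log only shows the key's SHA256 fingerprint).

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path for the "core" user seen in the log.
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)}, // "Accepted publickey for core"
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // demo only; verify host keys in practice
	}

	// Address matches the listener in the log entries above.
	client, err := ssh.Dial("tcp", "172.31.20.66:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One session per connection, matching the session-N.scope entries.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("uptime")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}
```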
Dec 13 01:36:39.400867 systemd[1]: Started sshd@17-172.31.20.66:22-139.178.68.195:57274.service - OpenSSH per-connection server daemon (139.178.68.195:57274). Dec 13 01:36:39.568808 sshd[4901]: Accepted publickey for core from 139.178.68.195 port 57274 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:39.570581 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:39.580899 systemd-logind[1950]: New session 18 of user core. Dec 13 01:36:39.586735 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:36:40.393275 sshd[4901]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:40.402048 systemd[1]: sshd@17-172.31.20.66:22-139.178.68.195:57274.service: Deactivated successfully. Dec 13 01:36:40.404977 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:36:40.407334 systemd-logind[1950]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:36:40.409710 systemd-logind[1950]: Removed session 18. Dec 13 01:36:40.429051 systemd[1]: Started sshd@18-172.31.20.66:22-139.178.68.195:57282.service - OpenSSH per-connection server daemon (139.178.68.195:57282). Dec 13 01:36:40.618182 sshd[4911]: Accepted publickey for core from 139.178.68.195 port 57282 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:40.621879 sshd[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:40.631591 systemd-logind[1950]: New session 19 of user core. Dec 13 01:36:40.641805 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:36:42.795481 sshd[4911]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:42.800546 systemd-logind[1950]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:36:42.804401 systemd[1]: sshd@18-172.31.20.66:22-139.178.68.195:57282.service: Deactivated successfully. Dec 13 01:36:42.808263 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:36:42.810807 systemd-logind[1950]: Removed session 19. Dec 13 01:36:42.832085 systemd[1]: Started sshd@19-172.31.20.66:22-139.178.68.195:57286.service - OpenSSH per-connection server daemon (139.178.68.195:57286). Dec 13 01:36:43.005240 sshd[4929]: Accepted publickey for core from 139.178.68.195 port 57286 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:43.007283 sshd[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:43.019602 systemd-logind[1950]: New session 20 of user core. Dec 13 01:36:43.022827 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:36:43.577039 sshd[4929]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:43.583131 systemd[1]: sshd@19-172.31.20.66:22-139.178.68.195:57286.service: Deactivated successfully. Dec 13 01:36:43.585835 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:36:43.586961 systemd-logind[1950]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:36:43.588400 systemd-logind[1950]: Removed session 20. Dec 13 01:36:43.614154 systemd[1]: Started sshd@20-172.31.20.66:22-139.178.68.195:57288.service - OpenSSH per-connection server daemon (139.178.68.195:57288). 
Dec 13 01:36:43.786592 sshd[4940]: Accepted publickey for core from 139.178.68.195 port 57288 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:43.788394 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:43.804094 systemd-logind[1950]: New session 21 of user core. Dec 13 01:36:43.818772 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:36:44.025423 sshd[4940]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:44.046944 systemd-logind[1950]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:36:44.047328 systemd[1]: sshd@20-172.31.20.66:22-139.178.68.195:57288.service: Deactivated successfully. Dec 13 01:36:44.050642 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:36:44.052029 systemd-logind[1950]: Removed session 21. Dec 13 01:36:49.059917 systemd[1]: Started sshd@21-172.31.20.66:22-139.178.68.195:37158.service - OpenSSH per-connection server daemon (139.178.68.195:37158). Dec 13 01:36:49.224542 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 37158 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:49.228403 sshd[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:49.235013 systemd-logind[1950]: New session 22 of user core. Dec 13 01:36:49.239788 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:36:49.478912 sshd[4953]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:49.489419 systemd[1]: sshd@21-172.31.20.66:22-139.178.68.195:37158.service: Deactivated successfully. Dec 13 01:36:49.514895 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:36:49.516458 systemd-logind[1950]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:36:49.519644 systemd-logind[1950]: Removed session 22. Dec 13 01:36:54.529045 systemd[1]: Started sshd@22-172.31.20.66:22-139.178.68.195:37172.service - OpenSSH per-connection server daemon (139.178.68.195:37172). Dec 13 01:36:54.809586 sshd[4969]: Accepted publickey for core from 139.178.68.195 port 37172 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:36:54.811288 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:54.818394 systemd-logind[1950]: New session 23 of user core. Dec 13 01:36:54.830815 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:36:55.086230 sshd[4969]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:55.091860 systemd-logind[1950]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:36:55.095021 systemd[1]: sshd@22-172.31.20.66:22-139.178.68.195:37172.service: Deactivated successfully. Dec 13 01:36:55.097318 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:36:55.098638 systemd-logind[1950]: Removed session 23. Dec 13 01:37:00.125073 systemd[1]: Started sshd@23-172.31.20.66:22-139.178.68.195:44216.service - OpenSSH per-connection server daemon (139.178.68.195:44216). Dec 13 01:37:00.293557 sshd[4982]: Accepted publickey for core from 139.178.68.195 port 44216 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:37:00.297474 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:00.307316 systemd-logind[1950]: New session 24 of user core. Dec 13 01:37:00.312804 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 13 01:37:00.524566 sshd[4982]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:00.529841 systemd[1]: sshd@23-172.31.20.66:22-139.178.68.195:44216.service: Deactivated successfully. Dec 13 01:37:00.532999 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:37:00.534826 systemd-logind[1950]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:37:00.536279 systemd-logind[1950]: Removed session 24. Dec 13 01:37:05.575956 systemd[1]: Started sshd@24-172.31.20.66:22-139.178.68.195:44218.service - OpenSSH per-connection server daemon (139.178.68.195:44218). Dec 13 01:37:05.760609 sshd[4998]: Accepted publickey for core from 139.178.68.195 port 44218 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:37:05.762655 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:05.781152 systemd-logind[1950]: New session 25 of user core. Dec 13 01:37:05.794790 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:37:06.031789 sshd[4998]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:06.036679 systemd-logind[1950]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:37:06.037337 systemd[1]: sshd@24-172.31.20.66:22-139.178.68.195:44218.service: Deactivated successfully. Dec 13 01:37:06.039615 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:37:06.042465 systemd-logind[1950]: Removed session 25. Dec 13 01:37:06.069406 systemd[1]: Started sshd@25-172.31.20.66:22-139.178.68.195:52624.service - OpenSSH per-connection server daemon (139.178.68.195:52624). Dec 13 01:37:06.226547 sshd[5011]: Accepted publickey for core from 139.178.68.195 port 52624 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:37:06.228142 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:06.234038 systemd-logind[1950]: New session 26 of user core. Dec 13 01:37:06.238715 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:37:07.910445 containerd[1960]: time="2024-12-13T01:37:07.910238225Z" level=info msg="StopContainer for \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\" with timeout 30 (s)" Dec 13 01:37:07.913230 containerd[1960]: time="2024-12-13T01:37:07.913060038Z" level=info msg="Stop container \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\" with signal terminated" Dec 13 01:37:07.930313 systemd[1]: cri-containerd-38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a.scope: Deactivated successfully. Dec 13 01:37:07.981508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a-rootfs.mount: Deactivated successfully. 
Dec 13 01:37:07.985077 containerd[1960]: time="2024-12-13T01:37:07.984947299Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:37:07.999957 containerd[1960]: time="2024-12-13T01:37:07.999808588Z" level=info msg="shim disconnected" id=38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a namespace=k8s.io Dec 13 01:37:07.999957 containerd[1960]: time="2024-12-13T01:37:07.999875281Z" level=warning msg="cleaning up after shim disconnected" id=38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a namespace=k8s.io Dec 13 01:37:07.999957 containerd[1960]: time="2024-12-13T01:37:07.999887295Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:08.004680 containerd[1960]: time="2024-12-13T01:37:08.004469349Z" level=info msg="StopContainer for \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\" with timeout 2 (s)" Dec 13 01:37:08.006409 containerd[1960]: time="2024-12-13T01:37:08.006264634Z" level=info msg="Stop container \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\" with signal terminated" Dec 13 01:37:08.020450 systemd-networkd[1809]: lxc_health: Link DOWN Dec 13 01:37:08.020460 systemd-networkd[1809]: lxc_health: Lost carrier Dec 13 01:37:08.045030 containerd[1960]: time="2024-12-13T01:37:08.044270010Z" level=info msg="StopContainer for \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\" returns successfully" Dec 13 01:37:08.045303 containerd[1960]: time="2024-12-13T01:37:08.045237405Z" level=info msg="StopPodSandbox for \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\"" Dec 13 01:37:08.045303 containerd[1960]: time="2024-12-13T01:37:08.045281785Z" level=info msg="Container to stop \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:08.052733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973-shm.mount: Deactivated successfully. Dec 13 01:37:08.055493 systemd[1]: cri-containerd-b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42.scope: Deactivated successfully. Dec 13 01:37:08.056567 systemd[1]: cri-containerd-b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42.scope: Consumed 9.119s CPU time. Dec 13 01:37:08.072053 systemd[1]: cri-containerd-02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973.scope: Deactivated successfully. Dec 13 01:37:08.107132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42-rootfs.mount: Deactivated successfully. 
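The "failed to reload cni configuration after receiving fs change event(REMOVE ...)" error above is containerd's CNI watcher reacting to the removal of /etc/cni/net.d/05-cilium.conf: with the only conf file gone, the reload finds no network config, so the node goes NetworkReady=false until a new one appears. The watch-and-reload pattern looks roughly like the following fsnotify sketch; this is an illustration of the mechanism, not containerd's actual implementation, and reloadCNIConfig is a hypothetical helper.

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the CNI config directory referenced in the error above.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			// A REMOVE of the last conf file leaves the loader with nothing
			// to load, producing the "no network config found" error logged above.
			if ev.Op&(fsnotify.Remove|fsnotify.Create|fsnotify.Write) != 0 {
				log.Printf("cni config change: %s %s, reloading", ev.Op, ev.Name)
				// reloadCNIConfig() // hypothetical: re-scan the directory here
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Printf("watch error: %v", err)
		}
	}
}
```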
Dec 13 01:37:08.116906 containerd[1960]: time="2024-12-13T01:37:08.116833621Z" level=info msg="shim disconnected" id=b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42 namespace=k8s.io Dec 13 01:37:08.116906 containerd[1960]: time="2024-12-13T01:37:08.116905945Z" level=warning msg="cleaning up after shim disconnected" id=b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42 namespace=k8s.io Dec 13 01:37:08.117232 containerd[1960]: time="2024-12-13T01:37:08.116918092Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:08.119878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973-rootfs.mount: Deactivated successfully. Dec 13 01:37:08.133066 containerd[1960]: time="2024-12-13T01:37:08.132719994Z" level=info msg="shim disconnected" id=02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973 namespace=k8s.io Dec 13 01:37:08.133066 containerd[1960]: time="2024-12-13T01:37:08.132812899Z" level=warning msg="cleaning up after shim disconnected" id=02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973 namespace=k8s.io Dec 13 01:37:08.133066 containerd[1960]: time="2024-12-13T01:37:08.132828183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:08.156428 containerd[1960]: time="2024-12-13T01:37:08.155511616Z" level=info msg="StopContainer for \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\" returns successfully" Dec 13 01:37:08.157504 containerd[1960]: time="2024-12-13T01:37:08.157448436Z" level=info msg="StopPodSandbox for \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\"" Dec 13 01:37:08.157718 containerd[1960]: time="2024-12-13T01:37:08.157696046Z" level=info msg="Container to stop \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:08.157949 containerd[1960]: time="2024-12-13T01:37:08.157903258Z" level=info msg="Container to stop \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:08.158400 containerd[1960]: time="2024-12-13T01:37:08.158125691Z" level=info msg="Container to stop \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:08.158400 containerd[1960]: time="2024-12-13T01:37:08.158150648Z" level=info msg="Container to stop \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:08.158400 containerd[1960]: time="2024-12-13T01:37:08.158168171Z" level=info msg="Container to stop \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:37:08.171181 systemd[1]: cri-containerd-193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7.scope: Deactivated successfully. 
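"Stop container ... with signal terminated", the scope deactivation, and the later "shim disconnected" lines above trace the graceful-stop path: SIGTERM, a bounded wait (30s for the operator container, 2s for the agent), then SIGKILL if needed, and finally task deletion so the shim can exit. A sketch of that sequence with the containerd client, under the assumption that the caller already holds a Task handle; timeout and logging are illustrative.

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
)

// stopTask mirrors the SIGTERM -> bounded wait -> SIGKILL sequence in the log.
func stopTask(ctx context.Context, task containerd.Task, timeout time.Duration) error {
	// Register the wait before killing so the exit event cannot be missed.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}

	// "Stop container ... with signal terminated"
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}

	select {
	case st := <-exitCh:
		code, _, err := st.Result()
		if err != nil {
			return err
		}
		log.Printf("task exited with status %d", code)
	case <-time.After(timeout):
		// Grace period elapsed: force kill, then reap the exit event.
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		<-exitCh
	}

	// Deleting the task is what lets the shim shut down ("shim disconnected").
	_, err = task.Delete(ctx)
	return err
}
```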
Dec 13 01:37:08.179013 containerd[1960]: time="2024-12-13T01:37:08.178377497Z" level=info msg="TearDown network for sandbox \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\" successfully" Dec 13 01:37:08.179013 containerd[1960]: time="2024-12-13T01:37:08.178420326Z" level=info msg="StopPodSandbox for \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\" returns successfully" Dec 13 01:37:08.234621 containerd[1960]: time="2024-12-13T01:37:08.234503739Z" level=info msg="shim disconnected" id=193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7 namespace=k8s.io Dec 13 01:37:08.234621 containerd[1960]: time="2024-12-13T01:37:08.234620867Z" level=warning msg="cleaning up after shim disconnected" id=193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7 namespace=k8s.io Dec 13 01:37:08.236280 containerd[1960]: time="2024-12-13T01:37:08.235569364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:08.253172 containerd[1960]: time="2024-12-13T01:37:08.253123242Z" level=info msg="TearDown network for sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" successfully" Dec 13 01:37:08.253172 containerd[1960]: time="2024-12-13T01:37:08.253166319Z" level=info msg="StopPodSandbox for \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" returns successfully" Dec 13 01:37:08.292883 kubelet[3398]: I1213 01:37:08.292831 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18605ec1-8de2-4550-af1f-ab174d5ce203-cilium-config-path\") pod \"18605ec1-8de2-4550-af1f-ab174d5ce203\" (UID: \"18605ec1-8de2-4550-af1f-ab174d5ce203\") " Dec 13 01:37:08.292883 kubelet[3398]: I1213 01:37:08.292893 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7htv\" (UniqueName: \"kubernetes.io/projected/18605ec1-8de2-4550-af1f-ab174d5ce203-kube-api-access-s7htv\") pod \"18605ec1-8de2-4550-af1f-ab174d5ce203\" (UID: \"18605ec1-8de2-4550-af1f-ab174d5ce203\") " Dec 13 01:37:08.293457 kubelet[3398]: I1213 01:37:08.292917 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-run\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.293457 kubelet[3398]: I1213 01:37:08.292936 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-xtables-lock\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.293457 kubelet[3398]: I1213 01:37:08.292954 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-etc-cni-netd\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.293457 kubelet[3398]: I1213 01:37:08.292977 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d29c9\" (UniqueName: \"kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-kube-api-access-d29c9\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.293457 kubelet[3398]: I1213 
01:37:08.292996 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-lib-modules\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.293457 kubelet[3398]: I1213 01:37:08.293021 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-config-path\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.293691 kubelet[3398]: I1213 01:37:08.293050 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/adcc1c57-6737-4a79-aac1-c9d50d236003-clustermesh-secrets\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.293691 kubelet[3398]: I1213 01:37:08.293072 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-host-proc-sys-kernel\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.293691 kubelet[3398]: I1213 01:37:08.293094 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-bpf-maps\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.293691 kubelet[3398]: I1213 01:37:08.293114 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-host-proc-sys-net\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.305565 kubelet[3398]: I1213 01:37:08.305005 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.323688 kubelet[3398]: I1213 01:37:08.322732 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.323688 kubelet[3398]: I1213 01:37:08.322830 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.323688 kubelet[3398]: I1213 01:37:08.302940 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.325453 kubelet[3398]: I1213 01:37:08.325265 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.325640 kubelet[3398]: I1213 01:37:08.325494 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.325640 kubelet[3398]: I1213 01:37:08.325624 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.328277 kubelet[3398]: I1213 01:37:08.328242 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-kube-api-access-d29c9" (OuterVolumeSpecName: "kube-api-access-d29c9") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "kube-api-access-d29c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:37:08.330024 kubelet[3398]: I1213 01:37:08.329990 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18605ec1-8de2-4550-af1f-ab174d5ce203-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "18605ec1-8de2-4550-af1f-ab174d5ce203" (UID: "18605ec1-8de2-4550-af1f-ab174d5ce203"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:37:08.330913 kubelet[3398]: I1213 01:37:08.330888 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18605ec1-8de2-4550-af1f-ab174d5ce203-kube-api-access-s7htv" (OuterVolumeSpecName: "kube-api-access-s7htv") pod "18605ec1-8de2-4550-af1f-ab174d5ce203" (UID: "18605ec1-8de2-4550-af1f-ab174d5ce203"). InnerVolumeSpecName "kube-api-access-s7htv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:37:08.331660 kubelet[3398]: I1213 01:37:08.330904 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adcc1c57-6737-4a79-aac1-c9d50d236003-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:37:08.331660 kubelet[3398]: I1213 01:37:08.331603 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:37:08.393402 kubelet[3398]: I1213 01:37:08.393362 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-hostproc\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.393402 kubelet[3398]: I1213 01:37:08.393432 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-hostproc" (OuterVolumeSpecName: "hostproc") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.393402 kubelet[3398]: I1213 01:37:08.393450 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-cgroup\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.393402 kubelet[3398]: I1213 01:37:08.393478 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cni-path\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.393402 kubelet[3398]: I1213 01:37:08.393498 3398 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-hubble-tls\") pod \"adcc1c57-6737-4a79-aac1-c9d50d236003\" (UID: \"adcc1c57-6737-4a79-aac1-c9d50d236003\") " Dec 13 01:37:08.393402 kubelet[3398]: I1213 01:37:08.393549 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.394101 kubelet[3398]: I1213 01:37:08.393571 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cni-path" (OuterVolumeSpecName: "cni-path") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:37:08.395928 kubelet[3398]: I1213 01:37:08.395635 3398 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-hostproc\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.395928 kubelet[3398]: I1213 01:37:08.395695 3398 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-run\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.395928 kubelet[3398]: I1213 01:37:08.395713 3398 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-xtables-lock\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.395928 kubelet[3398]: I1213 01:37:08.395726 3398 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18605ec1-8de2-4550-af1f-ab174d5ce203-cilium-config-path\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.395928 kubelet[3398]: I1213 01:37:08.395745 3398 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s7htv\" (UniqueName: \"kubernetes.io/projected/18605ec1-8de2-4550-af1f-ab174d5ce203-kube-api-access-s7htv\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.395928 kubelet[3398]: I1213 01:37:08.395778 3398 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-d29c9\" (UniqueName: \"kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-kube-api-access-d29c9\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.395928 kubelet[3398]: I1213 01:37:08.395792 3398 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-lib-modules\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.395928 kubelet[3398]: I1213 01:37:08.395805 3398 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-etc-cni-netd\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.396477 kubelet[3398]: I1213 01:37:08.395820 3398 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-config-path\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.396477 kubelet[3398]: I1213 01:37:08.395848 3398 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/adcc1c57-6737-4a79-aac1-c9d50d236003-clustermesh-secrets\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.396477 kubelet[3398]: I1213 01:37:08.395863 3398 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-host-proc-sys-kernel\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.396477 kubelet[3398]: I1213 01:37:08.395874 3398 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-bpf-maps\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.396477 kubelet[3398]: I1213 01:37:08.395887 3398 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-host-proc-sys-net\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.397101 kubelet[3398]: I1213 01:37:08.397064 3398 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "adcc1c57-6737-4a79-aac1-c9d50d236003" (UID: "adcc1c57-6737-4a79-aac1-c9d50d236003"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:37:08.496589 kubelet[3398]: I1213 01:37:08.496454 3398 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cilium-cgroup\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.496589 kubelet[3398]: I1213 01:37:08.496486 3398 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/adcc1c57-6737-4a79-aac1-c9d50d236003-cni-path\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.496589 kubelet[3398]: I1213 01:37:08.496497 3398 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/adcc1c57-6737-4a79-aac1-c9d50d236003-hubble-tls\") on node \"ip-172-31-20-66\" DevicePath \"\"" Dec 13 01:37:08.593104 systemd[1]: Removed slice kubepods-burstable-podadcc1c57_6737_4a79_aac1_c9d50d236003.slice - libcontainer container kubepods-burstable-podadcc1c57_6737_4a79_aac1_c9d50d236003.slice. Dec 13 01:37:08.593233 systemd[1]: kubepods-burstable-podadcc1c57_6737_4a79_aac1_c9d50d236003.slice: Consumed 9.216s CPU time. Dec 13 01:37:08.612087 kubelet[3398]: I1213 01:37:08.612040 3398 scope.go:117] "RemoveContainer" containerID="b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42" Dec 13 01:37:08.617435 containerd[1960]: time="2024-12-13T01:37:08.617066755Z" level=info msg="RemoveContainer for \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\"" Dec 13 01:37:08.627618 containerd[1960]: time="2024-12-13T01:37:08.626683191Z" level=info msg="RemoveContainer for \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\" returns successfully" Dec 13 01:37:08.631980 systemd[1]: Removed slice kubepods-besteffort-pod18605ec1_8de2_4550_af1f_ab174d5ce203.slice - libcontainer container kubepods-besteffort-pod18605ec1_8de2_4550_af1f_ab174d5ce203.slice. 
Dec 13 01:37:08.642305 kubelet[3398]: I1213 01:37:08.642268 3398 scope.go:117] "RemoveContainer" containerID="0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1" Dec 13 01:37:08.645717 containerd[1960]: time="2024-12-13T01:37:08.645317312Z" level=info msg="RemoveContainer for \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\"" Dec 13 01:37:08.651019 containerd[1960]: time="2024-12-13T01:37:08.650921295Z" level=info msg="RemoveContainer for \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\" returns successfully" Dec 13 01:37:08.651293 kubelet[3398]: I1213 01:37:08.651264 3398 scope.go:117] "RemoveContainer" containerID="3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12" Dec 13 01:37:08.654832 containerd[1960]: time="2024-12-13T01:37:08.654500559Z" level=info msg="RemoveContainer for \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\"" Dec 13 01:37:08.661342 containerd[1960]: time="2024-12-13T01:37:08.661285646Z" level=info msg="RemoveContainer for \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\" returns successfully" Dec 13 01:37:08.662040 kubelet[3398]: I1213 01:37:08.661916 3398 scope.go:117] "RemoveContainer" containerID="c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3" Dec 13 01:37:08.664171 containerd[1960]: time="2024-12-13T01:37:08.663820772Z" level=info msg="RemoveContainer for \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\"" Dec 13 01:37:08.668836 containerd[1960]: time="2024-12-13T01:37:08.668799928Z" level=info msg="RemoveContainer for \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\" returns successfully" Dec 13 01:37:08.670486 kubelet[3398]: I1213 01:37:08.669084 3398 scope.go:117] "RemoveContainer" containerID="54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a" Dec 13 01:37:08.670933 containerd[1960]: time="2024-12-13T01:37:08.670900709Z" level=info msg="RemoveContainer for \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\"" Dec 13 01:37:08.676722 containerd[1960]: time="2024-12-13T01:37:08.676678803Z" level=info msg="RemoveContainer for \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\" returns successfully" Dec 13 01:37:08.676959 kubelet[3398]: I1213 01:37:08.676935 3398 scope.go:117] "RemoveContainer" containerID="b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42" Dec 13 01:37:08.730184 containerd[1960]: time="2024-12-13T01:37:08.687268290Z" level=error msg="ContainerStatus for \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\": not found" Dec 13 01:37:08.748277 kubelet[3398]: E1213 01:37:08.747780 3398 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\": not found" containerID="b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42" Dec 13 01:37:08.748277 kubelet[3398]: I1213 01:37:08.748009 3398 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42"} err="failed to get container status \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"b59e49dbf38c5f696caef4e1365d666450654227bad25206b389061e8a1c3b42\": not found" Dec 13 01:37:08.748277 kubelet[3398]: I1213 01:37:08.748149 3398 scope.go:117] "RemoveContainer" containerID="0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1" Dec 13 01:37:08.749508 containerd[1960]: time="2024-12-13T01:37:08.748569307Z" level=error msg="ContainerStatus for \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\": not found" Dec 13 01:37:08.750163 kubelet[3398]: E1213 01:37:08.749719 3398 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\": not found" containerID="0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1" Dec 13 01:37:08.750163 kubelet[3398]: I1213 01:37:08.749761 3398 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1"} err="failed to get container status \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\": rpc error: code = NotFound desc = an error occurred when try to find container \"0941f6742b36c81132e81704a39f560a573642d5f72169102f9e47987e0dffa1\": not found" Dec 13 01:37:08.750163 kubelet[3398]: I1213 01:37:08.749852 3398 scope.go:117] "RemoveContainer" containerID="3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12" Dec 13 01:37:08.750339 containerd[1960]: time="2024-12-13T01:37:08.750268247Z" level=error msg="ContainerStatus for \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\": not found" Dec 13 01:37:08.750464 kubelet[3398]: E1213 01:37:08.750424 3398 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\": not found" containerID="3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12" Dec 13 01:37:08.750546 kubelet[3398]: I1213 01:37:08.750472 3398 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12"} err="failed to get container status \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\": rpc error: code = NotFound desc = an error occurred when try to find container \"3384a3f4c294098a8abf4f90f20d2896a861a37046dc6469aad18ad6c8579a12\": not found" Dec 13 01:37:08.750546 kubelet[3398]: I1213 01:37:08.750494 3398 scope.go:117] "RemoveContainer" containerID="c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3" Dec 13 01:37:08.751246 containerd[1960]: time="2024-12-13T01:37:08.751193559Z" level=error msg="ContainerStatus for \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\": not found" Dec 13 01:37:08.751416 kubelet[3398]: E1213 01:37:08.751394 3398 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\": not found" containerID="c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3" Dec 13 01:37:08.751575 kubelet[3398]: I1213 01:37:08.751419 3398 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3"} err="failed to get container status \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1fb48fe5f28fee6ecd32ea241a93c9dd3d4e75153968899b2c44729024c08d3\": not found" Dec 13 01:37:08.751575 kubelet[3398]: I1213 01:37:08.751439 3398 scope.go:117] "RemoveContainer" containerID="54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a" Dec 13 01:37:08.751707 containerd[1960]: time="2024-12-13T01:37:08.751663655Z" level=error msg="ContainerStatus for \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\": not found" Dec 13 01:37:08.752030 kubelet[3398]: E1213 01:37:08.751798 3398 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\": not found" containerID="54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a" Dec 13 01:37:08.752030 kubelet[3398]: I1213 01:37:08.751824 3398 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a"} err="failed to get container status \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\": rpc error: code = NotFound desc = an error occurred when try to find container \"54e26e80ea2f024e53fbfdab393ac35be4d3c9176cffd1163f2053771a85500a\": not found" Dec 13 01:37:08.752030 kubelet[3398]: I1213 01:37:08.751847 3398 scope.go:117] "RemoveContainer" containerID="38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a" Dec 13 01:37:08.753324 containerd[1960]: time="2024-12-13T01:37:08.753028985Z" level=info msg="RemoveContainer for \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\"" Dec 13 01:37:08.758601 containerd[1960]: time="2024-12-13T01:37:08.758563887Z" level=info msg="RemoveContainer for \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\" returns successfully" Dec 13 01:37:08.758794 kubelet[3398]: I1213 01:37:08.758774 3398 scope.go:117] "RemoveContainer" containerID="38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a" Dec 13 01:37:08.759157 containerd[1960]: time="2024-12-13T01:37:08.759056885Z" level=error msg="ContainerStatus for \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\": not found" Dec 13 01:37:08.759249 kubelet[3398]: E1213 01:37:08.759198 3398 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\": not found" containerID="38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a" Dec 13 01:37:08.759249 kubelet[3398]: I1213 01:37:08.759226 3398 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a"} err="failed to get container status \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\": rpc error: code = NotFound desc = an error occurred when try to find container \"38de5ad01518040d84a6c5830b89dab20bbad31139f76a8ddc8582a3803fe63a\": not found" Dec 13 01:37:08.956803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7-rootfs.mount: Deactivated successfully. Dec 13 01:37:08.956931 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7-shm.mount: Deactivated successfully. Dec 13 01:37:08.957123 systemd[1]: var-lib-kubelet-pods-adcc1c57\x2d6737\x2d4a79\x2daac1\x2dc9d50d236003-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:37:08.957207 systemd[1]: var-lib-kubelet-pods-18605ec1\x2d8de2\x2d4550\x2daf1f\x2dab174d5ce203-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds7htv.mount: Deactivated successfully. Dec 13 01:37:08.957340 systemd[1]: var-lib-kubelet-pods-adcc1c57\x2d6737\x2d4a79\x2daac1\x2dc9d50d236003-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:37:08.957434 systemd[1]: var-lib-kubelet-pods-adcc1c57\x2d6737\x2d4a79\x2daac1\x2dc9d50d236003-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd29c9.mount: Deactivated successfully. Dec 13 01:37:09.845033 sshd[5011]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:09.848595 systemd[1]: sshd@25-172.31.20.66:22-139.178.68.195:52624.service: Deactivated successfully. Dec 13 01:37:09.851291 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:37:09.853139 systemd-logind[1950]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:37:09.854953 systemd-logind[1950]: Removed session 26. Dec 13 01:37:09.882915 systemd[1]: Started sshd@26-172.31.20.66:22-139.178.68.195:52634.service - OpenSSH per-connection server daemon (139.178.68.195:52634). Dec 13 01:37:10.003206 kubelet[3398]: I1213 01:37:10.003171 3398 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18605ec1-8de2-4550-af1f-ab174d5ce203" path="/var/lib/kubelet/pods/18605ec1-8de2-4550-af1f-ab174d5ce203/volumes" Dec 13 01:37:10.003859 kubelet[3398]: I1213 01:37:10.003830 3398 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adcc1c57-6737-4a79-aac1-c9d50d236003" path="/var/lib/kubelet/pods/adcc1c57-6737-4a79-aac1-c9d50d236003/volumes" Dec 13 01:37:10.086936 sshd[5172]: Accepted publickey for core from 139.178.68.195 port 52634 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:37:10.090998 sshd[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:10.103374 systemd-logind[1950]: New session 27 of user core. Dec 13 01:37:10.109745 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 13 01:37:10.300397 kubelet[3398]: E1213 01:37:10.300313 3398 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:37:10.350388 ntpd[1942]: Deleting interface #11 lxc_health, fe80::74ba:ecff:fe02:fb21%8#123, interface stats: received=0, sent=0, dropped=0, active_time=67 secs Dec 13 01:37:10.351432 ntpd[1942]: 13 Dec 01:37:10 ntpd[1942]: Deleting interface #11 lxc_health, fe80::74ba:ecff:fe02:fb21%8#123, interface stats: received=0, sent=0, dropped=0, active_time=67 secs Dec 13 01:37:10.983840 sshd[5172]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:10.988685 kubelet[3398]: I1213 01:37:10.988538 3398 topology_manager.go:215] "Topology Admit Handler" podUID="8f4ec781-94c6-4a0c-841c-c6c1cf0474ed" podNamespace="kube-system" podName="cilium-tnjxd" Dec 13 01:37:10.992734 kubelet[3398]: E1213 01:37:10.992126 3398 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="adcc1c57-6737-4a79-aac1-c9d50d236003" containerName="cilium-agent" Dec 13 01:37:10.992734 kubelet[3398]: E1213 01:37:10.992170 3398 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18605ec1-8de2-4550-af1f-ab174d5ce203" containerName="cilium-operator" Dec 13 01:37:10.992734 kubelet[3398]: E1213 01:37:10.992182 3398 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="adcc1c57-6737-4a79-aac1-c9d50d236003" containerName="clean-cilium-state" Dec 13 01:37:10.992734 kubelet[3398]: E1213 01:37:10.992192 3398 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="adcc1c57-6737-4a79-aac1-c9d50d236003" containerName="mount-bpf-fs" Dec 13 01:37:10.992734 kubelet[3398]: E1213 01:37:10.992202 3398 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="adcc1c57-6737-4a79-aac1-c9d50d236003" containerName="mount-cgroup" Dec 13 01:37:10.992734 kubelet[3398]: E1213 01:37:10.992210 3398 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="adcc1c57-6737-4a79-aac1-c9d50d236003" containerName="apply-sysctl-overwrites" Dec 13 01:37:10.992734 kubelet[3398]: I1213 01:37:10.992304 3398 memory_manager.go:354] "RemoveStaleState removing state" podUID="18605ec1-8de2-4550-af1f-ab174d5ce203" containerName="cilium-operator" Dec 13 01:37:10.992734 kubelet[3398]: I1213 01:37:10.992318 3398 memory_manager.go:354] "RemoveStaleState removing state" podUID="adcc1c57-6737-4a79-aac1-c9d50d236003" containerName="cilium-agent" Dec 13 01:37:10.992964 systemd-logind[1950]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:37:10.996446 systemd[1]: sshd@26-172.31.20.66:22-139.178.68.195:52634.service: Deactivated successfully. Dec 13 01:37:11.006884 systemd[1]: session-27.scope: Deactivated successfully. 
Dec 13 01:37:11.029296 kubelet[3398]: I1213 01:37:11.029221 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-cni-path\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.029774 kubelet[3398]: I1213 01:37:11.029301 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-hostproc\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.029774 kubelet[3398]: I1213 01:37:11.029331 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-cilium-ipsec-secrets\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.029774 kubelet[3398]: I1213 01:37:11.029355 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-cilium-cgroup\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.029774 kubelet[3398]: I1213 01:37:11.029380 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-etc-cni-netd\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.029774 kubelet[3398]: I1213 01:37:11.029404 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7x6k\" (UniqueName: \"kubernetes.io/projected/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-kube-api-access-d7x6k\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.029774 kubelet[3398]: I1213 01:37:11.029431 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-cilium-run\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.034218 kubelet[3398]: I1213 01:37:11.029455 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-host-proc-sys-kernel\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.034218 kubelet[3398]: I1213 01:37:11.029478 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-bpf-maps\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.034218 kubelet[3398]: I1213 01:37:11.029501 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-hubble-tls\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.034218 kubelet[3398]: I1213 01:37:11.032675 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-clustermesh-secrets\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.034218 kubelet[3398]: I1213 01:37:11.032716 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-cilium-config-path\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.034218 kubelet[3398]: I1213 01:37:11.032744 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-host-proc-sys-net\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.034455 kubelet[3398]: I1213 01:37:11.032772 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-xtables-lock\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.034455 kubelet[3398]: I1213 01:37:11.032796 3398 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f4ec781-94c6-4a0c-841c-c6c1cf0474ed-lib-modules\") pod \"cilium-tnjxd\" (UID: \"8f4ec781-94c6-4a0c-841c-c6c1cf0474ed\") " pod="kube-system/cilium-tnjxd"
Dec 13 01:37:11.035144 systemd-logind[1950]: Removed session 27.
Dec 13 01:37:11.040904 systemd[1]: Started sshd@27-172.31.20.66:22-139.178.68.195:52640.service - OpenSSH per-connection server daemon (139.178.68.195:52640).
Dec 13 01:37:11.051051 systemd[1]: Created slice kubepods-burstable-pod8f4ec781_94c6_4a0c_841c_c6c1cf0474ed.slice - libcontainer container kubepods-burstable-pod8f4ec781_94c6_4a0c_841c_c6c1cf0474ed.slice.
Dec 13 01:37:11.233229 sshd[5184]: Accepted publickey for core from 139.178.68.195 port 52640 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:37:11.236357 sshd[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:11.246583 systemd-logind[1950]: New session 28 of user core.
Dec 13 01:37:11.250724 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:37:11.369548 containerd[1960]: time="2024-12-13T01:37:11.367981565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tnjxd,Uid:8f4ec781-94c6-4a0c-841c-c6c1cf0474ed,Namespace:kube-system,Attempt:0,}"
Dec 13 01:37:11.369453 sshd[5184]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:11.391412 systemd[1]: sshd@27-172.31.20.66:22-139.178.68.195:52640.service: Deactivated successfully.
Dec 13 01:37:11.407002 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:37:11.432845 systemd-logind[1950]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:37:11.436155 containerd[1960]: time="2024-12-13T01:37:11.436048761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:37:11.443642 systemd[1]: Started sshd@28-172.31.20.66:22-139.178.68.195:52656.service - OpenSSH per-connection server daemon (139.178.68.195:52656).
Dec 13 01:37:11.447465 systemd-logind[1950]: Removed session 28.
Dec 13 01:37:11.448176 containerd[1960]: time="2024-12-13T01:37:11.436115460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:37:11.448176 containerd[1960]: time="2024-12-13T01:37:11.445923453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:11.448176 containerd[1960]: time="2024-12-13T01:37:11.446375771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:37:11.481051 systemd[1]: Started cri-containerd-38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1.scope - libcontainer container 38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1.
Dec 13 01:37:11.523425 containerd[1960]: time="2024-12-13T01:37:11.523292746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tnjxd,Uid:8f4ec781-94c6-4a0c-841c-c6c1cf0474ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\""
Dec 13 01:37:11.527552 containerd[1960]: time="2024-12-13T01:37:11.527454136Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:37:11.569969 containerd[1960]: time="2024-12-13T01:37:11.569902794Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b39b0568953b4326d9db208955498f3a0a43dcfc0b315d0bc43bba1113ec0fda\""
Dec 13 01:37:11.571490 containerd[1960]: time="2024-12-13T01:37:11.571339786Z" level=info msg="StartContainer for \"b39b0568953b4326d9db208955498f3a0a43dcfc0b315d0bc43bba1113ec0fda\""
Dec 13 01:37:11.605825 systemd[1]: Started cri-containerd-b39b0568953b4326d9db208955498f3a0a43dcfc0b315d0bc43bba1113ec0fda.scope - libcontainer container b39b0568953b4326d9db208955498f3a0a43dcfc0b315d0bc43bba1113ec0fda.
Dec 13 01:37:11.607246 sshd[5209]: Accepted publickey for core from 139.178.68.195 port 52656 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:37:11.610629 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:11.625708 systemd-logind[1950]: New session 29 of user core.
Dec 13 01:37:11.633898 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:37:11.667547 containerd[1960]: time="2024-12-13T01:37:11.666028719Z" level=info msg="StartContainer for \"b39b0568953b4326d9db208955498f3a0a43dcfc0b315d0bc43bba1113ec0fda\" returns successfully"
Dec 13 01:37:11.678400 systemd[1]: cri-containerd-b39b0568953b4326d9db208955498f3a0a43dcfc0b315d0bc43bba1113ec0fda.scope: Deactivated successfully.
Dec 13 01:37:11.729363 containerd[1960]: time="2024-12-13T01:37:11.729282022Z" level=info msg="shim disconnected" id=b39b0568953b4326d9db208955498f3a0a43dcfc0b315d0bc43bba1113ec0fda namespace=k8s.io
Dec 13 01:37:11.729736 containerd[1960]: time="2024-12-13T01:37:11.729630284Z" level=warning msg="cleaning up after shim disconnected" id=b39b0568953b4326d9db208955498f3a0a43dcfc0b315d0bc43bba1113ec0fda namespace=k8s.io
Dec 13 01:37:11.729736 containerd[1960]: time="2024-12-13T01:37:11.729654753Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:12.645649 kubelet[3398]: I1213 01:37:12.641034 3398 setters.go:580] "Node became not ready" node="ip-172-31-20-66" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:37:12Z","lastTransitionTime":"2024-12-13T01:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:37:12.666221 containerd[1960]: time="2024-12-13T01:37:12.665991859Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:37:12.711485 containerd[1960]: time="2024-12-13T01:37:12.711325134Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa\""
Dec 13 01:37:12.715033 containerd[1960]: time="2024-12-13T01:37:12.714278325Z" level=info msg="StartContainer for \"56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa\""
Dec 13 01:37:12.807162 systemd[1]: Started cri-containerd-56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa.scope - libcontainer container 56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa.
Dec 13 01:37:12.852695 containerd[1960]: time="2024-12-13T01:37:12.852509616Z" level=info msg="StartContainer for \"56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa\" returns successfully"
Dec 13 01:37:12.863661 systemd[1]: cri-containerd-56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa.scope: Deactivated successfully.
Dec 13 01:37:12.908431 containerd[1960]: time="2024-12-13T01:37:12.907802841Z" level=info msg="shim disconnected" id=56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa namespace=k8s.io
Dec 13 01:37:12.908431 containerd[1960]: time="2024-12-13T01:37:12.907903834Z" level=warning msg="cleaning up after shim disconnected" id=56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa namespace=k8s.io
Dec 13 01:37:12.908431 containerd[1960]: time="2024-12-13T01:37:12.907976790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:13.153448 systemd[1]: run-containerd-runc-k8s.io-56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa-runc.u5EiOM.mount: Deactivated successfully.
Dec 13 01:37:13.153589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56f8e2f087f7d78c3384ec3fffa7fa1ef7194a2f4f060c44c1a63d9ca8686aaa-rootfs.mount: Deactivated successfully.
Dec 13 01:37:13.670803 containerd[1960]: time="2024-12-13T01:37:13.670738649Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:37:13.699194 containerd[1960]: time="2024-12-13T01:37:13.697915075Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d\""
Dec 13 01:37:13.702276 containerd[1960]: time="2024-12-13T01:37:13.700320925Z" level=info msg="StartContainer for \"1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d\""
Dec 13 01:37:13.785772 systemd[1]: Started cri-containerd-1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d.scope - libcontainer container 1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d.
Dec 13 01:37:13.841619 containerd[1960]: time="2024-12-13T01:37:13.841228965Z" level=info msg="StartContainer for \"1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d\" returns successfully"
Dec 13 01:37:13.853380 systemd[1]: cri-containerd-1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d.scope: Deactivated successfully.
Dec 13 01:37:13.913330 containerd[1960]: time="2024-12-13T01:37:13.913265471Z" level=info msg="shim disconnected" id=1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d namespace=k8s.io
Dec 13 01:37:13.913330 containerd[1960]: time="2024-12-13T01:37:13.913327786Z" level=warning msg="cleaning up after shim disconnected" id=1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d namespace=k8s.io
Dec 13 01:37:13.914085 containerd[1960]: time="2024-12-13T01:37:13.913339787Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:14.153129 systemd[1]: run-containerd-runc-k8s.io-1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d-runc.sTozak.mount: Deactivated successfully.
Dec 13 01:37:14.153250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1386602b8cd114e672d1e9930d419259282ce9f718f44de03bcbee6c6460287d-rootfs.mount: Deactivated successfully.
Dec 13 01:37:14.672128 containerd[1960]: time="2024-12-13T01:37:14.671944022Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:37:14.698618 containerd[1960]: time="2024-12-13T01:37:14.698572472Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7\""
Dec 13 01:37:14.701692 containerd[1960]: time="2024-12-13T01:37:14.699842918Z" level=info msg="StartContainer for \"0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7\""
Dec 13 01:37:14.745919 systemd[1]: Started cri-containerd-0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7.scope - libcontainer container 0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7.
Dec 13 01:37:14.774166 systemd[1]: cri-containerd-0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7.scope: Deactivated successfully.
Dec 13 01:37:14.784132 containerd[1960]: time="2024-12-13T01:37:14.783846553Z" level=info msg="StartContainer for \"0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7\" returns successfully"
Dec 13 01:37:14.784132 containerd[1960]: time="2024-12-13T01:37:14.783723719Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f4ec781_94c6_4a0c_841c_c6c1cf0474ed.slice/cri-containerd-0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7.scope/memory.events\": no such file or directory"
Dec 13 01:37:14.840938 containerd[1960]: time="2024-12-13T01:37:14.840657783Z" level=info msg="shim disconnected" id=0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7 namespace=k8s.io
Dec 13 01:37:14.840938 containerd[1960]: time="2024-12-13T01:37:14.840720652Z" level=warning msg="cleaning up after shim disconnected" id=0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7 namespace=k8s.io
Dec 13 01:37:14.840938 containerd[1960]: time="2024-12-13T01:37:14.840731019Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:15.153285 systemd[1]: run-containerd-runc-k8s.io-0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7-runc.H6tezb.mount: Deactivated successfully.
Dec 13 01:37:15.153396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e8d167aea03747666ef713e0d633752673348273ff5588eb2b8c9268e9003e7-rootfs.mount: Deactivated successfully.
Dec 13 01:37:15.301372 kubelet[3398]: E1213 01:37:15.301329 3398 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:37:15.676615 containerd[1960]: time="2024-12-13T01:37:15.676437428Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:37:15.707342 containerd[1960]: time="2024-12-13T01:37:15.707291553Z" level=info msg="CreateContainer within sandbox \"38c9150bec038cfa007711f6870fb0b5292df0153d56f0083020c87981d790c1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74fc01020ae499575c1d6c0b879faa6dab8a6999398df267752794f06e12ee69\""
Dec 13 01:37:15.709753 containerd[1960]: time="2024-12-13T01:37:15.709464189Z" level=info msg="StartContainer for \"74fc01020ae499575c1d6c0b879faa6dab8a6999398df267752794f06e12ee69\""
Dec 13 01:37:15.745723 systemd[1]: Started cri-containerd-74fc01020ae499575c1d6c0b879faa6dab8a6999398df267752794f06e12ee69.scope - libcontainer container 74fc01020ae499575c1d6c0b879faa6dab8a6999398df267752794f06e12ee69.
Dec 13 01:37:15.782971 containerd[1960]: time="2024-12-13T01:37:15.782925571Z" level=info msg="StartContainer for \"74fc01020ae499575c1d6c0b879faa6dab8a6999398df267752794f06e12ee69\" returns successfully"
Dec 13 01:37:16.473651 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:37:16.699982 kubelet[3398]: I1213 01:37:16.699920 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tnjxd" podStartSLOduration=6.699898737 podStartE2EDuration="6.699898737s" podCreationTimestamp="2024-12-13 01:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:37:16.699383102 +0000 UTC m=+116.967506558" watchObservedRunningTime="2024-12-13 01:37:16.699898737 +0000 UTC m=+116.968022182"
Dec 13 01:37:18.419037 systemd[1]: run-containerd-runc-k8s.io-74fc01020ae499575c1d6c0b879faa6dab8a6999398df267752794f06e12ee69-runc.82He7N.mount: Deactivated successfully.
Dec 13 01:37:20.022271 containerd[1960]: time="2024-12-13T01:37:20.018587344Z" level=info msg="StopPodSandbox for \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\""
Dec 13 01:37:20.022271 containerd[1960]: time="2024-12-13T01:37:20.019309333Z" level=info msg="TearDown network for sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" successfully"
Dec 13 01:37:20.022271 containerd[1960]: time="2024-12-13T01:37:20.019339962Z" level=info msg="StopPodSandbox for \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" returns successfully"
Dec 13 01:37:20.031567 containerd[1960]: time="2024-12-13T01:37:20.028971965Z" level=info msg="RemovePodSandbox for \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\""
Dec 13 01:37:20.031567 containerd[1960]: time="2024-12-13T01:37:20.029415330Z" level=info msg="Forcibly stopping sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\""
Dec 13 01:37:20.031567 containerd[1960]: time="2024-12-13T01:37:20.029511959Z" level=info msg="TearDown network for sandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" successfully"
Dec 13 01:37:20.047240 containerd[1960]: time="2024-12-13T01:37:20.047124450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:37:20.047393 containerd[1960]: time="2024-12-13T01:37:20.047266929Z" level=info msg="RemovePodSandbox \"193f5850beb3141f10f93f664459bdfe8ba70e1e931e863806e2359fe352edd7\" returns successfully"
Dec 13 01:37:20.047947 containerd[1960]: time="2024-12-13T01:37:20.047914616Z" level=info msg="StopPodSandbox for \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\""
Dec 13 01:37:20.048450 containerd[1960]: time="2024-12-13T01:37:20.048206418Z" level=info msg="TearDown network for sandbox \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\" successfully"
Dec 13 01:37:20.048450 containerd[1960]: time="2024-12-13T01:37:20.048227166Z" level=info msg="StopPodSandbox for \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\" returns successfully"
Dec 13 01:37:20.048976 containerd[1960]: time="2024-12-13T01:37:20.048946054Z" level=info msg="RemovePodSandbox for \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\""
Dec 13 01:37:20.049063 containerd[1960]: time="2024-12-13T01:37:20.048978572Z" level=info msg="Forcibly stopping sandbox \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\""
Dec 13 01:37:20.049063 containerd[1960]: time="2024-12-13T01:37:20.049042466Z" level=info msg="TearDown network for sandbox \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\" successfully"
Dec 13 01:37:20.054775 containerd[1960]: time="2024-12-13T01:37:20.054478214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:37:20.054775 containerd[1960]: time="2024-12-13T01:37:20.054650383Z" level=info msg="RemovePodSandbox \"02def459a521d4edb608514d98c65cce9a7f1b240a34735dc7d5ab545abed973\" returns successfully"
Dec 13 01:37:20.298861 systemd-networkd[1809]: lxc_health: Link UP
Dec 13 01:37:20.304718 systemd-networkd[1809]: lxc_health: Gained carrier
Dec 13 01:37:20.311292 (udev-worker)[6048]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:37:20.693295 systemd[1]: run-containerd-runc-k8s.io-74fc01020ae499575c1d6c0b879faa6dab8a6999398df267752794f06e12ee69-runc.3quR0r.mount: Deactivated successfully.
Dec 13 01:37:21.655857 systemd-networkd[1809]: lxc_health: Gained IPv6LL
Dec 13 01:37:24.348880 ntpd[1942]: Listen normally on 14 lxc_health [fe80::87a:4dff:fe17:3fb3%14]:123
Dec 13 01:37:24.349716 ntpd[1942]: 13 Dec 01:37:24 ntpd[1942]: Listen normally on 14 lxc_health [fe80::87a:4dff:fe17:3fb3%14]:123
Dec 13 01:37:27.647297 sshd[5209]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:27.652420 systemd[1]: sshd@28-172.31.20.66:22-139.178.68.195:52656.service: Deactivated successfully.
Dec 13 01:37:27.655175 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:37:27.656410 systemd-logind[1950]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:37:27.658109 systemd-logind[1950]: Removed session 29.
Dec 13 01:38:03.248290 kubelet[3398]: E1213 01:38:03.247723 3398 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-66?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:38:03.453140 systemd[1]: cri-containerd-27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad.scope: Deactivated successfully.
Dec 13 01:38:03.454689 systemd[1]: cri-containerd-27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad.scope: Consumed 3.546s CPU time, 25.8M memory peak, 0B memory swap peak.
Dec 13 01:38:03.496409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad-rootfs.mount: Deactivated successfully.
Dec 13 01:38:03.514986 containerd[1960]: time="2024-12-13T01:38:03.514727302Z" level=info msg="shim disconnected" id=27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad namespace=k8s.io
Dec 13 01:38:03.514986 containerd[1960]: time="2024-12-13T01:38:03.514871234Z" level=warning msg="cleaning up after shim disconnected" id=27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad namespace=k8s.io
Dec 13 01:38:03.514986 containerd[1960]: time="2024-12-13T01:38:03.514886923Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:03.825307 kubelet[3398]: I1213 01:38:03.824338 3398 scope.go:117] "RemoveContainer" containerID="27db665ce6dac36c7cae6ca69247a6bf0de75fe937f6d5b24f7a2672066589ad"
Dec 13 01:38:03.838286 containerd[1960]: time="2024-12-13T01:38:03.838233779Z" level=info msg="CreateContainer within sandbox \"c7c3415b6ae99650322ef0da3bd4517eac5c4ab3bf4e6cb37c5fbf0aec14760e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:38:03.869652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618106570.mount: Deactivated successfully.
Dec 13 01:38:03.877630 containerd[1960]: time="2024-12-13T01:38:03.872991066Z" level=info msg="CreateContainer within sandbox \"c7c3415b6ae99650322ef0da3bd4517eac5c4ab3bf4e6cb37c5fbf0aec14760e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"62e09390c42f2b8166713afb457e59da6c4c3d9f15f0219be73af60205f7dbd3\""
Dec 13 01:38:03.877630 containerd[1960]: time="2024-12-13T01:38:03.874286033Z" level=info msg="StartContainer for \"62e09390c42f2b8166713afb457e59da6c4c3d9f15f0219be73af60205f7dbd3\""
Dec 13 01:38:03.931744 systemd[1]: Started cri-containerd-62e09390c42f2b8166713afb457e59da6c4c3d9f15f0219be73af60205f7dbd3.scope - libcontainer container 62e09390c42f2b8166713afb457e59da6c4c3d9f15f0219be73af60205f7dbd3.
Dec 13 01:38:03.986215 containerd[1960]: time="2024-12-13T01:38:03.986166176Z" level=info msg="StartContainer for \"62e09390c42f2b8166713afb457e59da6c4c3d9f15f0219be73af60205f7dbd3\" returns successfully"
Dec 13 01:38:07.876901 systemd[1]: cri-containerd-9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35.scope: Deactivated successfully.
Dec 13 01:38:07.878904 systemd[1]: cri-containerd-9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35.scope: Consumed 1.606s CPU time, 17.7M memory peak, 0B memory swap peak.
Dec 13 01:38:07.909631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35-rootfs.mount: Deactivated successfully.
Dec 13 01:38:07.924200 containerd[1960]: time="2024-12-13T01:38:07.924135426Z" level=info msg="shim disconnected" id=9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35 namespace=k8s.io
Dec 13 01:38:07.924200 containerd[1960]: time="2024-12-13T01:38:07.924197152Z" level=warning msg="cleaning up after shim disconnected" id=9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35 namespace=k8s.io
Dec 13 01:38:07.924200 containerd[1960]: time="2024-12-13T01:38:07.924208147Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:38:08.848829 kubelet[3398]: I1213 01:38:08.848800 3398 scope.go:117] "RemoveContainer" containerID="9758b7eb85a05c39a85b19f2a671aca639025a65c277f42446a1ac5208be0b35"
Dec 13 01:38:08.852920 containerd[1960]: time="2024-12-13T01:38:08.852881064Z" level=info msg="CreateContainer within sandbox \"adb61a92db1047e1e6a7315eb932b50f144292d70bf4f875d1f40a84dd76baf4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:38:08.896446 containerd[1960]: time="2024-12-13T01:38:08.896393279Z" level=info msg="CreateContainer within sandbox \"adb61a92db1047e1e6a7315eb932b50f144292d70bf4f875d1f40a84dd76baf4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2c5ac4f76e5c7957959edeea9f1a5af3259850311b3b73b905a097f6476eac78\""
Dec 13 01:38:08.897186 containerd[1960]: time="2024-12-13T01:38:08.897151622Z" level=info msg="StartContainer for \"2c5ac4f76e5c7957959edeea9f1a5af3259850311b3b73b905a097f6476eac78\""
Dec 13 01:38:08.946330 systemd[1]: run-containerd-runc-k8s.io-2c5ac4f76e5c7957959edeea9f1a5af3259850311b3b73b905a097f6476eac78-runc.nc6MDn.mount: Deactivated successfully.
Dec 13 01:38:08.955715 systemd[1]: Started cri-containerd-2c5ac4f76e5c7957959edeea9f1a5af3259850311b3b73b905a097f6476eac78.scope - libcontainer container 2c5ac4f76e5c7957959edeea9f1a5af3259850311b3b73b905a097f6476eac78.
Dec 13 01:38:09.013506 containerd[1960]: time="2024-12-13T01:38:09.013459686Z" level=info msg="StartContainer for \"2c5ac4f76e5c7957959edeea9f1a5af3259850311b3b73b905a097f6476eac78\" returns successfully"
Dec 13 01:38:13.249013 kubelet[3398]: E1213 01:38:13.248953 3398 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-66?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:38:23.250167 kubelet[3398]: E1213 01:38:23.249686 3398 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-66?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"