Jan 29 10:48:29.188030 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 29 10:48:29.188073 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 29 10:48:29.188097 kernel: KASLR disabled due to lack of seed
Jan 29 10:48:29.188113 kernel: efi: EFI v2.7 by EDK II
Jan 29 10:48:29.188128 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Jan 29 10:48:29.188144 kernel: secureboot: Secure boot disabled
Jan 29 10:48:29.188161 kernel: ACPI: Early table checksum verification disabled
Jan 29 10:48:29.188176 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 29 10:48:29.188191 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 29 10:48:29.188206 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 29 10:48:29.188226 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 29 10:48:29.188243 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 29 10:48:29.188258 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 29 10:48:29.188274 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 29 10:48:29.188292 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 29 10:48:29.188312 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 29 10:48:29.190422 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 29 10:48:29.190449 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 29 10:48:29.190466 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 29 10:48:29.190483 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 29 10:48:29.190499 kernel: printk: bootconsole [uart0] enabled
Jan 29 10:48:29.190515 kernel: NUMA: Failed to initialise from firmware
Jan 29 10:48:29.190532 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 29 10:48:29.190548 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 29 10:48:29.190565 kernel: Zone ranges:
Jan 29 10:48:29.190581 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 10:48:29.190606 kernel: DMA32 empty
Jan 29 10:48:29.190623 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 29 10:48:29.190639 kernel: Movable zone start for each node
Jan 29 10:48:29.190655 kernel: Early memory node ranges
Jan 29 10:48:29.190671 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 29 10:48:29.190687 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 29 10:48:29.190703 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 29 10:48:29.190719 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 29 10:48:29.190734 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 29 10:48:29.190750 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 29 10:48:29.190767 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 29 10:48:29.190783 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 29 10:48:29.190803 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 29 10:48:29.190820 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 29 10:48:29.190843 kernel: psci: probing for conduit method from ACPI.
Jan 29 10:48:29.190860 kernel: psci: PSCIv1.0 detected in firmware.
Jan 29 10:48:29.190877 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 10:48:29.190898 kernel: psci: Trusted OS migration not required
Jan 29 10:48:29.190915 kernel: psci: SMC Calling Convention v1.1
Jan 29 10:48:29.190932 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 10:48:29.190949 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 10:48:29.190966 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 10:48:29.190983 kernel: Detected PIPT I-cache on CPU0
Jan 29 10:48:29.191000 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 10:48:29.191017 kernel: CPU features: detected: Spectre-v2
Jan 29 10:48:29.191034 kernel: CPU features: detected: Spectre-v3a
Jan 29 10:48:29.191051 kernel: CPU features: detected: Spectre-BHB
Jan 29 10:48:29.191068 kernel: CPU features: detected: ARM erratum 1742098
Jan 29 10:48:29.191084 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 29 10:48:29.191106 kernel: alternatives: applying boot alternatives
Jan 29 10:48:29.191125 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 10:48:29.191144 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 10:48:29.191161 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 10:48:29.191178 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 10:48:29.191195 kernel: Fallback order for Node 0: 0
Jan 29 10:48:29.191212 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 29 10:48:29.191228 kernel: Policy zone: Normal
Jan 29 10:48:29.191245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 10:48:29.191262 kernel: software IO TLB: area num 2.
Jan 29 10:48:29.191284 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 29 10:48:29.191302 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Jan 29 10:48:29.191334 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 10:48:29.191357 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 10:48:29.191375 kernel: rcu: RCU event tracing is enabled.
Jan 29 10:48:29.191393 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 10:48:29.191410 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 10:48:29.191427 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 10:48:29.191444 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 10:48:29.191461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 10:48:29.191478 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 10:48:29.191501 kernel: GICv3: 96 SPIs implemented
Jan 29 10:48:29.191518 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 10:48:29.191535 kernel: Root IRQ handler: gic_handle_irq
Jan 29 10:48:29.191552 kernel: GICv3: GICv3 features: 16 PPIs
Jan 29 10:48:29.191568 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 29 10:48:29.191585 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 29 10:48:29.191602 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 10:48:29.191619 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 10:48:29.191636 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 29 10:48:29.191653 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 29 10:48:29.191670 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 29 10:48:29.191687 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 10:48:29.191709 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 29 10:48:29.191727 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 29 10:48:29.191744 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 29 10:48:29.191761 kernel: Console: colour dummy device 80x25
Jan 29 10:48:29.191778 kernel: printk: console [tty1] enabled
Jan 29 10:48:29.191796 kernel: ACPI: Core revision 20230628
Jan 29 10:48:29.191814 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 29 10:48:29.191831 kernel: pid_max: default: 32768 minimum: 301
Jan 29 10:48:29.191848 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 10:48:29.191866 kernel: landlock: Up and running.
Jan 29 10:48:29.191887 kernel: SELinux: Initializing.
Jan 29 10:48:29.191904 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 10:48:29.191922 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 10:48:29.191939 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 10:48:29.191957 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 10:48:29.191974 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 10:48:29.191992 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 10:48:29.192009 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 29 10:48:29.192031 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 29 10:48:29.192048 kernel: Remapping and enabling EFI services.
Jan 29 10:48:29.192065 kernel: smp: Bringing up secondary CPUs ...
Jan 29 10:48:29.192083 kernel: Detected PIPT I-cache on CPU1
Jan 29 10:48:29.192100 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 29 10:48:29.192117 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 29 10:48:29.192135 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 29 10:48:29.192152 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 10:48:29.192169 kernel: SMP: Total of 2 processors activated.
Jan 29 10:48:29.192186 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 10:48:29.192207 kernel: CPU features: detected: 32-bit EL1 Support
Jan 29 10:48:29.192225 kernel: CPU features: detected: CRC32 instructions
Jan 29 10:48:29.192253 kernel: CPU: All CPU(s) started at EL1
Jan 29 10:48:29.192275 kernel: alternatives: applying system-wide alternatives
Jan 29 10:48:29.192293 kernel: devtmpfs: initialized
Jan 29 10:48:29.192311 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 10:48:29.197395 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 10:48:29.197424 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 10:48:29.197443 kernel: SMBIOS 3.0.0 present.
Jan 29 10:48:29.197472 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 29 10:48:29.197491 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 10:48:29.197509 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 10:48:29.197528 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 10:48:29.197546 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 10:48:29.197565 kernel: audit: initializing netlink subsys (disabled)
Jan 29 10:48:29.197583 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1
Jan 29 10:48:29.197606 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 10:48:29.197625 kernel: cpuidle: using governor menu
Jan 29 10:48:29.197643 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 10:48:29.197661 kernel: ASID allocator initialised with 65536 entries
Jan 29 10:48:29.197679 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 10:48:29.197698 kernel: Serial: AMBA PL011 UART driver
Jan 29 10:48:29.197717 kernel: Modules: 17360 pages in range for non-PLT usage
Jan 29 10:48:29.197735 kernel: Modules: 508880 pages in range for PLT usage
Jan 29 10:48:29.197753 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 10:48:29.197776 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 10:48:29.197795 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 10:48:29.197813 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 10:48:29.197831 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 10:48:29.197849 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 10:48:29.197868 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 10:48:29.197886 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 10:48:29.197904 kernel: ACPI: Added _OSI(Module Device)
Jan 29 10:48:29.197922 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 10:48:29.197944 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 10:48:29.197963 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 10:48:29.197982 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 10:48:29.198000 kernel: ACPI: Interpreter enabled
Jan 29 10:48:29.198017 kernel: ACPI: Using GIC for interrupt routing
Jan 29 10:48:29.198036 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 10:48:29.198054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 29 10:48:29.199383 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 10:48:29.199627 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 10:48:29.199827 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 10:48:29.200024 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 29 10:48:29.200227 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 29 10:48:29.200252 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 29 10:48:29.200271 kernel: acpiphp: Slot [1] registered
Jan 29 10:48:29.200290 kernel: acpiphp: Slot [2] registered
Jan 29 10:48:29.200308 kernel: acpiphp: Slot [3] registered
Jan 29 10:48:29.201374 kernel: acpiphp: Slot [4] registered
Jan 29 10:48:29.201397 kernel: acpiphp: Slot [5] registered
Jan 29 10:48:29.201415 kernel: acpiphp: Slot [6] registered
Jan 29 10:48:29.201434 kernel: acpiphp: Slot [7] registered
Jan 29 10:48:29.201451 kernel: acpiphp: Slot [8] registered
Jan 29 10:48:29.201470 kernel: acpiphp: Slot [9] registered
Jan 29 10:48:29.201488 kernel: acpiphp: Slot [10] registered
Jan 29 10:48:29.201506 kernel: acpiphp: Slot [11] registered
Jan 29 10:48:29.201524 kernel: acpiphp: Slot [12] registered
Jan 29 10:48:29.201542 kernel: acpiphp: Slot [13] registered
Jan 29 10:48:29.201567 kernel: acpiphp: Slot [14] registered
Jan 29 10:48:29.201585 kernel: acpiphp: Slot [15] registered
Jan 29 10:48:29.201603 kernel: acpiphp: Slot [16] registered
Jan 29 10:48:29.201621 kernel: acpiphp: Slot [17] registered
Jan 29 10:48:29.201639 kernel: acpiphp: Slot [18] registered
Jan 29 10:48:29.201657 kernel: acpiphp: Slot [19] registered
Jan 29 10:48:29.201674 kernel: acpiphp: Slot [20] registered
Jan 29 10:48:29.201693 kernel: acpiphp: Slot [21] registered
Jan 29 10:48:29.201710 kernel: acpiphp: Slot [22] registered
Jan 29 10:48:29.201733 kernel: acpiphp: Slot [23] registered
Jan 29 10:48:29.201751 kernel: acpiphp: Slot [24] registered
Jan 29 10:48:29.201769 kernel: acpiphp: Slot [25] registered
Jan 29 10:48:29.201787 kernel: acpiphp: Slot [26] registered
Jan 29 10:48:29.201805 kernel: acpiphp: Slot [27] registered
Jan 29 10:48:29.201822 kernel: acpiphp: Slot [28] registered
Jan 29 10:48:29.201840 kernel: acpiphp: Slot [29] registered
Jan 29 10:48:29.201858 kernel: acpiphp: Slot [30] registered
Jan 29 10:48:29.201876 kernel: acpiphp: Slot [31] registered
Jan 29 10:48:29.201894 kernel: PCI host bridge to bus 0000:00
Jan 29 10:48:29.202123 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 29 10:48:29.202314 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 10:48:29.203430 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 29 10:48:29.203618 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 29 10:48:29.203857 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 29 10:48:29.204083 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 29 10:48:29.204300 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 29 10:48:29.205775 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 29 10:48:29.206005 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 29 10:48:29.206426 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 29 10:48:29.206667 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 29 10:48:29.206924 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 29 10:48:29.207136 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 29 10:48:29.207402 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 29 10:48:29.207611 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 29 10:48:29.207812 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 29 10:48:29.208015 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 29 10:48:29.208220 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 29 10:48:29.208465 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 29 10:48:29.208683 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 29 10:48:29.208882 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 29 10:48:29.209075 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 10:48:29.209263 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 29 10:48:29.209289 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 10:48:29.209308 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 10:48:29.209367 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 10:48:29.209389 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 10:48:29.209408 kernel: iommu: Default domain type: Translated
Jan 29 10:48:29.209434 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 10:48:29.209453 kernel: efivars: Registered efivars operations
Jan 29 10:48:29.209471 kernel: vgaarb: loaded
Jan 29 10:48:29.209489 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 10:48:29.209508 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 10:48:29.209527 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 10:48:29.209545 kernel: pnp: PnP ACPI init
Jan 29 10:48:29.209783 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 29 10:48:29.209818 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 10:48:29.209837 kernel: NET: Registered PF_INET protocol family
Jan 29 10:48:29.209855 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 10:48:29.209874 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 10:48:29.209892 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 10:48:29.209911 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 10:48:29.209929 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 10:48:29.209947 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 10:48:29.209965 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 10:48:29.209988 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 10:48:29.210006 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 10:48:29.210024 kernel: PCI: CLS 0 bytes, default 64
Jan 29 10:48:29.210042 kernel: kvm [1]: HYP mode not available
Jan 29 10:48:29.210060 kernel: Initialise system trusted keyrings
Jan 29 10:48:29.210078 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 10:48:29.210096 kernel: Key type asymmetric registered
Jan 29 10:48:29.210114 kernel: Asymmetric key parser 'x509' registered
Jan 29 10:48:29.210132 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 10:48:29.210155 kernel: io scheduler mq-deadline registered
Jan 29 10:48:29.210173 kernel: io scheduler kyber registered
Jan 29 10:48:29.210191 kernel: io scheduler bfq registered
Jan 29 10:48:29.214407 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 29 10:48:29.214455 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 10:48:29.214474 kernel: ACPI: button: Power Button [PWRB]
Jan 29 10:48:29.214494 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 29 10:48:29.214512 kernel: ACPI: button: Sleep Button [SLPB]
Jan 29 10:48:29.214543 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 10:48:29.214562 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 10:48:29.214795 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 29 10:48:29.214822 kernel: printk: console [ttyS0] disabled
Jan 29 10:48:29.214841 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 29 10:48:29.214859 kernel: printk: console [ttyS0] enabled
Jan 29 10:48:29.214878 kernel: printk: bootconsole [uart0] disabled
Jan 29 10:48:29.214896 kernel: thunder_xcv, ver 1.0
Jan 29 10:48:29.214915 kernel: thunder_bgx, ver 1.0
Jan 29 10:48:29.214933 kernel: nicpf, ver 1.0
Jan 29 10:48:29.214957 kernel: nicvf, ver 1.0
Jan 29 10:48:29.215168 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 10:48:29.216442 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T10:48:28 UTC (1738147708)
Jan 29 10:48:29.216483 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 10:48:29.216503 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 29 10:48:29.217389 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 10:48:29.217419 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 10:48:29.217447 kernel: NET: Registered PF_INET6 protocol family
Jan 29 10:48:29.217467 kernel: Segment Routing with IPv6
Jan 29 10:48:29.217485 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 10:48:29.217503 kernel: NET: Registered PF_PACKET protocol family
Jan 29 10:48:29.217521 kernel: Key type dns_resolver registered
Jan 29 10:48:29.217539 kernel: registered taskstats version 1
Jan 29 10:48:29.217558 kernel: Loading compiled-in X.509 certificates
Jan 29 10:48:29.217577 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 29 10:48:29.217595 kernel: Key type .fscrypt registered
Jan 29 10:48:29.217613 kernel: Key type fscrypt-provisioning registered
Jan 29 10:48:29.217637 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 10:48:29.217655 kernel: ima: Allocated hash algorithm: sha1
Jan 29 10:48:29.217673 kernel: ima: No architecture policies found
Jan 29 10:48:29.217691 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 10:48:29.217709 kernel: clk: Disabling unused clocks
Jan 29 10:48:29.217727 kernel: Freeing unused kernel memory: 39936K
Jan 29 10:48:29.217745 kernel: Run /init as init process
Jan 29 10:48:29.217763 kernel: with arguments:
Jan 29 10:48:29.217781 kernel: /init
Jan 29 10:48:29.217803 kernel: with environment:
Jan 29 10:48:29.217822 kernel: HOME=/
Jan 29 10:48:29.217840 kernel: TERM=linux
Jan 29 10:48:29.217858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 10:48:29.217881 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 10:48:29.217904 systemd[1]: Detected virtualization amazon.
Jan 29 10:48:29.217925 systemd[1]: Detected architecture arm64.
Jan 29 10:48:29.217949 systemd[1]: Running in initrd.
Jan 29 10:48:29.217969 systemd[1]: No hostname configured, using default hostname.
Jan 29 10:48:29.217988 systemd[1]: Hostname set to .
Jan 29 10:48:29.218008 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 10:48:29.218027 systemd[1]: Queued start job for default target initrd.target.
Jan 29 10:48:29.218049 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:48:29.218069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:48:29.218090 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 10:48:29.218115 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 10:48:29.218136 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 10:48:29.218157 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 10:48:29.218179 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 10:48:29.218200 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 10:48:29.218220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:48:29.218240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:48:29.218264 systemd[1]: Reached target paths.target - Path Units.
Jan 29 10:48:29.218285 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 10:48:29.218304 systemd[1]: Reached target swap.target - Swaps.
Jan 29 10:48:29.219406 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 10:48:29.219449 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 10:48:29.219471 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 10:48:29.219492 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 10:48:29.219512 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 10:48:29.219533 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:48:29.219606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:48:29.219628 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:48:29.219648 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 10:48:29.219668 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 10:48:29.219688 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 10:48:29.219708 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 10:48:29.219729 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 10:48:29.219749 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 10:48:29.219775 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 10:48:29.219795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:48:29.219815 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 10:48:29.219835 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:48:29.219855 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 10:48:29.219927 systemd-journald[251]: Collecting audit messages is disabled.
Jan 29 10:48:29.219975 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 10:48:29.219995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 10:48:29.220016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:48:29.220040 kernel: Bridge firewalling registered
Jan 29 10:48:29.220059 systemd-journald[251]: Journal started
Jan 29 10:48:29.220105 systemd-journald[251]: Runtime Journal (/run/log/journal/ec20c97f6bd38c5e76598c1c7e1741a4) is 8.0M, max 75.3M, 67.3M free.
Jan 29 10:48:29.225061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:48:29.178302 systemd-modules-load[252]: Inserted module 'overlay'
Jan 29 10:48:29.249976 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 10:48:29.222464 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 29 10:48:29.237387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:48:29.238665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:48:29.253734 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:48:29.260592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 10:48:29.267827 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 10:48:29.299941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:48:29.310229 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:48:29.321894 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:48:29.333661 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 10:48:29.338219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:48:29.362036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 10:48:29.402844 dracut-cmdline[287]: dracut-dracut-053
Jan 29 10:48:29.413541 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 10:48:29.445303 systemd-resolved[290]: Positive Trust Anchors:
Jan 29 10:48:29.445838 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 10:48:29.445902 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 10:48:29.605621 kernel: SCSI subsystem initialized
Jan 29 10:48:29.613455 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 10:48:29.625440 kernel: iscsi: registered transport (tcp)
Jan 29 10:48:29.647447 kernel: iscsi: registered transport (qla4xxx)
Jan 29 10:48:29.647521 kernel: QLogic iSCSI HBA Driver
Jan 29 10:48:29.698449 kernel: random: crng init done
Jan 29 10:48:29.698775 systemd-resolved[290]: Defaulting to hostname 'linux'.
Jan 29 10:48:29.700743 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 10:48:29.705815 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:48:29.736898 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 10:48:29.746715 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 10:48:29.780370 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 10:48:29.780468 kernel: device-mapper: uevent: version 1.0.3 Jan 29 10:48:29.780499 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 10:48:29.846365 kernel: raid6: neonx8 gen() 6547 MB/s Jan 29 10:48:29.863352 kernel: raid6: neonx4 gen() 6477 MB/s Jan 29 10:48:29.880352 kernel: raid6: neonx2 gen() 5364 MB/s Jan 29 10:48:29.897352 kernel: raid6: neonx1 gen() 3931 MB/s Jan 29 10:48:29.914352 kernel: raid6: int64x8 gen() 3602 MB/s Jan 29 10:48:29.931353 kernel: raid6: int64x4 gen() 3682 MB/s Jan 29 10:48:29.948353 kernel: raid6: int64x2 gen() 3568 MB/s Jan 29 10:48:29.966108 kernel: raid6: int64x1 gen() 2759 MB/s Jan 29 10:48:29.966145 kernel: raid6: using algorithm neonx8 gen() 6547 MB/s Jan 29 10:48:29.984093 kernel: raid6: .... xor() 4818 MB/s, rmw enabled Jan 29 10:48:29.984134 kernel: raid6: using neon recovery algorithm Jan 29 10:48:29.992156 kernel: xor: measuring software checksum speed Jan 29 10:48:29.992203 kernel: 8regs : 12946 MB/sec Jan 29 10:48:29.993353 kernel: 32regs : 11478 MB/sec Jan 29 10:48:29.995355 kernel: arm64_neon : 9021 MB/sec Jan 29 10:48:29.995386 kernel: xor: using function: 8regs (12946 MB/sec) Jan 29 10:48:30.077375 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 10:48:30.095547 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:48:30.109694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:48:30.142228 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jan 29 10:48:30.150143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:48:30.165655 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 10:48:30.205560 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Jan 29 10:48:30.259416 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 29 10:48:30.272669 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 10:48:30.390907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:48:30.408158 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 10:48:30.451796 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 10:48:30.455832 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:48:30.464500 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:48:30.470891 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 10:48:30.486983 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 10:48:30.532664 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 10:48:30.593725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:48:30.598972 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:48:30.605187 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 10:48:30.605233 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 29 10:48:30.621797 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 10:48:30.622058 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 10:48:30.622300 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c3:51:1c:1f:e7 Jan 29 10:48:30.613395 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:48:30.618497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 10:48:30.618795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:48:30.625089 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 10:48:30.627836 (udev-worker)[544]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:48:30.654867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:48:30.682713 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 10:48:30.682774 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 10:48:30.692392 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 10:48:30.702367 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 10:48:30.702432 kernel: GPT:9289727 != 16777215 Jan 29 10:48:30.702467 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 10:48:30.702492 kernel: GPT:9289727 != 16777215 Jan 29 10:48:30.702516 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 10:48:30.702539 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:48:30.708232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:48:30.718811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:48:30.769365 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:48:30.802359 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (520) Jan 29 10:48:30.832937 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (523) Jan 29 10:48:30.873120 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 10:48:30.910691 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 10:48:30.920462 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 10:48:30.943226 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
Jan 29 10:48:30.984547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 10:48:30.996680 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 10:48:31.010233 disk-uuid[664]: Primary Header is updated. Jan 29 10:48:31.010233 disk-uuid[664]: Secondary Entries is updated. Jan 29 10:48:31.010233 disk-uuid[664]: Secondary Header is updated. Jan 29 10:48:31.022375 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:48:32.039392 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:48:32.039617 disk-uuid[665]: The operation has completed successfully. Jan 29 10:48:32.233234 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 10:48:32.235776 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 10:48:32.281653 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 10:48:32.294562 sh[925]: Success Jan 29 10:48:32.321366 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 10:48:32.444189 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 10:48:32.464570 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 10:48:32.471793 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 10:48:32.503361 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae Jan 29 10:48:32.503422 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:48:32.503448 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 10:48:32.504071 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 10:48:32.505337 kernel: BTRFS info (device dm-0): using free space tree Jan 29 10:48:32.535366 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 10:48:32.539036 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 10:48:32.545608 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 10:48:32.559560 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 10:48:32.570648 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 10:48:32.603120 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:32.603188 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:48:32.604466 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:48:32.611369 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:48:32.625681 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 10:48:32.630023 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:32.637613 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 10:48:32.649696 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 10:48:32.753745 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 29 10:48:32.774660 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 10:48:32.833141 systemd-networkd[1120]: lo: Link UP Jan 29 10:48:32.833157 systemd-networkd[1120]: lo: Gained carrier Jan 29 10:48:32.840394 ignition[1034]: Ignition 2.20.0 Jan 29 10:48:32.840409 ignition[1034]: Stage: fetch-offline Jan 29 10:48:32.843725 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 10:48:32.840848 ignition[1034]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:32.845243 systemd-networkd[1120]: Enumeration completed Jan 29 10:48:32.840872 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:32.846099 systemd-networkd[1120]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:48:32.841362 ignition[1034]: Ignition finished successfully Jan 29 10:48:32.846106 systemd-networkd[1120]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:48:32.850885 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 10:48:32.858111 systemd[1]: Reached target network.target - Network. Jan 29 10:48:32.861599 systemd-networkd[1120]: eth0: Link UP Jan 29 10:48:32.861607 systemd-networkd[1120]: eth0: Gained carrier Jan 29 10:48:32.861624 systemd-networkd[1120]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:48:32.924982 systemd-networkd[1120]: eth0: DHCPv4 address 172.31.20.65/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 10:48:32.925899 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 10:48:32.975107 ignition[1128]: Ignition 2.20.0 Jan 29 10:48:32.975137 ignition[1128]: Stage: fetch Jan 29 10:48:32.977070 ignition[1128]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:32.977107 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:32.978173 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:33.008087 ignition[1128]: PUT result: OK Jan 29 10:48:33.012406 ignition[1128]: parsed url from cmdline: "" Jan 29 10:48:33.012444 ignition[1128]: no config URL provided Jan 29 10:48:33.012464 ignition[1128]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 10:48:33.012490 ignition[1128]: no config at "/usr/lib/ignition/user.ign" Jan 29 10:48:33.012544 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:33.017129 ignition[1128]: PUT result: OK Jan 29 10:48:33.017207 ignition[1128]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 10:48:33.026142 ignition[1128]: GET result: OK Jan 29 10:48:33.026435 ignition[1128]: parsing config with SHA512: f4f3e0f821147830559a759ae0398d045701c1988f161572d123d8e6c601b76f0af05ade6c9e190121d855af9f3f52501672070c7e56907c100d299867cbcaad Jan 29 10:48:33.034353 unknown[1128]: fetched base config from "system" Jan 29 10:48:33.035008 ignition[1128]: fetch: fetch complete Jan 29 10:48:33.034371 unknown[1128]: fetched base config from "system" Jan 29 10:48:33.035019 ignition[1128]: fetch: fetch passed Jan 29 10:48:33.034385 unknown[1128]: fetched user config from "aws" Jan 29 10:48:33.035095 ignition[1128]: Ignition finished successfully Jan 29 10:48:33.048909 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 10:48:33.061741 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 29 10:48:33.084406 ignition[1136]: Ignition 2.20.0 Jan 29 10:48:33.084951 ignition[1136]: Stage: kargs Jan 29 10:48:33.085621 ignition[1136]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:33.085673 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:33.085846 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:33.090674 ignition[1136]: PUT result: OK Jan 29 10:48:33.100413 ignition[1136]: kargs: kargs passed Jan 29 10:48:33.100509 ignition[1136]: Ignition finished successfully Jan 29 10:48:33.106261 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 10:48:33.123672 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 10:48:33.149006 ignition[1142]: Ignition 2.20.0 Jan 29 10:48:33.149037 ignition[1142]: Stage: disks Jan 29 10:48:33.149833 ignition[1142]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:33.149872 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:33.150025 ignition[1142]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:33.152417 ignition[1142]: PUT result: OK Jan 29 10:48:33.164205 ignition[1142]: disks: disks passed Jan 29 10:48:33.164398 ignition[1142]: Ignition finished successfully Jan 29 10:48:33.170407 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 10:48:33.173282 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 10:48:33.180988 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 10:48:33.183803 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 10:48:33.186152 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:48:33.188522 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:48:33.210553 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 29 10:48:33.261488 systemd-fsck[1150]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 10:48:33.269965 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 10:48:33.281506 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 10:48:33.381376 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none. Jan 29 10:48:33.382514 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 10:48:33.387975 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 10:48:33.411460 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 10:48:33.418531 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 10:48:33.423157 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 10:48:33.423244 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 10:48:33.444860 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1169) Jan 29 10:48:33.423294 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:48:33.450690 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:33.450721 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:48:33.451887 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:48:33.459851 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 10:48:33.471374 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:48:33.472687 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 10:48:33.483531 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 10:48:33.563850 initrd-setup-root[1193]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 10:48:33.575161 initrd-setup-root[1200]: cut: /sysroot/etc/group: No such file or directory Jan 29 10:48:33.585757 initrd-setup-root[1207]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 10:48:33.594666 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 10:48:33.761583 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 10:48:33.774537 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 10:48:33.788483 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 10:48:33.807154 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 10:48:33.810959 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:33.839460 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 10:48:33.852610 ignition[1283]: INFO : Ignition 2.20.0 Jan 29 10:48:33.852610 ignition[1283]: INFO : Stage: mount Jan 29 10:48:33.857243 ignition[1283]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:33.857243 ignition[1283]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:33.857243 ignition[1283]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:33.857243 ignition[1283]: INFO : PUT result: OK Jan 29 10:48:33.870488 ignition[1283]: INFO : mount: mount passed Jan 29 10:48:33.873021 ignition[1283]: INFO : Ignition finished successfully Jan 29 10:48:33.877260 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 10:48:33.888511 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 10:48:33.908473 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 10:48:33.941378 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1294) Jan 29 10:48:33.945172 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:33.945217 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:48:33.945243 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:48:33.951354 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:48:33.954746 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 10:48:33.987004 ignition[1312]: INFO : Ignition 2.20.0 Jan 29 10:48:33.989697 ignition[1312]: INFO : Stage: files Jan 29 10:48:33.992291 ignition[1312]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:33.992291 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:33.992291 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:34.000603 ignition[1312]: INFO : PUT result: OK Jan 29 10:48:34.010924 ignition[1312]: DEBUG : files: compiled without relabeling support, skipping Jan 29 10:48:34.014128 ignition[1312]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 10:48:34.014128 ignition[1312]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 10:48:34.029717 ignition[1312]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 10:48:34.033235 ignition[1312]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 10:48:34.036903 unknown[1312]: wrote ssh authorized keys file for user: core Jan 29 10:48:34.039896 ignition[1312]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 10:48:34.049577 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 10:48:34.053378 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 10:48:34.102490 systemd-networkd[1120]: eth0: Gained IPv6LL Jan 29 10:48:34.143206 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 10:48:34.275262 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 10:48:34.280483 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 10:48:34.280483 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 29 10:48:34.750786 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 10:48:34.879045 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 10:48:34.879045 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:48:34.887899 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 29 10:48:35.295879 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 10:48:35.637040 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:48:35.644165 ignition[1312]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 10:48:35.644165 
ignition[1312]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 10:48:35.644165 ignition[1312]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 10:48:35.644165 ignition[1312]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 10:48:35.644165 ignition[1312]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 10:48:35.644165 ignition[1312]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 10:48:35.644165 ignition[1312]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:48:35.644165 ignition[1312]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:48:35.644165 ignition[1312]: INFO : files: files passed Jan 29 10:48:35.644165 ignition[1312]: INFO : Ignition finished successfully Jan 29 10:48:35.678303 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 10:48:35.688646 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 10:48:35.697953 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 10:48:35.717107 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 10:48:35.719891 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 29 10:48:35.731655 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:48:35.731655 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:48:35.740569 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:48:35.747193 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:48:35.754205 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 10:48:35.774675 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 10:48:35.834050 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 10:48:35.835395 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 10:48:35.840411 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 10:48:35.842550 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 10:48:35.844650 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 10:48:35.861528 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 10:48:35.891385 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:48:35.911703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 10:48:35.937541 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:48:35.941124 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:48:35.944663 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 10:48:35.954632 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 29 10:48:35.954860 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:48:35.961706 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 10:48:35.963909 systemd[1]: Stopped target basic.target - Basic System. Jan 29 10:48:35.968770 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 10:48:35.973682 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:48:35.983268 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 10:48:35.986224 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 10:48:35.993268 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:48:35.996398 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 10:48:36.003670 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 10:48:36.006394 systemd[1]: Stopped target swap.target - Swaps. Jan 29 10:48:36.012137 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 10:48:36.012869 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 10:48:36.019743 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:48:36.022559 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:48:36.030196 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 10:48:36.032643 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:48:36.035611 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 10:48:36.035837 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 10:48:36.044218 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 29 10:48:36.044656 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:48:36.052646 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 10:48:36.053036 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 10:48:36.066743 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 10:48:36.068879 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 10:48:36.069150 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:48:36.077941 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 10:48:36.083541 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 10:48:36.085848 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:48:36.092886 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 10:48:36.095448 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 10:48:36.108268 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 10:48:36.108502 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 10:48:36.130881 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 10:48:36.146317 ignition[1363]: INFO : Ignition 2.20.0 Jan 29 10:48:36.146317 ignition[1363]: INFO : Stage: umount Jan 29 10:48:36.146317 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:36.146317 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:36.146317 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:36.143591 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 10:48:36.161008 ignition[1363]: INFO : PUT result: OK Jan 29 10:48:36.143824 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 29 10:48:36.166898 ignition[1363]: INFO : umount: umount passed Jan 29 10:48:36.169895 ignition[1363]: INFO : Ignition finished successfully Jan 29 10:48:36.169491 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 10:48:36.170741 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 10:48:36.176015 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 10:48:36.176107 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 10:48:36.186912 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 10:48:36.187007 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 10:48:36.189109 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 10:48:36.189185 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 10:48:36.191238 systemd[1]: Stopped target network.target - Network. Jan 29 10:48:36.192984 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 10:48:36.193059 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 10:48:36.195419 systemd[1]: Stopped target paths.target - Path Units. Jan 29 10:48:36.197190 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 10:48:36.201433 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:48:36.204653 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 10:48:36.206933 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 10:48:36.209159 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 10:48:36.209234 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:48:36.211515 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 10:48:36.211584 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 10:48:36.214685 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 29 10:48:36.214764 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 10:48:36.247198 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 10:48:36.247280 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 10:48:36.249753 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 10:48:36.249827 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 10:48:36.252548 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 10:48:36.254989 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 10:48:36.283380 systemd-networkd[1120]: eth0: DHCPv6 lease lost
Jan 29 10:48:36.287052 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 10:48:36.287294 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 10:48:36.295116 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 10:48:36.295310 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 10:48:36.303428 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 10:48:36.303558 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:48:36.315482 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 10:48:36.319503 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 10:48:36.319630 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 10:48:36.331077 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 10:48:36.331180 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:48:36.333698 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 10:48:36.333777 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:48:36.337709 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 10:48:36.337783 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:48:36.342832 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:48:36.378452 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 10:48:36.378772 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:48:36.384237 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 10:48:36.384340 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:48:36.386935 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 10:48:36.387004 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:48:36.389429 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 10:48:36.389510 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 10:48:36.392076 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 10:48:36.392154 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 10:48:36.394653 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 10:48:36.394731 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:48:36.400592 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 10:48:36.402831 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 10:48:36.402956 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:48:36.405573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:48:36.405681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:48:36.412534 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 10:48:36.414878 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 10:48:36.437774 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 10:48:36.437979 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 10:48:36.473921 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 10:48:36.486599 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 10:48:36.505992 systemd[1]: Switching root.
Jan 29 10:48:36.544581 systemd-journald[251]: Journal stopped
Jan 29 10:48:38.373738 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jan 29 10:48:38.373857 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 10:48:38.373907 kernel: SELinux: policy capability open_perms=1
Jan 29 10:48:38.373939 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 10:48:38.373969 kernel: SELinux: policy capability always_check_network=0
Jan 29 10:48:38.374000 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 10:48:38.374030 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 10:48:38.374061 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 10:48:38.374094 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 10:48:38.374124 kernel: audit: type=1403 audit(1738147716.863:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 10:48:38.374208 systemd[1]: Successfully loaded SELinux policy in 50.038ms.
Jan 29 10:48:38.374265 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.769ms.
Jan 29 10:48:38.374300 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 10:48:38.374473 systemd[1]: Detected virtualization amazon.
Jan 29 10:48:38.374511 systemd[1]: Detected architecture arm64.
Jan 29 10:48:38.374542 systemd[1]: Detected first boot.
Jan 29 10:48:38.374575 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 10:48:38.374607 zram_generator::config[1405]: No configuration found.
Jan 29 10:48:38.374642 systemd[1]: Populated /etc with preset unit settings.
Jan 29 10:48:38.374678 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 10:48:38.374707 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 10:48:38.374738 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 10:48:38.374771 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 10:48:38.374800 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 10:48:38.374831 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 10:48:38.374862 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 10:48:38.374893 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 10:48:38.374924 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 10:48:38.374960 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 10:48:38.375003 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 10:48:38.375036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:48:38.375066 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:48:38.375097 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 10:48:38.375126 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 10:48:38.375155 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 10:48:38.375186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 10:48:38.375217 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 10:48:38.375254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:48:38.375283 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 10:48:38.375346 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 10:48:38.375382 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 10:48:38.375412 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 10:48:38.375443 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:48:38.375475 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 10:48:38.375510 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 10:48:38.375541 systemd[1]: Reached target swap.target - Swaps.
Jan 29 10:48:38.375570 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 10:48:38.375601 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 10:48:38.375632 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:48:38.375661 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:48:38.375695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:48:38.375724 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 10:48:38.375755 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 10:48:38.377438 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 10:48:38.377506 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 10:48:38.377539 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 10:48:38.377569 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 10:48:38.377601 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 10:48:38.377631 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 10:48:38.377663 systemd[1]: Reached target machines.target - Containers.
Jan 29 10:48:38.377692 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 10:48:38.377721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:48:38.377755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 10:48:38.377784 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 10:48:38.377815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:48:38.377846 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 10:48:38.377875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:48:38.377907 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 10:48:38.377936 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:48:38.379384 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 10:48:38.379430 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 10:48:38.379460 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 10:48:38.379490 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 10:48:38.379518 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 10:48:38.379549 kernel: fuse: init (API version 7.39)
Jan 29 10:48:38.379578 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 10:48:38.379608 kernel: loop: module loaded
Jan 29 10:48:38.379636 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 10:48:38.379665 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 10:48:38.379697 kernel: ACPI: bus type drm_connector registered
Jan 29 10:48:38.379728 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 10:48:38.379757 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 10:48:38.379787 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 10:48:38.379816 systemd[1]: Stopped verity-setup.service.
Jan 29 10:48:38.379845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 10:48:38.379874 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 10:48:38.379902 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 10:48:38.379988 systemd-journald[1497]: Collecting audit messages is disabled.
Jan 29 10:48:38.380041 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 10:48:38.380072 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 10:48:38.380102 systemd-journald[1497]: Journal started
Jan 29 10:48:38.380161 systemd-journald[1497]: Runtime Journal (/run/log/journal/ec20c97f6bd38c5e76598c1c7e1741a4) is 8.0M, max 75.3M, 67.3M free.
Jan 29 10:48:37.821861 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 10:48:37.849971 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 29 10:48:37.850758 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 10:48:38.388310 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 10:48:38.391046 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 10:48:38.395110 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 10:48:38.400746 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:48:38.406737 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 10:48:38.407053 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 10:48:38.412774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:48:38.413070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:48:38.418622 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 10:48:38.418920 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 10:48:38.424208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:48:38.424678 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:48:38.430312 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 10:48:38.430676 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 10:48:38.435946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:48:38.436241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:48:38.441483 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:48:38.446961 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 10:48:38.452919 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 10:48:38.480434 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 10:48:38.491552 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 10:48:38.506545 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 10:48:38.511458 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 10:48:38.511527 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 10:48:38.519499 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 10:48:38.532707 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 10:48:38.549232 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 10:48:38.553840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:48:38.556748 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 10:48:38.568070 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 10:48:38.575524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 10:48:38.586633 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 10:48:38.591208 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 10:48:38.594934 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:48:38.603865 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 10:48:38.614750 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 10:48:38.624420 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:48:38.626477 systemd-journald[1497]: Time spent on flushing to /var/log/journal/ec20c97f6bd38c5e76598c1c7e1741a4 is 156.835ms for 907 entries.
Jan 29 10:48:38.626477 systemd-journald[1497]: System Journal (/var/log/journal/ec20c97f6bd38c5e76598c1c7e1741a4) is 8.0M, max 195.6M, 187.6M free.
Jan 29 10:48:38.796211 systemd-journald[1497]: Received client request to flush runtime journal.
Jan 29 10:48:38.797646 kernel: loop0: detected capacity change from 0 to 113552
Jan 29 10:48:38.631867 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 10:48:38.641821 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 10:48:38.648011 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 10:48:38.653984 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 10:48:38.668778 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 10:48:38.684594 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 10:48:38.695645 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 10:48:38.743423 udevadm[1544]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 10:48:38.768303 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:48:38.777441 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 10:48:38.785013 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 10:48:38.811619 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 10:48:38.818458 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 10:48:38.839346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 10:48:38.850374 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 10:48:38.878371 kernel: loop1: detected capacity change from 0 to 194096
Jan 29 10:48:38.909366 systemd-tmpfiles[1553]: ACLs are not supported, ignoring.
Jan 29 10:48:38.909404 systemd-tmpfiles[1553]: ACLs are not supported, ignoring.
Jan 29 10:48:38.928470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:48:39.033375 kernel: loop2: detected capacity change from 0 to 53784
Jan 29 10:48:39.177422 kernel: loop3: detected capacity change from 0 to 116784
Jan 29 10:48:39.240397 kernel: loop4: detected capacity change from 0 to 113552
Jan 29 10:48:39.269627 kernel: loop5: detected capacity change from 0 to 194096
Jan 29 10:48:39.313497 kernel: loop6: detected capacity change from 0 to 53784
Jan 29 10:48:39.333386 kernel: loop7: detected capacity change from 0 to 116784
Jan 29 10:48:39.356957 (sd-merge)[1559]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 29 10:48:39.359090 (sd-merge)[1559]: Merged extensions into '/usr'.
Jan 29 10:48:39.371424 systemd[1]: Reloading requested from client PID 1534 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 10:48:39.371454 systemd[1]: Reloading...
Jan 29 10:48:39.491014 ldconfig[1529]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 10:48:39.559373 zram_generator::config[1585]: No configuration found.
Jan 29 10:48:39.829849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:48:39.939763 systemd[1]: Reloading finished in 567 ms.
Jan 29 10:48:39.983376 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 10:48:39.986675 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 10:48:39.989955 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 10:48:40.006679 systemd[1]: Starting ensure-sysext.service...
Jan 29 10:48:40.020133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 10:48:40.026702 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:48:40.051434 systemd[1]: Reloading requested from client PID 1638 ('systemctl') (unit ensure-sysext.service)...
Jan 29 10:48:40.051480 systemd[1]: Reloading...
Jan 29 10:48:40.066949 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 10:48:40.069937 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 10:48:40.071951 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 10:48:40.072647 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Jan 29 10:48:40.072884 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Jan 29 10:48:40.079889 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 10:48:40.080117 systemd-tmpfiles[1639]: Skipping /boot
Jan 29 10:48:40.102290 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 10:48:40.102520 systemd-tmpfiles[1639]: Skipping /boot
Jan 29 10:48:40.150892 systemd-udevd[1640]: Using default interface naming scheme 'v255'.
Jan 29 10:48:40.291372 zram_generator::config[1670]: No configuration found.
Jan 29 10:48:40.356545 (udev-worker)[1676]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 10:48:40.639509 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:48:40.651352 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1676)
Jan 29 10:48:40.783917 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 10:48:40.784924 systemd[1]: Reloading finished in 732 ms.
Jan 29 10:48:40.819140 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:48:40.824904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:48:40.909966 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 10:48:40.916813 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 10:48:40.921834 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:48:40.927907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:48:40.942963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:48:40.951599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:48:40.960834 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:48:40.969861 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 10:48:40.982392 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 10:48:40.994852 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 10:48:41.008963 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 10:48:41.027284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:48:41.035158 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 10:48:41.041491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:48:41.041808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:48:41.047779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:48:41.048101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:48:41.056158 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:48:41.056798 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:48:41.068721 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 29 10:48:41.095230 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 10:48:41.110904 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 10:48:41.131507 systemd[1]: Finished ensure-sysext.service.
Jan 29 10:48:41.133201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:48:41.140551 augenrules[1870]: No rules
Jan 29 10:48:41.142616 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 10:48:41.149858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:48:41.154576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 10:48:41.161416 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:48:41.168892 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:48:41.169236 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:48:41.182685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 10:48:41.182894 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 10:48:41.189250 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 10:48:41.196650 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 10:48:41.197705 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 10:48:41.199456 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 10:48:41.209190 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 10:48:41.212851 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 10:48:41.227822 lvm[1871]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 10:48:41.244842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:48:41.245466 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:48:41.255791 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:48:41.257446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:48:41.258042 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 10:48:41.269675 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 10:48:41.270234 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 10:48:41.286965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:48:41.287270 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:48:41.287973 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 10:48:41.290442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 10:48:41.312451 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 10:48:41.322501 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 10:48:41.323049 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:48:41.336598 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 10:48:41.349419 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:48:41.355718 lvm[1895]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 10:48:41.390928 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 10:48:41.403273 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 10:48:41.501804 systemd-networkd[1844]: lo: Link UP
Jan 29 10:48:41.501825 systemd-networkd[1844]: lo: Gained carrier
Jan 29 10:48:41.504669 systemd-networkd[1844]: Enumeration completed
Jan 29 10:48:41.505036 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 10:48:41.509786 systemd-networkd[1844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:48:41.509807 systemd-networkd[1844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 10:48:41.511834 systemd-networkd[1844]: eth0: Link UP
Jan 29 10:48:41.512141 systemd-networkd[1844]: eth0: Gained carrier
Jan 29 10:48:41.512202 systemd-networkd[1844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:48:41.519840 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 10:48:41.524607 systemd-networkd[1844]: eth0: DHCPv4 address 172.31.20.65/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 29 10:48:41.525109 systemd-resolved[1846]: Positive Trust Anchors:
Jan 29 10:48:41.525130 systemd-resolved[1846]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 10:48:41.525191 systemd-resolved[1846]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 10:48:41.536275 systemd-resolved[1846]: Defaulting to hostname 'linux'.
Jan 29 10:48:41.542524 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 10:48:41.546916 systemd[1]: Reached target network.target - Network. Jan 29 10:48:41.550613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:48:41.554935 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:48:41.558191 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 10:48:41.562035 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 10:48:41.566180 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 10:48:41.569464 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 10:48:41.571936 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 10:48:41.574414 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 10:48:41.574467 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:48:41.576249 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:48:41.579691 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 10:48:41.586733 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 10:48:41.594499 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 10:48:41.598136 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 10:48:41.601051 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 10:48:41.603291 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:48:41.605477 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 29 10:48:41.605526 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 10:48:41.609532 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 10:48:41.622643 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 10:48:41.630197 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 10:48:41.637581 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 10:48:41.646748 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 10:48:41.649734 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 10:48:41.661846 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 10:48:41.667250 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 10:48:41.683782 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 10:48:41.692748 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 10:48:41.705744 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 10:48:41.721062 jq[1911]: false Jan 29 10:48:41.722715 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 10:48:41.736792 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 10:48:41.740180 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 10:48:41.741053 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 10:48:41.745653 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 29 10:48:41.750586 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 10:48:41.758062 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 10:48:41.760425 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 10:48:41.790361 dbus-daemon[1910]: [system] SELinux support is enabled Jan 29 10:48:41.797421 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 10:48:41.809161 dbus-daemon[1910]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1844 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 10:48:41.814104 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 10:48:41.814597 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 10:48:41.819405 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 10:48:41.819460 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 10:48:41.825942 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 10:48:41.832189 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 10:48:41.825981 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 10:48:41.841243 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 29 10:48:41.866360 jq[1926]: true Jan 29 10:48:41.899501 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 10:48:41.899883 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 10:48:41.914943 tar[1930]: linux-arm64/helm Jan 29 10:48:41.932158 update_engine[1924]: I20250129 10:48:41.925889 1924 main.cc:92] Flatcar Update Engine starting Jan 29 10:48:41.932788 extend-filesystems[1912]: Found loop4 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found loop5 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found loop6 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found loop7 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found nvme0n1 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found nvme0n1p1 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found nvme0n1p2 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found nvme0n1p3 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found usr Jan 29 10:48:41.932788 extend-filesystems[1912]: Found nvme0n1p4 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found nvme0n1p6 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found nvme0n1p7 Jan 29 10:48:41.932788 extend-filesystems[1912]: Found nvme0n1p9 Jan 29 10:48:41.932788 extend-filesystems[1912]: Checking size of /dev/nvme0n1p9 Jan 29 10:48:41.952964 systemd[1]: Started update-engine.service - Update Engine. 
Jan 29 10:48:42.046646 update_engine[1924]: I20250129 10:48:41.961616 1924 update_check_scheduler.cc:74] Next update check in 8m25s Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetch successful Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetch successful Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetch successful Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetch successful Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetch failed with 404: resource not found Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetch successful Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:41.999 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:42.000 INFO Fetch successful Jan 29 10:48:42.046707 
coreos-metadata[1909]: Jan 29 10:48:42.000 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:42.005 INFO Fetch successful Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:42.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:42.005 INFO Fetch successful Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:42.005 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 10:48:42.046707 coreos-metadata[1909]: Jan 29 10:48:42.005 INFO Fetch successful Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:41 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:00:26 UTC 2025 (1): Starting Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:41 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:41 ntpd[1914]: ---------------------------------------------------- Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:41 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:41 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:41 ntpd[1914]: corporation. 
Support and training for ntp-4 are Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:41 ntpd[1914]: available at https://www.nwtime.org/support Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:41 ntpd[1914]: ---------------------------------------------------- Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: proto: precision = 0.096 usec (-23) Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: basedate set to 2025-01-17 Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: gps base set to 2025-01-19 (week 2350) Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 10:48:42.047640 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: Listen normally on 3 eth0 172.31.20.65:123 Jan 29 10:48:41.993777 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:00:26 UTC 2025 (1): Starting Jan 29 10:48:41.964273 (ntainerd)[1944]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 10:48:42.082739 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: Listen normally on 4 lo [::1]:123 Jan 29 10:48:42.082739 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: bind(21) AF_INET6 fe80::4c3:51ff:fe1c:1fe7%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:48:42.082739 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: unable to create socket on eth0 (5) for fe80::4c3:51ff:fe1c:1fe7%2#123 Jan 29 10:48:42.082739 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: failed to init interface for address fe80::4c3:51ff:fe1c:1fe7%2 Jan 29 10:48:42.082739 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Jan 29 10:48:41.993823 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 10:48:42.083040 jq[1946]: true Jan 
29 10:48:41.966501 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 10:48:41.993843 ntpd[1914]: ---------------------------------------------------- Jan 29 10:48:41.993861 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Jan 29 10:48:41.993879 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 10:48:41.993897 ntpd[1914]: corporation. Support and training for ntp-4 are Jan 29 10:48:41.993919 ntpd[1914]: available at https://www.nwtime.org/support Jan 29 10:48:42.102641 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:48:42.102641 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:48:41.993937 ntpd[1914]: ---------------------------------------------------- Jan 29 10:48:42.008597 ntpd[1914]: proto: precision = 0.096 usec (-23) Jan 29 10:48:42.025923 ntpd[1914]: basedate set to 2025-01-17 Jan 29 10:48:42.025954 ntpd[1914]: gps base set to 2025-01-19 (week 2350) Jan 29 10:48:42.043875 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 10:48:42.043948 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 10:48:42.044193 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 10:48:42.044252 ntpd[1914]: Listen normally on 3 eth0 172.31.20.65:123 Jan 29 10:48:42.052308 ntpd[1914]: Listen normally on 4 lo [::1]:123 Jan 29 10:48:42.052683 ntpd[1914]: bind(21) AF_INET6 fe80::4c3:51ff:fe1c:1fe7%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:48:42.052724 ntpd[1914]: unable to create socket on eth0 (5) for fe80::4c3:51ff:fe1c:1fe7%2#123 Jan 29 10:48:42.052751 ntpd[1914]: failed to init interface for address fe80::4c3:51ff:fe1c:1fe7%2 Jan 29 10:48:42.110298 extend-filesystems[1912]: Resized partition /dev/nvme0n1p9 Jan 29 10:48:42.052810 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Jan 29 10:48:42.122608 extend-filesystems[1969]: resize2fs 1.47.1 
(20-May-2024) Jan 29 10:48:42.135079 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 10:48:42.087765 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:48:42.124810 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 10:48:42.087818 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:48:42.203635 systemd-logind[1923]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 10:48:42.203670 systemd-logind[1923]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 29 10:48:42.204522 systemd-logind[1923]: New seat seat0. Jan 29 10:48:42.206585 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 10:48:42.217250 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 10:48:42.222617 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 10:48:42.248726 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 10:48:42.262418 extend-filesystems[1969]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 10:48:42.262418 extend-filesystems[1969]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 10:48:42.262418 extend-filesystems[1969]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 10:48:42.271946 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 10:48:42.281743 bash[1985]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:48:42.281915 extend-filesystems[1912]: Resized filesystem in /dev/nvme0n1p9 Jan 29 10:48:42.272366 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 10:48:42.289208 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 10:48:42.312740 systemd[1]: Starting sshkeys.service... 
Jan 29 10:48:42.379455 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 10:48:42.397147 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 10:48:42.419882 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 10:48:42.452629 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 10:48:42.453034 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 10:48:42.460157 dbus-daemon[1910]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1942 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 10:48:42.487351 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1685) Jan 29 10:48:42.510582 locksmithd[1953]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 10:48:42.511656 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 10:48:42.546281 polkitd[2005]: Started polkitd version 121 Jan 29 10:48:42.571711 polkitd[2005]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 10:48:42.571828 polkitd[2005]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 10:48:42.574757 polkitd[2005]: Finished loading, compiling and executing 2 rules Jan 29 10:48:42.577297 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 10:48:42.577599 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 10:48:42.578395 polkitd[2005]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 10:48:42.651596 systemd-hostnamed[1942]: Hostname set to (transient) Jan 29 10:48:42.654496 systemd-resolved[1846]: System hostname changed to 'ip-172-31-20-65'. 
Jan 29 10:48:42.786991 coreos-metadata[1995]: Jan 29 10:48:42.786 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 10:48:42.796810 coreos-metadata[1995]: Jan 29 10:48:42.793 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 10:48:42.796810 coreos-metadata[1995]: Jan 29 10:48:42.796 INFO Fetch successful Jan 29 10:48:42.796810 coreos-metadata[1995]: Jan 29 10:48:42.796 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 10:48:42.801864 coreos-metadata[1995]: Jan 29 10:48:42.797 INFO Fetch successful Jan 29 10:48:42.808969 unknown[1995]: wrote ssh authorized keys file for user: core Jan 29 10:48:42.880763 update-ssh-keys[2102]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:48:42.882904 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 10:48:42.895721 systemd[1]: Finished sshkeys.service. Jan 29 10:48:42.922164 containerd[1944]: time="2025-01-29T10:48:42.920988971Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 10:48:42.996674 ntpd[1914]: bind(24) AF_INET6 fe80::4c3:51ff:fe1c:1fe7%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:48:42.997354 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: bind(24) AF_INET6 fe80::4c3:51ff:fe1c:1fe7%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:48:42.997354 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: unable to create socket on eth0 (6) for fe80::4c3:51ff:fe1c:1fe7%2#123 Jan 29 10:48:42.997354 ntpd[1914]: 29 Jan 10:48:42 ntpd[1914]: failed to init interface for address fe80::4c3:51ff:fe1c:1fe7%2 Jan 29 10:48:42.996735 ntpd[1914]: unable to create socket on eth0 (6) for fe80::4c3:51ff:fe1c:1fe7%2#123 Jan 29 10:48:42.996763 ntpd[1914]: failed to init interface for address fe80::4c3:51ff:fe1c:1fe7%2 Jan 29 10:48:43.059425 containerd[1944]: time="2025-01-29T10:48:43.057112976Z" 
level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:43.065741 containerd[1944]: time="2025-01-29T10:48:43.065663360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:43.065741 containerd[1944]: time="2025-01-29T10:48:43.065731280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 10:48:43.065922 containerd[1944]: time="2025-01-29T10:48:43.065771540Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 10:48:43.066100 containerd[1944]: time="2025-01-29T10:48:43.066058004Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 10:48:43.066156 containerd[1944]: time="2025-01-29T10:48:43.066105896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:43.066269 containerd[1944]: time="2025-01-29T10:48:43.066228632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:43.066352 containerd[1944]: time="2025-01-29T10:48:43.066266048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:43.066624 containerd[1944]: time="2025-01-29T10:48:43.066577352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:43.066680 containerd[1944]: time="2025-01-29T10:48:43.066620168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:43.066680 containerd[1944]: time="2025-01-29T10:48:43.066656816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:43.066760 containerd[1944]: time="2025-01-29T10:48:43.066681236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:43.066882 containerd[1944]: time="2025-01-29T10:48:43.066843716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:43.072089 containerd[1944]: time="2025-01-29T10:48:43.072025244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:43.073734 containerd[1944]: time="2025-01-29T10:48:43.073678412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:43.073815 containerd[1944]: time="2025-01-29T10:48:43.073730348Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 10:48:43.073988 containerd[1944]: time="2025-01-29T10:48:43.073945904Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 10:48:43.074097 containerd[1944]: time="2025-01-29T10:48:43.074059292Z" level=info msg="metadata content store policy set" policy=shared Jan 29 10:48:43.081512 containerd[1944]: time="2025-01-29T10:48:43.081444824Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 10:48:43.081632 containerd[1944]: time="2025-01-29T10:48:43.081544448Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 10:48:43.081632 containerd[1944]: time="2025-01-29T10:48:43.081582680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 10:48:43.081632 containerd[1944]: time="2025-01-29T10:48:43.081618968Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 10:48:43.081751 containerd[1944]: time="2025-01-29T10:48:43.081652652Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 10:48:43.081943 containerd[1944]: time="2025-01-29T10:48:43.081901436Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 10:48:43.082317 containerd[1944]: time="2025-01-29T10:48:43.082276760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 10:48:43.084603 containerd[1944]: time="2025-01-29T10:48:43.084556208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 10:48:43.084681 containerd[1944]: time="2025-01-29T10:48:43.084610652Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 10:48:43.084681 containerd[1944]: time="2025-01-29T10:48:43.084647516Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 29 10:48:43.084786 containerd[1944]: time="2025-01-29T10:48:43.084683108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 10:48:43.084786 containerd[1944]: time="2025-01-29T10:48:43.084714128Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 10:48:43.084786 containerd[1944]: time="2025-01-29T10:48:43.084743228Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 10:48:43.084786 containerd[1944]: time="2025-01-29T10:48:43.084774176Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 10:48:43.084939 containerd[1944]: time="2025-01-29T10:48:43.084806552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 10:48:43.084939 containerd[1944]: time="2025-01-29T10:48:43.084835976Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 10:48:43.084939 containerd[1944]: time="2025-01-29T10:48:43.084863756Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 10:48:43.084939 containerd[1944]: time="2025-01-29T10:48:43.084889976Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 10:48:43.084939 containerd[1944]: time="2025-01-29T10:48:43.084930080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085135 containerd[1944]: time="2025-01-29T10:48:43.084960884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 29 10:48:43.085135 containerd[1944]: time="2025-01-29T10:48:43.085001948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085135 containerd[1944]: time="2025-01-29T10:48:43.085033004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085135 containerd[1944]: time="2025-01-29T10:48:43.085061756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085135 containerd[1944]: time="2025-01-29T10:48:43.085091564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085135 containerd[1944]: time="2025-01-29T10:48:43.085122188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085413 containerd[1944]: time="2025-01-29T10:48:43.085152428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085413 containerd[1944]: time="2025-01-29T10:48:43.085182992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085413 containerd[1944]: time="2025-01-29T10:48:43.085216172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085413 containerd[1944]: time="2025-01-29T10:48:43.085243364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085413 containerd[1944]: time="2025-01-29T10:48:43.085270760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.085413 containerd[1944]: time="2025-01-29T10:48:43.085300028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.086782736Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.086849288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.086886080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.086915624Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.087053468Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.087095288Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.087120548Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.087149756Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.087172376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.087201560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.087226640Z" level=info msg="NRI interface is disabled by configuration." Jan 29 10:48:43.088347 containerd[1944]: time="2025-01-29T10:48:43.087252500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 10:48:43.088861 containerd[1944]: time="2025-01-29T10:48:43.088772780Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 10:48:43.089083 containerd[1944]: time="2025-01-29T10:48:43.088868216Z" level=info msg="Connect containerd service" Jan 29 10:48:43.089083 containerd[1944]: time="2025-01-29T10:48:43.088932920Z" level=info msg="using legacy CRI server" Jan 29 10:48:43.089083 containerd[1944]: time="2025-01-29T10:48:43.088951880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 10:48:43.089215 containerd[1944]: time="2025-01-29T10:48:43.089183636Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 10:48:43.093252 containerd[1944]: time="2025-01-29T10:48:43.092274140Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:48:43.094296 containerd[1944]: time="2025-01-29T10:48:43.094229348Z" level=info msg="Start subscribing containerd event" Jan 29 
10:48:43.094409 containerd[1944]: time="2025-01-29T10:48:43.094315352Z" level=info msg="Start recovering state" Jan 29 10:48:43.094499 containerd[1944]: time="2025-01-29T10:48:43.094462412Z" level=info msg="Start event monitor" Jan 29 10:48:43.094610 containerd[1944]: time="2025-01-29T10:48:43.094494488Z" level=info msg="Start snapshots syncer" Jan 29 10:48:43.094610 containerd[1944]: time="2025-01-29T10:48:43.094519004Z" level=info msg="Start cni network conf syncer for default" Jan 29 10:48:43.094610 containerd[1944]: time="2025-01-29T10:48:43.094538696Z" level=info msg="Start streaming server" Jan 29 10:48:43.095950 containerd[1944]: time="2025-01-29T10:48:43.095901164Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 10:48:43.096930 containerd[1944]: time="2025-01-29T10:48:43.096800468Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 10:48:43.097197 containerd[1944]: time="2025-01-29T10:48:43.097079132Z" level=info msg="containerd successfully booted in 0.180243s" Jan 29 10:48:43.097195 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 10:48:43.382532 systemd-networkd[1844]: eth0: Gained IPv6LL Jan 29 10:48:43.392439 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 10:48:43.399639 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 10:48:43.419797 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 10:48:43.431605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:48:43.446951 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 10:48:43.488801 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 10:48:43.517974 tar[1930]: linux-arm64/LICENSE Jan 29 10:48:43.517974 tar[1930]: linux-arm64/README.md Jan 29 10:48:43.559464 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
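The CRI plugin dump a few entries above (Snapshotter:overlayfs, default runtime runc with `SystemdCgroup:true`, SandboxImage `registry.k8s.io/pause:3.8`, CNI under `/opt/cni/bin` and `/etc/cni/net.d`) corresponds to a containerd `config.toml` along these lines. This is a minimal illustrative sketch reconstructed from the logged values, not the node's actual file:

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```

The `failed to load cni during init` error logged just above is consistent with `/etc/cni/net.d` being empty at this point; the CRI plugin tolerates that at startup and retries via its conf syncer once a CNI plugin installs a network config.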
Jan 29 10:48:43.570486 amazon-ssm-agent[2115]: Initializing new seelog logger Jan 29 10:48:43.572210 amazon-ssm-agent[2115]: New Seelog Logger Creation Complete Jan 29 10:48:43.572210 amazon-ssm-agent[2115]: 2025/01/29 10:48:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:43.572210 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:43.572210 amazon-ssm-agent[2115]: 2025/01/29 10:48:43 processing appconfig overrides Jan 29 10:48:43.572739 amazon-ssm-agent[2115]: 2025/01/29 10:48:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:43.572841 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:43.574200 amazon-ssm-agent[2115]: 2025/01/29 10:48:43 processing appconfig overrides Jan 29 10:48:43.574657 amazon-ssm-agent[2115]: 2025/01/29 10:48:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:43.574743 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:43.574918 amazon-ssm-agent[2115]: 2025/01/29 10:48:43 processing appconfig overrides Jan 29 10:48:43.577367 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO Proxy environment variables: Jan 29 10:48:43.581924 amazon-ssm-agent[2115]: 2025/01/29 10:48:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:43.584361 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 29 10:48:43.584361 amazon-ssm-agent[2115]: 2025/01/29 10:48:43 processing appconfig overrides Jan 29 10:48:43.676597 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO https_proxy: Jan 29 10:48:43.774377 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO http_proxy: Jan 29 10:48:43.833493 sshd_keygen[1949]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 10:48:43.872543 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO no_proxy: Jan 29 10:48:43.888436 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 10:48:43.903876 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 10:48:43.919860 systemd[1]: Started sshd@0-172.31.20.65:22-139.178.89.65:37098.service - OpenSSH per-connection server daemon (139.178.89.65:37098). Jan 29 10:48:43.952624 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 10:48:43.954678 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 10:48:43.971023 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 10:48:43.981967 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO Checking if agent identity type OnPrem can be assumed Jan 29 10:48:44.015156 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 10:48:44.029143 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 10:48:44.050941 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 10:48:44.057194 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 29 10:48:44.084402 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO Checking if agent identity type EC2 can be assumed Jan 29 10:48:44.163030 sshd[2145]: Accepted publickey for core from 139.178.89.65 port 37098 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:44.170106 sshd-session[2145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:44.187082 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO Agent will take identity from EC2 Jan 29 10:48:44.194589 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 10:48:44.203001 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 10:48:44.214295 systemd-logind[1923]: New session 1 of user core. Jan 29 10:48:44.240654 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 10:48:44.258053 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 10:48:44.285145 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:48:44.287337 (systemd)[2156]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO [Registrar] Starting registrar module Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:43 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:44 INFO [EC2Identity] EC2 registration was successful. Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:44 INFO [CredentialRefresher] credentialRefresher has started Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:44 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 10:48:44.290306 amazon-ssm-agent[2115]: 2025-01-29 10:48:44 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 10:48:44.383650 amazon-ssm-agent[2115]: 2025-01-29 10:48:44 INFO [CredentialRefresher] Next credential rotation will be in 30.1249309356 minutes Jan 29 10:48:44.509698 systemd[2156]: Queued start job for default target default.target. Jan 29 10:48:44.521652 systemd[2156]: Created slice app.slice - User Application Slice. Jan 29 10:48:44.521719 systemd[2156]: Reached target paths.target - Paths. Jan 29 10:48:44.521752 systemd[2156]: Reached target timers.target - Timers. Jan 29 10:48:44.524258 systemd[2156]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 10:48:44.564586 systemd[2156]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 10:48:44.564699 systemd[2156]: Reached target sockets.target - Sockets. Jan 29 10:48:44.564732 systemd[2156]: Reached target basic.target - Basic System. Jan 29 10:48:44.564813 systemd[2156]: Reached target default.target - Main User Target. Jan 29 10:48:44.564873 systemd[2156]: Startup finished in 264ms. Jan 29 10:48:44.565613 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 10:48:44.576665 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 29 10:48:44.741040 systemd[1]: Started sshd@1-172.31.20.65:22-139.178.89.65:37106.service - OpenSSH per-connection server daemon (139.178.89.65:37106). Jan 29 10:48:44.930373 sshd[2167]: Accepted publickey for core from 139.178.89.65 port 37106 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:44.932821 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:44.942864 systemd-logind[1923]: New session 2 of user core. Jan 29 10:48:44.949581 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 10:48:45.081792 sshd[2169]: Connection closed by 139.178.89.65 port 37106 Jan 29 10:48:45.081670 sshd-session[2167]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:45.088220 systemd[1]: sshd@1-172.31.20.65:22-139.178.89.65:37106.service: Deactivated successfully. Jan 29 10:48:45.093277 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 10:48:45.096206 systemd-logind[1923]: Session 2 logged out. Waiting for processes to exit. Jan 29 10:48:45.098432 systemd-logind[1923]: Removed session 2. Jan 29 10:48:45.122471 systemd[1]: Started sshd@2-172.31.20.65:22-139.178.89.65:37110.service - OpenSSH per-connection server daemon (139.178.89.65:37110). Jan 29 10:48:45.312664 sshd[2174]: Accepted publickey for core from 139.178.89.65 port 37110 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:45.314843 sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:45.320586 amazon-ssm-agent[2115]: 2025-01-29 10:48:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 10:48:45.330200 systemd-logind[1923]: New session 3 of user core. Jan 29 10:48:45.334655 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 29 10:48:45.421713 amazon-ssm-agent[2115]: 2025-01-29 10:48:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2177) started Jan 29 10:48:45.475861 sshd[2181]: Connection closed by 139.178.89.65 port 37110 Jan 29 10:48:45.477616 sshd-session[2174]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:45.484577 systemd[1]: sshd@2-172.31.20.65:22-139.178.89.65:37110.service: Deactivated successfully. Jan 29 10:48:45.490305 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 10:48:45.494761 systemd-logind[1923]: Session 3 logged out. Waiting for processes to exit. Jan 29 10:48:45.498576 systemd-logind[1923]: Removed session 3. Jan 29 10:48:45.522553 amazon-ssm-agent[2115]: 2025-01-29 10:48:45 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 10:48:45.685641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:48:45.689300 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 10:48:45.692188 systemd[1]: Startup finished in 1.061s (kernel) + 8.082s (initrd) + 8.876s (userspace) = 18.020s. 
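The `Startup finished` summary above splits boot time into kernel, initrd, and userspace phases. Each phase is reported rounded to the millisecond, which is why the printed figures sum to 18.019s against the reported 18.020s total. A quick sanity check on those numbers:

```python
# Boot-phase durations as printed by systemd above (each already
# rounded to the millisecond, so the sum can drift 1 ms from the
# total systemd computes from the unrounded values).
phases = {"kernel": 1.061, "initrd": 8.082, "userspace": 8.876}
total = round(sum(phases.values()), 3)
```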
Jan 29 10:48:45.699877 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:48:45.719437 agetty[2152]: failed to open credentials directory Jan 29 10:48:45.719484 agetty[2153]: failed to open credentials directory Jan 29 10:48:45.994664 ntpd[1914]: Listen normally on 7 eth0 [fe80::4c3:51ff:fe1c:1fe7%2]:123 Jan 29 10:48:45.996338 ntpd[1914]: 29 Jan 10:48:45 ntpd[1914]: Listen normally on 7 eth0 [fe80::4c3:51ff:fe1c:1fe7%2]:123 Jan 29 10:48:46.992682 kubelet[2196]: E0129 10:48:46.992557 2196 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:48:46.997152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:48:46.997599 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:48:46.998397 systemd[1]: kubelet.service: Consumed 1.308s CPU time. Jan 29 10:48:48.850106 systemd-resolved[1846]: Clock change detected. Flushing caches. Jan 29 10:48:55.374299 systemd[1]: Started sshd@3-172.31.20.65:22-139.178.89.65:56874.service - OpenSSH per-connection server daemon (139.178.89.65:56874). Jan 29 10:48:55.554940 sshd[2209]: Accepted publickey for core from 139.178.89.65 port 56874 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:55.557316 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:55.564457 systemd-logind[1923]: New session 4 of user core. Jan 29 10:48:55.573109 systemd[1]: Started session-4.scope - Session 4 of User core. 
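The kubelet exit above is the expected pre-bootstrap state: it refuses to start because `/var/lib/kubelet/config.yaml` does not exist yet. On a kubeadm-managed node that file is written during `kubeadm init`/`kubeadm join`, after which the restart that systemd schedules (visible later in this log) succeeds. A minimal sketch of the failing precondition, with an illustrative helper name not taken from the kubelet source:

```python
from pathlib import Path

# Path the kubelet is pointed at via --config; until kubeadm writes it,
# every start exits with status 1 and systemd re-triggers the unit
# according to its Restart= policy.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present(path: Path = KUBELET_CONFIG) -> bool:
    """Mirror the startup check that fails above: the config file must exist."""
    return path.is_file()
```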
Jan 29 10:48:55.699102 sshd[2211]: Connection closed by 139.178.89.65 port 56874 Jan 29 10:48:55.699938 sshd-session[2209]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:55.705197 systemd[1]: sshd@3-172.31.20.65:22-139.178.89.65:56874.service: Deactivated successfully. Jan 29 10:48:55.707821 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 10:48:55.711617 systemd-logind[1923]: Session 4 logged out. Waiting for processes to exit. Jan 29 10:48:55.713540 systemd-logind[1923]: Removed session 4. Jan 29 10:48:55.746305 systemd[1]: Started sshd@4-172.31.20.65:22-139.178.89.65:56886.service - OpenSSH per-connection server daemon (139.178.89.65:56886). Jan 29 10:48:55.923609 sshd[2216]: Accepted publickey for core from 139.178.89.65 port 56886 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:55.925996 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:55.933212 systemd-logind[1923]: New session 5 of user core. Jan 29 10:48:55.946086 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 10:48:56.064470 sshd[2218]: Connection closed by 139.178.89.65 port 56886 Jan 29 10:48:56.065322 sshd-session[2216]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:56.071073 systemd[1]: sshd@4-172.31.20.65:22-139.178.89.65:56886.service: Deactivated successfully. Jan 29 10:48:56.075214 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 10:48:56.076703 systemd-logind[1923]: Session 5 logged out. Waiting for processes to exit. Jan 29 10:48:56.078373 systemd-logind[1923]: Removed session 5. Jan 29 10:48:56.106367 systemd[1]: Started sshd@5-172.31.20.65:22-139.178.89.65:56894.service - OpenSSH per-connection server daemon (139.178.89.65:56894). 
Jan 29 10:48:56.289195 sshd[2223]: Accepted publickey for core from 139.178.89.65 port 56894 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:56.291523 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:56.300936 systemd-logind[1923]: New session 6 of user core. Jan 29 10:48:56.307130 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 10:48:56.434750 sshd[2225]: Connection closed by 139.178.89.65 port 56894 Jan 29 10:48:56.436110 sshd-session[2223]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:56.441478 systemd[1]: sshd@5-172.31.20.65:22-139.178.89.65:56894.service: Deactivated successfully. Jan 29 10:48:56.445114 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 10:48:56.447035 systemd-logind[1923]: Session 6 logged out. Waiting for processes to exit. Jan 29 10:48:56.449288 systemd-logind[1923]: Removed session 6. Jan 29 10:48:56.473371 systemd[1]: Started sshd@6-172.31.20.65:22-139.178.89.65:56906.service - OpenSSH per-connection server daemon (139.178.89.65:56906). Jan 29 10:48:56.657817 sshd[2230]: Accepted publickey for core from 139.178.89.65 port 56906 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:56.660255 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:56.668374 systemd-logind[1923]: New session 7 of user core. Jan 29 10:48:56.678117 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 29 10:48:56.795596 sudo[2233]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 10:48:56.796256 sudo[2233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:48:56.815397 sudo[2233]: pam_unix(sudo:session): session closed for user root Jan 29 10:48:56.838086 sshd[2232]: Connection closed by 139.178.89.65 port 56906 Jan 29 10:48:56.839172 sshd-session[2230]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:56.845757 systemd[1]: sshd@6-172.31.20.65:22-139.178.89.65:56906.service: Deactivated successfully. Jan 29 10:48:56.849434 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 10:48:56.850754 systemd-logind[1923]: Session 7 logged out. Waiting for processes to exit. Jan 29 10:48:56.853254 systemd-logind[1923]: Removed session 7. Jan 29 10:48:56.853997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 10:48:56.859198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:48:56.874359 systemd[1]: Started sshd@7-172.31.20.65:22-139.178.89.65:56920.service - OpenSSH per-connection server daemon (139.178.89.65:56920). Jan 29 10:48:57.063772 sshd[2239]: Accepted publickey for core from 139.178.89.65 port 56920 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:57.067300 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:57.081377 systemd-logind[1923]: New session 8 of user core. Jan 29 10:48:57.085172 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 10:48:57.172716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 10:48:57.187485 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:48:57.193625 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 10:48:57.194763 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:48:57.203985 sudo[2251]: pam_unix(sudo:session): session closed for user root Jan 29 10:48:57.214439 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 10:48:57.216338 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:48:57.243474 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 10:48:57.292329 kubelet[2249]: E0129 10:48:57.292254 2249 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:48:57.300015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:48:57.300502 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:48:57.312880 augenrules[2280]: No rules Jan 29 10:48:57.316326 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 10:48:57.317937 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 10:48:57.320087 sudo[2250]: pam_unix(sudo:session): session closed for user root Jan 29 10:48:57.343830 sshd[2243]: Connection closed by 139.178.89.65 port 56920 Jan 29 10:48:57.344668 sshd-session[2239]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:57.349215 systemd-logind[1923]: Session 8 logged out. 
Waiting for processes to exit. Jan 29 10:48:57.350470 systemd[1]: sshd@7-172.31.20.65:22-139.178.89.65:56920.service: Deactivated successfully. Jan 29 10:48:57.354332 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 10:48:57.359169 systemd-logind[1923]: Removed session 8. Jan 29 10:48:57.383360 systemd[1]: Started sshd@8-172.31.20.65:22-139.178.89.65:56922.service - OpenSSH per-connection server daemon (139.178.89.65:56922). Jan 29 10:48:57.566605 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 56922 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:57.569033 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:57.576270 systemd-logind[1923]: New session 9 of user core. Jan 29 10:48:57.584102 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 10:48:57.687694 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 10:48:57.688506 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:48:58.205263 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 10:48:58.205561 (dockerd)[2308]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 10:48:58.541577 dockerd[2308]: time="2025-01-29T10:48:58.541477200Z" level=info msg="Starting up" Jan 29 10:48:58.676695 dockerd[2308]: time="2025-01-29T10:48:58.676366308Z" level=info msg="Loading containers: start." Jan 29 10:48:58.915886 kernel: Initializing XFRM netlink socket Jan 29 10:48:58.947092 (udev-worker)[2332]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:48:59.041551 systemd-networkd[1844]: docker0: Link UP Jan 29 10:48:59.083197 dockerd[2308]: time="2025-01-29T10:48:59.083048338Z" level=info msg="Loading containers: done." 
Jan 29 10:48:59.109930 dockerd[2308]: time="2025-01-29T10:48:59.109699546Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 10:48:59.109930 dockerd[2308]: time="2025-01-29T10:48:59.109829206Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 10:48:59.110190 dockerd[2308]: time="2025-01-29T10:48:59.110070670Z" level=info msg="Daemon has completed initialization" Jan 29 10:48:59.163638 dockerd[2308]: time="2025-01-29T10:48:59.163436783Z" level=info msg="API listen on /run/docker.sock" Jan 29 10:48:59.163748 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 10:49:00.388731 containerd[1944]: time="2025-01-29T10:49:00.388654681Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 10:49:00.981339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount909055881.mount: Deactivated successfully. 
Jan 29 10:49:02.811363 containerd[1944]: time="2025-01-29T10:49:02.811303013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:02.815831 containerd[1944]: time="2025-01-29T10:49:02.815759309Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935" Jan 29 10:49:02.817918 containerd[1944]: time="2025-01-29T10:49:02.817621193Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:02.822575 containerd[1944]: time="2025-01-29T10:49:02.822484385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:02.825366 containerd[1944]: time="2025-01-29T10:49:02.825116261Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.436390468s" Jan 29 10:49:02.825366 containerd[1944]: time="2025-01-29T10:49:02.825175625Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 29 10:49:02.863893 containerd[1944]: time="2025-01-29T10:49:02.863815865Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 10:49:05.215703 containerd[1944]: time="2025-01-29T10:49:05.215626169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:05.217699 containerd[1944]: time="2025-01-29T10:49:05.217632521Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901561" Jan 29 10:49:05.218661 containerd[1944]: time="2025-01-29T10:49:05.218189753Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:05.229410 containerd[1944]: time="2025-01-29T10:49:05.228517793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:05.233022 containerd[1944]: time="2025-01-29T10:49:05.232960241Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 2.3690676s" Jan 29 10:49:05.233325 containerd[1944]: time="2025-01-29T10:49:05.233025665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 29 10:49:05.275618 containerd[1944]: time="2025-01-29T10:49:05.275306825Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 10:49:06.635819 containerd[1944]: time="2025-01-29T10:49:06.635735744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:06.637946 containerd[1944]: time="2025-01-29T10:49:06.637808996Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338" Jan 29 10:49:06.638742 containerd[1944]: time="2025-01-29T10:49:06.638657228Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:06.644935 containerd[1944]: time="2025-01-29T10:49:06.644819144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:06.647928 containerd[1944]: time="2025-01-29T10:49:06.647345552Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.371973231s" Jan 29 10:49:06.647928 containerd[1944]: time="2025-01-29T10:49:06.647415428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 29 10:49:06.689381 containerd[1944]: time="2025-01-29T10:49:06.689274488Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 10:49:07.506497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 10:49:07.513427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:49:07.896247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
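Each `Pulled image` record above reports both the transferred size and the wall-clock duration (e.g. 29861735 bytes in 2.436390468s for kube-apiserver, 17568146 bytes in 1.371973231s for kube-scheduler), so effective pull throughput falls straight out of the two numbers. A small helper for that arithmetic, using the figures from the log; the function name is illustrative:

```python
def pull_rate_mib_s(size_bytes: int, seconds: float) -> float:
    """Effective image-pull throughput in MiB/s from a containerd 'Pulled image' record."""
    return size_bytes / seconds / (1024 * 1024)

# Figures from the kube-apiserver pull logged earlier: roughly 11.7 MiB/s.
apiserver_rate = pull_rate_mib_s(29_861_735, 2.436390468)
```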
Jan 29 10:49:07.907469 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:49:08.034426 kubelet[2591]: E0129 10:49:08.034236 2591 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:49:08.040307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:49:08.040690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:49:08.188359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281422787.mount: Deactivated successfully. Jan 29 10:49:08.963722 containerd[1944]: time="2025-01-29T10:49:08.963637475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:08.966018 containerd[1944]: time="2025-01-29T10:49:08.965839055Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712" Jan 29 10:49:08.968712 containerd[1944]: time="2025-01-29T10:49:08.968610275Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:08.974367 containerd[1944]: time="2025-01-29T10:49:08.974260151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:08.976619 containerd[1944]: time="2025-01-29T10:49:08.976044551Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id 
\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 2.286700583s" Jan 29 10:49:08.976619 containerd[1944]: time="2025-01-29T10:49:08.976109999Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 10:49:09.020198 containerd[1944]: time="2025-01-29T10:49:09.020116484Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 10:49:09.585835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3277280082.mount: Deactivated successfully. Jan 29 10:49:10.836409 containerd[1944]: time="2025-01-29T10:49:10.836313469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:10.838779 containerd[1944]: time="2025-01-29T10:49:10.838684273Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 29 10:49:10.841012 containerd[1944]: time="2025-01-29T10:49:10.840891661Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:10.847919 containerd[1944]: time="2025-01-29T10:49:10.847813957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:10.851049 containerd[1944]: time="2025-01-29T10:49:10.850801789Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.830614577s" Jan 29 10:49:10.851049 containerd[1944]: time="2025-01-29T10:49:10.850896109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 10:49:10.898588 containerd[1944]: time="2025-01-29T10:49:10.898527349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 10:49:11.436667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368866617.mount: Deactivated successfully. Jan 29 10:49:11.451817 containerd[1944]: time="2025-01-29T10:49:11.451712160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:11.453905 containerd[1944]: time="2025-01-29T10:49:11.453713052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 29 10:49:11.456415 containerd[1944]: time="2025-01-29T10:49:11.456298992Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:11.463714 containerd[1944]: time="2025-01-29T10:49:11.463593168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:11.465789 containerd[1944]: time="2025-01-29T10:49:11.465544872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 566.955279ms" Jan 29 10:49:11.465789 containerd[1944]: time="2025-01-29T10:49:11.465612816Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 29 10:49:11.510837 containerd[1944]: time="2025-01-29T10:49:11.510681840Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 10:49:12.040364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3575567380.mount: Deactivated successfully. Jan 29 10:49:12.549224 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 10:49:14.298360 containerd[1944]: time="2025-01-29T10:49:14.298265402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:14.301377 containerd[1944]: time="2025-01-29T10:49:14.301268222Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jan 29 10:49:14.303154 containerd[1944]: time="2025-01-29T10:49:14.303076346Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:14.310216 containerd[1944]: time="2025-01-29T10:49:14.310115114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:14.314573 containerd[1944]: time="2025-01-29T10:49:14.313605458Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest 
\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.802862466s" Jan 29 10:49:14.314573 containerd[1944]: time="2025-01-29T10:49:14.313709114Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 29 10:49:18.256183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 10:49:18.269043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:49:18.624280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:49:18.633481 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:49:18.730334 kubelet[2779]: E0129 10:49:18.730099 2779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:49:18.736233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:49:18.736828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:49:23.162411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:49:23.179393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:49:23.222998 systemd[1]: Reloading requested from client PID 2794 ('systemctl') (unit session-9.scope)... Jan 29 10:49:23.223036 systemd[1]: Reloading... Jan 29 10:49:23.469903 zram_generator::config[2837]: No configuration found. 
Jan 29 10:49:23.733553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:49:23.909556 systemd[1]: Reloading finished in 685 ms. Jan 29 10:49:24.013142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:49:24.014145 (kubelet)[2889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:49:24.022238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:49:24.023093 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 10:49:24.023535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:49:24.032474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:49:24.342377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:49:24.362474 (kubelet)[2900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:49:24.456969 kubelet[2900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:49:24.457891 kubelet[2900]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:49:24.457891 kubelet[2900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 10:49:24.457891 kubelet[2900]: I0129 10:49:24.457593 2900 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:49:25.407890 kubelet[2900]: I0129 10:49:25.406933 2900 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 10:49:25.407890 kubelet[2900]: I0129 10:49:25.406984 2900 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:49:25.407890 kubelet[2900]: I0129 10:49:25.407323 2900 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 10:49:25.437835 kubelet[2900]: I0129 10:49:25.437796 2900 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:49:25.438239 kubelet[2900]: E0129 10:49:25.438214 2900 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:25.458373 kubelet[2900]: I0129 10:49:25.458332 2900 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 10:49:25.459396 kubelet[2900]: I0129 10:49:25.459323 2900 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:49:25.459689 kubelet[2900]: I0129 10:49:25.459386 2900 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 10:49:25.459895 kubelet[2900]: I0129 10:49:25.459724 2900 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
10:49:25.459895 kubelet[2900]: I0129 10:49:25.459746 2900 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 10:49:25.460041 kubelet[2900]: I0129 10:49:25.460025 2900 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:49:25.461564 kubelet[2900]: I0129 10:49:25.461512 2900 kubelet.go:400] "Attempting to sync node with API server" Jan 29 10:49:25.461709 kubelet[2900]: I0129 10:49:25.461570 2900 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:49:25.461784 kubelet[2900]: I0129 10:49:25.461708 2900 kubelet.go:312] "Adding apiserver pod source" Jan 29 10:49:25.461839 kubelet[2900]: I0129 10:49:25.461786 2900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:49:25.465920 kubelet[2900]: I0129 10:49:25.463267 2900 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:49:25.465920 kubelet[2900]: I0129 10:49:25.463629 2900 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:49:25.465920 kubelet[2900]: W0129 10:49:25.463734 2900 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 10:49:25.465920 kubelet[2900]: I0129 10:49:25.464936 2900 server.go:1264] "Started kubelet" Jan 29 10:49:25.465920 kubelet[2900]: W0129 10:49:25.465155 2900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:25.465920 kubelet[2900]: E0129 10:49:25.465254 2900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:25.465920 kubelet[2900]: W0129 10:49:25.465370 2900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-65&limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:25.465920 kubelet[2900]: E0129 10:49:25.465427 2900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-65&limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:25.472682 kubelet[2900]: I0129 10:49:25.472607 2900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:49:25.481660 kubelet[2900]: I0129 10:49:25.481593 2900 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:49:25.483555 kubelet[2900]: I0129 10:49:25.483512 2900 server.go:455] "Adding debug handlers to kubelet server" Jan 29 10:49:25.484815 kubelet[2900]: I0129 10:49:25.484754 2900 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 10:49:25.485643 kubelet[2900]: I0129 10:49:25.485567 2900 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 29 10:49:25.486114 kubelet[2900]: I0129 10:49:25.486089 2900 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:49:25.486903 kubelet[2900]: E0129 10:49:25.486823 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-65?timeout=10s\": dial tcp 172.31.20.65:6443: connect: connection refused" interval="200ms" Jan 29 10:49:25.487383 kubelet[2900]: E0129 10:49:25.487196 2900 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.65:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.65:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-65.181f242effab4b99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-65,UID:ip-172-31-20-65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-65,},FirstTimestamp:2025-01-29 10:49:25.464812441 +0000 UTC m=+1.093034898,LastTimestamp:2025-01-29 10:49:25.464812441 +0000 UTC m=+1.093034898,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-65,}" Jan 29 10:49:25.488264 kubelet[2900]: I0129 10:49:25.488234 2900 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 10:49:25.489100 kubelet[2900]: W0129 10:49:25.489020 2900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:25.489283 kubelet[2900]: E0129 10:49:25.489260 2900 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:25.490340 kubelet[2900]: I0129 10:49:25.490294 2900 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:49:25.490644 kubelet[2900]: I0129 10:49:25.490611 2900 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:49:25.492471 kubelet[2900]: E0129 10:49:25.492419 2900 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 10:49:25.493008 kubelet[2900]: I0129 10:49:25.492975 2900 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:49:25.497033 kubelet[2900]: I0129 10:49:25.496972 2900 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:49:25.511439 kubelet[2900]: I0129 10:49:25.511375 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:49:25.513929 kubelet[2900]: I0129 10:49:25.513888 2900 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 10:49:25.514108 kubelet[2900]: I0129 10:49:25.514088 2900 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:49:25.514254 kubelet[2900]: I0129 10:49:25.514234 2900 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 10:49:25.514427 kubelet[2900]: E0129 10:49:25.514397 2900 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:49:25.520266 kubelet[2900]: W0129 10:49:25.519814 2900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:25.520503 kubelet[2900]: E0129 10:49:25.520480 2900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:25.544056 kubelet[2900]: I0129 10:49:25.543835 2900 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:49:25.544308 kubelet[2900]: I0129 10:49:25.544257 2900 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:49:25.544740 kubelet[2900]: I0129 10:49:25.544431 2900 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:49:25.549941 kubelet[2900]: I0129 10:49:25.549623 2900 policy_none.go:49] "None policy: Start" Jan 29 10:49:25.551214 kubelet[2900]: I0129 10:49:25.551083 2900 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:49:25.551214 kubelet[2900]: I0129 10:49:25.551136 2900 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:49:25.563594 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 29 10:49:25.582749 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 10:49:25.589490 kubelet[2900]: I0129 10:49:25.588210 2900 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-65" Jan 29 10:49:25.589490 kubelet[2900]: E0129 10:49:25.589465 2900 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.65:6443/api/v1/nodes\": dial tcp 172.31.20.65:6443: connect: connection refused" node="ip-172-31-20-65" Jan 29 10:49:25.591917 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 10:49:25.604021 kubelet[2900]: I0129 10:49:25.603971 2900 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:49:25.604670 kubelet[2900]: I0129 10:49:25.604281 2900 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:49:25.604670 kubelet[2900]: I0129 10:49:25.604457 2900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:49:25.608237 kubelet[2900]: E0129 10:49:25.608199 2900 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-65\" not found" Jan 29 10:49:25.615412 kubelet[2900]: I0129 10:49:25.615320 2900 topology_manager.go:215] "Topology Admit Handler" podUID="b8946fca2365d69b49ae4fad0b9fc158" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-65" Jan 29 10:49:25.617834 kubelet[2900]: I0129 10:49:25.617778 2900 topology_manager.go:215] "Topology Admit Handler" podUID="2d8069520efd6eb03ac23dc8eb3da196" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-65" Jan 29 10:49:25.621290 kubelet[2900]: I0129 10:49:25.620596 2900 topology_manager.go:215] "Topology Admit Handler" podUID="43e65cea2876879cd47672f1511dd50f" podNamespace="kube-system" 
podName="kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:25.635208 systemd[1]: Created slice kubepods-burstable-podb8946fca2365d69b49ae4fad0b9fc158.slice - libcontainer container kubepods-burstable-podb8946fca2365d69b49ae4fad0b9fc158.slice. Jan 29 10:49:25.658751 systemd[1]: Created slice kubepods-burstable-pod43e65cea2876879cd47672f1511dd50f.slice - libcontainer container kubepods-burstable-pod43e65cea2876879cd47672f1511dd50f.slice. Jan 29 10:49:25.669248 systemd[1]: Created slice kubepods-burstable-pod2d8069520efd6eb03ac23dc8eb3da196.slice - libcontainer container kubepods-burstable-pod2d8069520efd6eb03ac23dc8eb3da196.slice. Jan 29 10:49:25.688706 kubelet[2900]: E0129 10:49:25.688629 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-65?timeout=10s\": dial tcp 172.31.20.65:6443: connect: connection refused" interval="400ms" Jan 29 10:49:25.698175 kubelet[2900]: I0129 10:49:25.698024 2900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d8069520efd6eb03ac23dc8eb3da196-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-65\" (UID: \"2d8069520efd6eb03ac23dc8eb3da196\") " pod="kube-system/kube-apiserver-ip-172-31-20-65" Jan 29 10:49:25.698175 kubelet[2900]: I0129 10:49:25.698093 2900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:25.698175 kubelet[2900]: I0129 10:49:25.698137 2900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2d8069520efd6eb03ac23dc8eb3da196-ca-certs\") pod \"kube-apiserver-ip-172-31-20-65\" (UID: \"2d8069520efd6eb03ac23dc8eb3da196\") " pod="kube-system/kube-apiserver-ip-172-31-20-65" Jan 29 10:49:25.698175 kubelet[2900]: I0129 10:49:25.698179 2900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d8069520efd6eb03ac23dc8eb3da196-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-65\" (UID: \"2d8069520efd6eb03ac23dc8eb3da196\") " pod="kube-system/kube-apiserver-ip-172-31-20-65" Jan 29 10:49:25.698673 kubelet[2900]: I0129 10:49:25.698216 2900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:25.698673 kubelet[2900]: I0129 10:49:25.698253 2900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:25.698673 kubelet[2900]: I0129 10:49:25.698295 2900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:25.698673 kubelet[2900]: I0129 10:49:25.698329 2900 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:25.698673 kubelet[2900]: I0129 10:49:25.698363 2900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8946fca2365d69b49ae4fad0b9fc158-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-65\" (UID: \"b8946fca2365d69b49ae4fad0b9fc158\") " pod="kube-system/kube-scheduler-ip-172-31-20-65" Jan 29 10:49:25.792374 kubelet[2900]: I0129 10:49:25.792313 2900 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-65" Jan 29 10:49:25.792828 kubelet[2900]: E0129 10:49:25.792776 2900 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.65:6443/api/v1/nodes\": dial tcp 172.31.20.65:6443: connect: connection refused" node="ip-172-31-20-65" Jan 29 10:49:25.951636 containerd[1944]: time="2025-01-29T10:49:25.951476524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-65,Uid:b8946fca2365d69b49ae4fad0b9fc158,Namespace:kube-system,Attempt:0,}" Jan 29 10:49:25.968793 containerd[1944]: time="2025-01-29T10:49:25.968287396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-65,Uid:43e65cea2876879cd47672f1511dd50f,Namespace:kube-system,Attempt:0,}" Jan 29 10:49:25.974913 containerd[1944]: time="2025-01-29T10:49:25.974781280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-65,Uid:2d8069520efd6eb03ac23dc8eb3da196,Namespace:kube-system,Attempt:0,}" Jan 29 10:49:25.985589 kubelet[2900]: E0129 10:49:25.985433 2900 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://172.31.20.65:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.65:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-65.181f242effab4b99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-65,UID:ip-172-31-20-65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-65,},FirstTimestamp:2025-01-29 10:49:25.464812441 +0000 UTC m=+1.093034898,LastTimestamp:2025-01-29 10:49:25.464812441 +0000 UTC m=+1.093034898,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-65,}" Jan 29 10:49:26.089670 kubelet[2900]: E0129 10:49:26.089605 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-65?timeout=10s\": dial tcp 172.31.20.65:6443: connect: connection refused" interval="800ms" Jan 29 10:49:26.195543 kubelet[2900]: I0129 10:49:26.195472 2900 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-65" Jan 29 10:49:26.196172 kubelet[2900]: E0129 10:49:26.196123 2900 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.65:6443/api/v1/nodes\": dial tcp 172.31.20.65:6443: connect: connection refused" node="ip-172-31-20-65" Jan 29 10:49:26.452463 kubelet[2900]: W0129 10:49:26.452196 2900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-65&limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:26.452463 kubelet[2900]: E0129 10:49:26.452294 2900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get "https://172.31.20.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-65&limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:26.462973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2348901326.mount: Deactivated successfully. Jan 29 10:49:26.477043 containerd[1944]: time="2025-01-29T10:49:26.476965070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:49:26.485771 containerd[1944]: time="2025-01-29T10:49:26.485679110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 29 10:49:26.488034 containerd[1944]: time="2025-01-29T10:49:26.487740578Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:49:26.490765 containerd[1944]: time="2025-01-29T10:49:26.490487786Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:49:26.494194 containerd[1944]: time="2025-01-29T10:49:26.494125826Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:49:26.496071 containerd[1944]: time="2025-01-29T10:49:26.495963698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:49:26.497999 containerd[1944]: time="2025-01-29T10:49:26.497929754Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 
10:49:26.500216 containerd[1944]: time="2025-01-29T10:49:26.500064675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:49:26.502874 containerd[1944]: time="2025-01-29T10:49:26.502038471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.444407ms" Jan 29 10:49:26.535746 containerd[1944]: time="2025-01-29T10:49:26.535630803Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.701899ms" Jan 29 10:49:26.550314 containerd[1944]: time="2025-01-29T10:49:26.550256571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 581.859267ms" Jan 29 10:49:26.582357 kubelet[2900]: W0129 10:49:26.582030 2900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:26.582357 kubelet[2900]: E0129 10:49:26.582303 2900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:26.625671 kubelet[2900]: W0129 10:49:26.625497 2900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:26.625671 kubelet[2900]: E0129 10:49:26.625602 2900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:26.690626 containerd[1944]: time="2025-01-29T10:49:26.690457263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:49:26.690934 containerd[1944]: time="2025-01-29T10:49:26.690746475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:49:26.691007 containerd[1944]: time="2025-01-29T10:49:26.690934359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:26.692263 containerd[1944]: time="2025-01-29T10:49:26.691967139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:26.699446 containerd[1944]: time="2025-01-29T10:49:26.698946759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:49:26.699446 containerd[1944]: time="2025-01-29T10:49:26.699038427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:49:26.699446 containerd[1944]: time="2025-01-29T10:49:26.699063675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:26.699446 containerd[1944]: time="2025-01-29T10:49:26.699190863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:26.703143 containerd[1944]: time="2025-01-29T10:49:26.702710464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:49:26.703143 containerd[1944]: time="2025-01-29T10:49:26.702823528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:49:26.705190 containerd[1944]: time="2025-01-29T10:49:26.703754476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:26.707060 containerd[1944]: time="2025-01-29T10:49:26.706941736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:26.751931 systemd[1]: Started cri-containerd-1b7350aed180b73c3767642d29e142d470d01962b78be3eec84c096968349efa.scope - libcontainer container 1b7350aed180b73c3767642d29e142d470d01962b78be3eec84c096968349efa. Jan 29 10:49:26.760670 systemd[1]: Started cri-containerd-7f90dc97564964e497f74ed0508ecad84c97d9571d16675d4bce1dcf08d5c7d9.scope - libcontainer container 7f90dc97564964e497f74ed0508ecad84c97d9571d16675d4bce1dcf08d5c7d9. 
Jan 29 10:49:26.775270 systemd[1]: Started cri-containerd-7e4d7e65841995664b3158803983c428aba0893a09c042573595aec8db669bf6.scope - libcontainer container 7e4d7e65841995664b3158803983c428aba0893a09c042573595aec8db669bf6. Jan 29 10:49:26.786646 kubelet[2900]: W0129 10:49:26.786529 2900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:26.786646 kubelet[2900]: E0129 10:49:26.786630 2900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.65:6443: connect: connection refused Jan 29 10:49:26.868531 containerd[1944]: time="2025-01-29T10:49:26.868474636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-65,Uid:b8946fca2365d69b49ae4fad0b9fc158,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b7350aed180b73c3767642d29e142d470d01962b78be3eec84c096968349efa\"" Jan 29 10:49:26.887583 containerd[1944]: time="2025-01-29T10:49:26.887515048Z" level=info msg="CreateContainer within sandbox \"1b7350aed180b73c3767642d29e142d470d01962b78be3eec84c096968349efa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 10:49:26.891443 kubelet[2900]: E0129 10:49:26.891163 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-65?timeout=10s\": dial tcp 172.31.20.65:6443: connect: connection refused" interval="1.6s" Jan 29 10:49:26.898572 containerd[1944]: time="2025-01-29T10:49:26.898417468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-65,Uid:43e65cea2876879cd47672f1511dd50f,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"7f90dc97564964e497f74ed0508ecad84c97d9571d16675d4bce1dcf08d5c7d9\"" Jan 29 10:49:26.908306 containerd[1944]: time="2025-01-29T10:49:26.908242877Z" level=info msg="CreateContainer within sandbox \"7f90dc97564964e497f74ed0508ecad84c97d9571d16675d4bce1dcf08d5c7d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 10:49:26.910766 containerd[1944]: time="2025-01-29T10:49:26.910394189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-65,Uid:2d8069520efd6eb03ac23dc8eb3da196,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e4d7e65841995664b3158803983c428aba0893a09c042573595aec8db669bf6\"" Jan 29 10:49:26.919304 containerd[1944]: time="2025-01-29T10:49:26.919062173Z" level=info msg="CreateContainer within sandbox \"7e4d7e65841995664b3158803983c428aba0893a09c042573595aec8db669bf6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 10:49:26.936142 containerd[1944]: time="2025-01-29T10:49:26.936064373Z" level=info msg="CreateContainer within sandbox \"1b7350aed180b73c3767642d29e142d470d01962b78be3eec84c096968349efa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637\"" Jan 29 10:49:26.938139 containerd[1944]: time="2025-01-29T10:49:26.937246493Z" level=info msg="StartContainer for \"e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637\"" Jan 29 10:49:26.954793 containerd[1944]: time="2025-01-29T10:49:26.954535625Z" level=info msg="CreateContainer within sandbox \"7f90dc97564964e497f74ed0508ecad84c97d9571d16675d4bce1dcf08d5c7d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538\"" Jan 29 10:49:26.956955 containerd[1944]: time="2025-01-29T10:49:26.956891717Z" level=info msg="StartContainer for 
\"c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538\"" Jan 29 10:49:26.975539 containerd[1944]: time="2025-01-29T10:49:26.975346505Z" level=info msg="CreateContainer within sandbox \"7e4d7e65841995664b3158803983c428aba0893a09c042573595aec8db669bf6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50ebd53239c7107e9d7e4ef9d3bc78cda17b50c3c9ea5882424951677b433d4d\"" Jan 29 10:49:26.977144 containerd[1944]: time="2025-01-29T10:49:26.976166141Z" level=info msg="StartContainer for \"50ebd53239c7107e9d7e4ef9d3bc78cda17b50c3c9ea5882424951677b433d4d\"" Jan 29 10:49:26.994625 systemd[1]: Started cri-containerd-e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637.scope - libcontainer container e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637. Jan 29 10:49:26.998802 kubelet[2900]: I0129 10:49:26.998735 2900 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-65" Jan 29 10:49:26.999282 kubelet[2900]: E0129 10:49:26.999203 2900 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.65:6443/api/v1/nodes\": dial tcp 172.31.20.65:6443: connect: connection refused" node="ip-172-31-20-65" Jan 29 10:49:27.045136 systemd[1]: Started cri-containerd-c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538.scope - libcontainer container c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538. Jan 29 10:49:27.064324 systemd[1]: Started cri-containerd-50ebd53239c7107e9d7e4ef9d3bc78cda17b50c3c9ea5882424951677b433d4d.scope - libcontainer container 50ebd53239c7107e9d7e4ef9d3bc78cda17b50c3c9ea5882424951677b433d4d. 
Jan 29 10:49:27.129918 containerd[1944]: time="2025-01-29T10:49:27.129785366Z" level=info msg="StartContainer for \"e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637\" returns successfully" Jan 29 10:49:27.200080 containerd[1944]: time="2025-01-29T10:49:27.199160654Z" level=info msg="StartContainer for \"c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538\" returns successfully" Jan 29 10:49:27.200080 containerd[1944]: time="2025-01-29T10:49:27.199160606Z" level=info msg="StartContainer for \"50ebd53239c7107e9d7e4ef9d3bc78cda17b50c3c9ea5882424951677b433d4d\" returns successfully" Jan 29 10:49:27.527977 update_engine[1924]: I20250129 10:49:27.527890 1924 update_attempter.cc:509] Updating boot flags... Jan 29 10:49:27.682169 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3181) Jan 29 10:49:28.602898 kubelet[2900]: I0129 10:49:28.601783 2900 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-65" Jan 29 10:49:31.847804 kubelet[2900]: E0129 10:49:31.847715 2900 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-65\" not found" node="ip-172-31-20-65" Jan 29 10:49:31.929504 kubelet[2900]: I0129 10:49:31.929422 2900 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-65" Jan 29 10:49:32.466843 kubelet[2900]: I0129 10:49:32.466786 2900 apiserver.go:52] "Watching apiserver" Jan 29 10:49:32.489486 kubelet[2900]: I0129 10:49:32.489403 2900 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 10:49:33.824108 systemd[1]: Reloading requested from client PID 3269 ('systemctl') (unit session-9.scope)... Jan 29 10:49:33.824133 systemd[1]: Reloading... Jan 29 10:49:33.987894 zram_generator::config[3309]: No configuration found. 
Jan 29 10:49:34.230200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:49:34.430222 systemd[1]: Reloading finished in 605 ms. Jan 29 10:49:34.508388 kubelet[2900]: I0129 10:49:34.508267 2900 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:49:34.508548 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:49:34.521120 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 10:49:34.521575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:49:34.521655 systemd[1]: kubelet.service: Consumed 1.758s CPU time, 111.3M memory peak, 0B memory swap peak. Jan 29 10:49:34.530467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:49:34.847508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:49:34.866180 (kubelet)[3369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:49:34.988517 kubelet[3369]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:49:34.988517 kubelet[3369]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:49:34.988517 kubelet[3369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 10:49:34.989122 kubelet[3369]: I0129 10:49:34.988598 3369 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:49:35.005131 kubelet[3369]: I0129 10:49:35.005033 3369 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 10:49:35.005961 kubelet[3369]: I0129 10:49:35.005896 3369 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:49:35.006768 kubelet[3369]: I0129 10:49:35.006723 3369 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 10:49:35.013911 kubelet[3369]: I0129 10:49:35.013305 3369 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 10:49:35.021626 kubelet[3369]: I0129 10:49:35.020941 3369 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:49:35.038669 kubelet[3369]: I0129 10:49:35.037935 3369 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 10:49:35.038669 kubelet[3369]: I0129 10:49:35.038450 3369 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:49:35.039199 kubelet[3369]: I0129 10:49:35.038502 3369 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 10:49:35.039430 kubelet[3369]: I0129 10:49:35.039407 3369 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
10:49:35.039548 kubelet[3369]: I0129 10:49:35.039529 3369 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 10:49:35.039731 kubelet[3369]: I0129 10:49:35.039709 3369 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:49:35.040108 kubelet[3369]: I0129 10:49:35.040086 3369 kubelet.go:400] "Attempting to sync node with API server" Jan 29 10:49:35.040982 kubelet[3369]: I0129 10:49:35.040947 3369 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:49:35.041182 kubelet[3369]: I0129 10:49:35.041163 3369 kubelet.go:312] "Adding apiserver pod source" Jan 29 10:49:35.041330 kubelet[3369]: I0129 10:49:35.041308 3369 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:49:35.042581 kubelet[3369]: I0129 10:49:35.042547 3369 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:49:35.043310 kubelet[3369]: I0129 10:49:35.043280 3369 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:49:35.044900 kubelet[3369]: I0129 10:49:35.044725 3369 server.go:1264] "Started kubelet" Jan 29 10:49:35.054572 sudo[3383]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 10:49:35.055222 sudo[3383]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 10:49:35.064103 kubelet[3369]: I0129 10:49:35.063393 3369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:49:35.070185 kubelet[3369]: I0129 10:49:35.070114 3369 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:49:35.071775 kubelet[3369]: I0129 10:49:35.071730 3369 server.go:455] "Adding debug handlers to kubelet server" Jan 29 10:49:35.087877 kubelet[3369]: I0129 10:49:35.086836 3369 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 
10:49:35.089459 kubelet[3369]: I0129 10:49:35.089046 3369 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:49:35.093416 kubelet[3369]: I0129 10:49:35.091799 3369 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 10:49:35.104006 kubelet[3369]: I0129 10:49:35.103832 3369 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:49:35.104618 kubelet[3369]: I0129 10:49:35.104286 3369 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:49:35.114356 kubelet[3369]: I0129 10:49:35.099073 3369 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 10:49:35.114911 kubelet[3369]: I0129 10:49:35.114543 3369 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:49:35.160908 kubelet[3369]: I0129 10:49:35.160338 3369 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:49:35.167165 kubelet[3369]: E0129 10:49:35.167076 3369 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 10:49:35.217139 kubelet[3369]: I0129 10:49:35.217020 3369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:49:35.228517 kubelet[3369]: I0129 10:49:35.228272 3369 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-65" Jan 29 10:49:35.235142 kubelet[3369]: I0129 10:49:35.235001 3369 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 10:49:35.235142 kubelet[3369]: I0129 10:49:35.235082 3369 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:49:35.235142 kubelet[3369]: I0129 10:49:35.235114 3369 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 10:49:35.235142 kubelet[3369]: E0129 10:49:35.235198 3369 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:49:35.270537 kubelet[3369]: I0129 10:49:35.269836 3369 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-65" Jan 29 10:49:35.270537 kubelet[3369]: I0129 10:49:35.270291 3369 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-65" Jan 29 10:49:35.335398 kubelet[3369]: E0129 10:49:35.335324 3369 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 10:49:35.353916 kubelet[3369]: I0129 10:49:35.353698 3369 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:49:35.353916 kubelet[3369]: I0129 10:49:35.353755 3369 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:49:35.353916 kubelet[3369]: I0129 10:49:35.353792 3369 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:49:35.354204 kubelet[3369]: I0129 10:49:35.354098 3369 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 10:49:35.354204 kubelet[3369]: I0129 10:49:35.354122 3369 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 10:49:35.354204 kubelet[3369]: I0129 10:49:35.354161 3369 policy_none.go:49] "None policy: Start" Jan 29 10:49:35.358447 kubelet[3369]: I0129 10:49:35.357379 3369 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:49:35.358447 kubelet[3369]: I0129 10:49:35.357438 3369 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:49:35.358447 kubelet[3369]: I0129 10:49:35.357748 3369 
state_mem.go:75] "Updated machine memory state" Jan 29 10:49:35.381404 kubelet[3369]: I0129 10:49:35.381348 3369 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:49:35.381843 kubelet[3369]: I0129 10:49:35.381636 3369 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:49:35.387277 kubelet[3369]: I0129 10:49:35.382964 3369 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:49:35.536149 kubelet[3369]: I0129 10:49:35.536083 3369 topology_manager.go:215] "Topology Admit Handler" podUID="2d8069520efd6eb03ac23dc8eb3da196" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-65" Jan 29 10:49:35.537650 kubelet[3369]: I0129 10:49:35.536646 3369 topology_manager.go:215] "Topology Admit Handler" podUID="43e65cea2876879cd47672f1511dd50f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:35.540903 kubelet[3369]: I0129 10:49:35.538124 3369 topology_manager.go:215] "Topology Admit Handler" podUID="b8946fca2365d69b49ae4fad0b9fc158" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-65" Jan 29 10:49:35.551371 kubelet[3369]: E0129 10:49:35.551322 3369 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-20-65\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-65" Jan 29 10:49:35.618699 kubelet[3369]: I0129 10:49:35.618557 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d8069520efd6eb03ac23dc8eb3da196-ca-certs\") pod \"kube-apiserver-ip-172-31-20-65\" (UID: \"2d8069520efd6eb03ac23dc8eb3da196\") " pod="kube-system/kube-apiserver-ip-172-31-20-65" Jan 29 10:49:35.619062 kubelet[3369]: I0129 10:49:35.619029 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d8069520efd6eb03ac23dc8eb3da196-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-65\" (UID: \"2d8069520efd6eb03ac23dc8eb3da196\") " pod="kube-system/kube-apiserver-ip-172-31-20-65" Jan 29 10:49:35.619247 kubelet[3369]: I0129 10:49:35.619219 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d8069520efd6eb03ac23dc8eb3da196-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-65\" (UID: \"2d8069520efd6eb03ac23dc8eb3da196\") " pod="kube-system/kube-apiserver-ip-172-31-20-65" Jan 29 10:49:35.619665 kubelet[3369]: I0129 10:49:35.619367 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:35.619665 kubelet[3369]: I0129 10:49:35.619439 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:35.619873 kubelet[3369]: I0129 10:49:35.619476 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:35.619873 kubelet[3369]: I0129 10:49:35.619832 3369 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:35.620237 kubelet[3369]: I0129 10:49:35.620094 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43e65cea2876879cd47672f1511dd50f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-65\" (UID: \"43e65cea2876879cd47672f1511dd50f\") " pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:35.620783 kubelet[3369]: I0129 10:49:35.620350 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8946fca2365d69b49ae4fad0b9fc158-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-65\" (UID: \"b8946fca2365d69b49ae4fad0b9fc158\") " pod="kube-system/kube-scheduler-ip-172-31-20-65" Jan 29 10:49:35.941577 sudo[3383]: pam_unix(sudo:session): session closed for user root Jan 29 10:49:36.062527 kubelet[3369]: I0129 10:49:36.062458 3369 apiserver.go:52] "Watching apiserver" Jan 29 10:49:36.115409 kubelet[3369]: I0129 10:49:36.115313 3369 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 10:49:36.143732 kubelet[3369]: I0129 10:49:36.141586 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-65" podStartSLOduration=1.141561682 podStartE2EDuration="1.141561682s" podCreationTimestamp="2025-01-29 10:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:49:36.122284702 +0000 UTC m=+1.247976835" 
watchObservedRunningTime="2025-01-29 10:49:36.141561682 +0000 UTC m=+1.267253815" Jan 29 10:49:36.162489 kubelet[3369]: I0129 10:49:36.162249 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-65" podStartSLOduration=1.162225863 podStartE2EDuration="1.162225863s" podCreationTimestamp="2025-01-29 10:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:49:36.144024382 +0000 UTC m=+1.269716491" watchObservedRunningTime="2025-01-29 10:49:36.162225863 +0000 UTC m=+1.287917996" Jan 29 10:49:36.182428 kubelet[3369]: I0129 10:49:36.182065 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-65" podStartSLOduration=3.182042519 podStartE2EDuration="3.182042519s" podCreationTimestamp="2025-01-29 10:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:49:36.163456415 +0000 UTC m=+1.289148548" watchObservedRunningTime="2025-01-29 10:49:36.182042519 +0000 UTC m=+1.307734640" Jan 29 10:49:36.323645 kubelet[3369]: E0129 10:49:36.323441 3369 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-20-65\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-65" Jan 29 10:49:36.327910 kubelet[3369]: E0129 10:49:36.327249 3369 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-65\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-65" Jan 29 10:49:38.608400 sudo[2291]: pam_unix(sudo:session): session closed for user root Jan 29 10:49:38.632068 sshd[2290]: Connection closed by 139.178.89.65 port 56922 Jan 29 10:49:38.632942 sshd-session[2288]: pam_unix(sshd:session): session closed for user core Jan 29 10:49:38.640605 systemd[1]: 
sshd@8-172.31.20.65:22-139.178.89.65:56922.service: Deactivated successfully.
Jan 29 10:49:38.644815 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 10:49:38.645388 systemd[1]: session-9.scope: Consumed 12.671s CPU time, 188.6M memory peak, 0B memory swap peak.
Jan 29 10:49:38.646810 systemd-logind[1923]: Session 9 logged out. Waiting for processes to exit.
Jan 29 10:49:38.649923 systemd-logind[1923]: Removed session 9.
Jan 29 10:49:48.431280 kubelet[3369]: I0129 10:49:48.431211 3369 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 10:49:48.434436 kubelet[3369]: I0129 10:49:48.432991 3369 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 10:49:48.434660 containerd[1944]: time="2025-01-29T10:49:48.432390695Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 10:49:49.366583 kubelet[3369]: I0129 10:49:49.366516 3369 topology_manager.go:215] "Topology Admit Handler" podUID="b2592467-4dd0-4bc6-9e9b-c7d1e446755b" podNamespace="kube-system" podName="kube-proxy-5ck8s"
Jan 29 10:49:49.387241 systemd[1]: Created slice kubepods-besteffort-podb2592467_4dd0_4bc6_9e9b_c7d1e446755b.slice - libcontainer container kubepods-besteffort-podb2592467_4dd0_4bc6_9e9b_c7d1e446755b.slice.
Jan 29 10:49:49.400724 kubelet[3369]: I0129 10:49:49.400449 3369 topology_manager.go:215] "Topology Admit Handler" podUID="049a4c68-0d52-4a41-932a-19a96137410b" podNamespace="kube-system" podName="cilium-85dlj"
Jan 29 10:49:49.407421 kubelet[3369]: I0129 10:49:49.407266 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-hostproc\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.407421 kubelet[3369]: I0129 10:49:49.407349 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-bpf-maps\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.407421 kubelet[3369]: I0129 10:49:49.407404 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-lib-modules\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.407697 kubelet[3369]: I0129 10:49:49.407454 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/049a4c68-0d52-4a41-932a-19a96137410b-clustermesh-secrets\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.407697 kubelet[3369]: I0129 10:49:49.407508 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2592467-4dd0-4bc6-9e9b-c7d1e446755b-kube-proxy\") pod \"kube-proxy-5ck8s\" (UID:
\"b2592467-4dd0-4bc6-9e9b-c7d1e446755b\") " pod="kube-system/kube-proxy-5ck8s"
Jan 29 10:49:49.407697 kubelet[3369]: I0129 10:49:49.407614 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cilium-cgroup\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.407697 kubelet[3369]: I0129 10:49:49.407665 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-host-proc-sys-net\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.408354 kubelet[3369]: I0129 10:49:49.407709 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-host-proc-sys-kernel\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.408354 kubelet[3369]: I0129 10:49:49.407762 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2592467-4dd0-4bc6-9e9b-c7d1e446755b-xtables-lock\") pod \"kube-proxy-5ck8s\" (UID: \"b2592467-4dd0-4bc6-9e9b-c7d1e446755b\") " pod="kube-system/kube-proxy-5ck8s"
Jan 29 10:49:49.408354 kubelet[3369]: I0129 10:49:49.407809 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2592467-4dd0-4bc6-9e9b-c7d1e446755b-lib-modules\") pod \"kube-proxy-5ck8s\" (UID: \"b2592467-4dd0-4bc6-9e9b-c7d1e446755b\") " pod="kube-system/kube-proxy-5ck8s"
Jan 29 10:49:49.412194
kubelet[3369]: I0129 10:49:49.409929 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cni-path\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.412194 kubelet[3369]: I0129 10:49:49.410029 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cilium-run\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.412194 kubelet[3369]: I0129 10:49:49.410075 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/049a4c68-0d52-4a41-932a-19a96137410b-hubble-tls\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.412194 kubelet[3369]: I0129 10:49:49.410132 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/049a4c68-0d52-4a41-932a-19a96137410b-cilium-config-path\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.412194 kubelet[3369]: I0129 10:49:49.410255 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9c5\" (UniqueName: \"kubernetes.io/projected/049a4c68-0d52-4a41-932a-19a96137410b-kube-api-access-zj9c5\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.412613 kubelet[3369]: I0129 10:49:49.410310 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"kube-api-access-jk65d\" (UniqueName: \"kubernetes.io/projected/b2592467-4dd0-4bc6-9e9b-c7d1e446755b-kube-api-access-jk65d\") pod \"kube-proxy-5ck8s\" (UID: \"b2592467-4dd0-4bc6-9e9b-c7d1e446755b\") " pod="kube-system/kube-proxy-5ck8s"
Jan 29 10:49:49.412613 kubelet[3369]: I0129 10:49:49.410361 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-etc-cni-netd\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.412613 kubelet[3369]: I0129 10:49:49.410565 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-xtables-lock\") pod \"cilium-85dlj\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") " pod="kube-system/cilium-85dlj"
Jan 29 10:49:49.430935 systemd[1]: Created slice kubepods-burstable-pod049a4c68_0d52_4a41_932a_19a96137410b.slice - libcontainer container kubepods-burstable-pod049a4c68_0d52_4a41_932a_19a96137410b.slice.
Jan 29 10:49:49.506984 kubelet[3369]: I0129 10:49:49.506931 3369 topology_manager.go:215] "Topology Admit Handler" podUID="4512eab1-b85b-4afb-a89d-3663b27d2166" podNamespace="kube-system" podName="cilium-operator-599987898-g2vpm"
Jan 29 10:49:49.533388 systemd[1]: Created slice kubepods-besteffort-pod4512eab1_b85b_4afb_a89d_3663b27d2166.slice - libcontainer container kubepods-besteffort-pod4512eab1_b85b_4afb_a89d_3663b27d2166.slice.
Jan 29 10:49:49.611900 kubelet[3369]: I0129 10:49:49.611754 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zttrt\" (UniqueName: \"kubernetes.io/projected/4512eab1-b85b-4afb-a89d-3663b27d2166-kube-api-access-zttrt\") pod \"cilium-operator-599987898-g2vpm\" (UID: \"4512eab1-b85b-4afb-a89d-3663b27d2166\") " pod="kube-system/cilium-operator-599987898-g2vpm"
Jan 29 10:49:49.612924 kubelet[3369]: I0129 10:49:49.612143 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4512eab1-b85b-4afb-a89d-3663b27d2166-cilium-config-path\") pod \"cilium-operator-599987898-g2vpm\" (UID: \"4512eab1-b85b-4afb-a89d-3663b27d2166\") " pod="kube-system/cilium-operator-599987898-g2vpm"
Jan 29 10:49:49.705393 containerd[1944]: time="2025-01-29T10:49:49.705014918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ck8s,Uid:b2592467-4dd0-4bc6-9e9b-c7d1e446755b,Namespace:kube-system,Attempt:0,}"
Jan 29 10:49:49.742711 containerd[1944]: time="2025-01-29T10:49:49.742535894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-85dlj,Uid:049a4c68-0d52-4a41-932a-19a96137410b,Namespace:kube-system,Attempt:0,}"
Jan 29 10:49:49.773691 containerd[1944]: time="2025-01-29T10:49:49.772807070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:49:49.773691 containerd[1944]: time="2025-01-29T10:49:49.772945154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:49:49.773691 containerd[1944]: time="2025-01-29T10:49:49.772982450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:49:49.773691 containerd[1944]: time="2025-01-29T10:49:49.773147786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:49:49.811240 systemd[1]: Started cri-containerd-3ee80172da42cc0d27c74f243c82695cc15cc9941195de45684403466985c4fd.scope - libcontainer container 3ee80172da42cc0d27c74f243c82695cc15cc9941195de45684403466985c4fd.
Jan 29 10:49:49.813491 containerd[1944]: time="2025-01-29T10:49:49.813196478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:49:49.813843 containerd[1944]: time="2025-01-29T10:49:49.813594470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:49:49.814114 containerd[1944]: time="2025-01-29T10:49:49.813919178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:49:49.815670 containerd[1944]: time="2025-01-29T10:49:49.814779374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:49:49.853160 systemd[1]: Started cri-containerd-8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3.scope - libcontainer container 8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3.
Jan 29 10:49:49.883018 containerd[1944]: time="2025-01-29T10:49:49.882958695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g2vpm,Uid:4512eab1-b85b-4afb-a89d-3663b27d2166,Namespace:kube-system,Attempt:0,}"
Jan 29 10:49:49.909673 containerd[1944]: time="2025-01-29T10:49:49.909574263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ck8s,Uid:b2592467-4dd0-4bc6-9e9b-c7d1e446755b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ee80172da42cc0d27c74f243c82695cc15cc9941195de45684403466985c4fd\""
Jan 29 10:49:49.921400 containerd[1944]: time="2025-01-29T10:49:49.921187731Z" level=info msg="CreateContainer within sandbox \"3ee80172da42cc0d27c74f243c82695cc15cc9941195de45684403466985c4fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 10:49:49.932357 containerd[1944]: time="2025-01-29T10:49:49.932188383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-85dlj,Uid:049a4c68-0d52-4a41-932a-19a96137410b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\""
Jan 29 10:49:49.938705 containerd[1944]: time="2025-01-29T10:49:49.938433219Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 10:49:49.959718 containerd[1944]: time="2025-01-29T10:49:49.959179647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:49:49.959718 containerd[1944]: time="2025-01-29T10:49:49.959309751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:49:49.959718 containerd[1944]: time="2025-01-29T10:49:49.959400735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:49:49.961564 containerd[1944]: time="2025-01-29T10:49:49.959661627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:49:49.962875 containerd[1944]: time="2025-01-29T10:49:49.962783871Z" level=info msg="CreateContainer within sandbox \"3ee80172da42cc0d27c74f243c82695cc15cc9941195de45684403466985c4fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5eaac88f247293c6b35badbe8a2e98f4b6b4a732ddfdf031b2fa240128d4ff9\""
Jan 29 10:49:49.964089 containerd[1944]: time="2025-01-29T10:49:49.963744555Z" level=info msg="StartContainer for \"f5eaac88f247293c6b35badbe8a2e98f4b6b4a732ddfdf031b2fa240128d4ff9\""
Jan 29 10:49:50.006372 systemd[1]: Started cri-containerd-4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e.scope - libcontainer container 4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e.
Jan 29 10:49:50.023215 systemd[1]: Started cri-containerd-f5eaac88f247293c6b35badbe8a2e98f4b6b4a732ddfdf031b2fa240128d4ff9.scope - libcontainer container f5eaac88f247293c6b35badbe8a2e98f4b6b4a732ddfdf031b2fa240128d4ff9.
Jan 29 10:49:50.113923 containerd[1944]: time="2025-01-29T10:49:50.113830944Z" level=info msg="StartContainer for \"f5eaac88f247293c6b35badbe8a2e98f4b6b4a732ddfdf031b2fa240128d4ff9\" returns successfully" Jan 29 10:49:50.114258 containerd[1944]: time="2025-01-29T10:49:50.113961300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g2vpm,Uid:4512eab1-b85b-4afb-a89d-3663b27d2166,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\"" Jan 29 10:49:50.358189 kubelet[3369]: I0129 10:49:50.358084 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5ck8s" podStartSLOduration=1.358060729 podStartE2EDuration="1.358060729s" podCreationTimestamp="2025-01-29 10:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:49:50.357731989 +0000 UTC m=+15.483424134" watchObservedRunningTime="2025-01-29 10:49:50.358060729 +0000 UTC m=+15.483752850" Jan 29 10:49:55.387611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814930530.mount: Deactivated successfully. 
Jan 29 10:49:57.935494 containerd[1944]: time="2025-01-29T10:49:57.935421179Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:49:57.940822 containerd[1944]: time="2025-01-29T10:49:57.940731935Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 29 10:49:57.946051 containerd[1944]: time="2025-01-29T10:49:57.945960455Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:49:57.950452 containerd[1944]: time="2025-01-29T10:49:57.950374775Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.011665196s"
Jan 29 10:49:57.950452 containerd[1944]: time="2025-01-29T10:49:57.950442347Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 29 10:49:57.953710 containerd[1944]: time="2025-01-29T10:49:57.953385191Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 10:49:57.959631 containerd[1944]: time="2025-01-29T10:49:57.959553287Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for container
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 10:49:57.998229 containerd[1944]: time="2025-01-29T10:49:57.998048615Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\""
Jan 29 10:49:58.002057 containerd[1944]: time="2025-01-29T10:49:58.000970831Z" level=info msg="StartContainer for \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\""
Jan 29 10:49:58.052208 systemd[1]: Started cri-containerd-e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5.scope - libcontainer container e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5.
Jan 29 10:49:58.106287 containerd[1944]: time="2025-01-29T10:49:58.106211779Z" level=info msg="StartContainer for \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\" returns successfully"
Jan 29 10:49:58.127421 systemd[1]: cri-containerd-e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5.scope: Deactivated successfully.
Jan 29 10:49:58.987272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5-rootfs.mount: Deactivated successfully.
Jan 29 10:49:59.138302 containerd[1944]: time="2025-01-29T10:49:59.137786301Z" level=info msg="shim disconnected" id=e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5 namespace=k8s.io
Jan 29 10:49:59.138302 containerd[1944]: time="2025-01-29T10:49:59.137890641Z" level=warning msg="cleaning up after shim disconnected" id=e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5 namespace=k8s.io
Jan 29 10:49:59.138302 containerd[1944]: time="2025-01-29T10:49:59.137912829Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:49:59.394061 containerd[1944]: time="2025-01-29T10:49:59.393978118Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 10:49:59.430585 containerd[1944]: time="2025-01-29T10:49:59.430507534Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\""
Jan 29 10:49:59.431734 containerd[1944]: time="2025-01-29T10:49:59.431685526Z" level=info msg="StartContainer for \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\""
Jan 29 10:49:59.489129 systemd[1]: Started cri-containerd-79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac.scope - libcontainer container 79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac.
Jan 29 10:49:59.556736 containerd[1944]: time="2025-01-29T10:49:59.554196767Z" level=info msg="StartContainer for \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\" returns successfully"
Jan 29 10:49:59.573691 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 10:49:59.574537 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:49:59.574664 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:49:59.584429 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:49:59.584829 systemd[1]: cri-containerd-79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac.scope: Deactivated successfully.
Jan 29 10:49:59.628756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac-rootfs.mount: Deactivated successfully.
Jan 29 10:49:59.632018 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:49:59.640959 containerd[1944]: time="2025-01-29T10:49:59.640848119Z" level=info msg="shim disconnected" id=79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac namespace=k8s.io
Jan 29 10:49:59.640959 containerd[1944]: time="2025-01-29T10:49:59.640952711Z" level=warning msg="cleaning up after shim disconnected" id=79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac namespace=k8s.io
Jan 29 10:49:59.641265 containerd[1944]: time="2025-01-29T10:49:59.640974479Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:50:00.401057 containerd[1944]: time="2025-01-29T10:50:00.400542587Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 10:50:00.447155 containerd[1944]: time="2025-01-29T10:50:00.447100751Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\""
Jan 29 10:50:00.449931 containerd[1944]: time="2025-01-29T10:50:00.448088375Z" level=info msg="StartContainer for \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\""
Jan 29 10:50:00.462251 containerd[1944]:
time="2025-01-29T10:50:00.462175367Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:50:00.470983 containerd[1944]: time="2025-01-29T10:50:00.470882531Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 29 10:50:00.479969 containerd[1944]: time="2025-01-29T10:50:00.479842943Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:50:00.486599 containerd[1944]: time="2025-01-29T10:50:00.486513935Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.533053624s"
Jan 29 10:50:00.486599 containerd[1944]: time="2025-01-29T10:50:00.486591527Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 29 10:50:00.496148 containerd[1944]: time="2025-01-29T10:50:00.496049771Z" level=info msg="CreateContainer within sandbox \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 10:50:00.523414 systemd[1]: Started cri-containerd-3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed.scope - libcontainer container
3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed.
Jan 29 10:50:00.544923 containerd[1944]: time="2025-01-29T10:50:00.544819104Z" level=info msg="CreateContainer within sandbox \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\""
Jan 29 10:50:00.547302 containerd[1944]: time="2025-01-29T10:50:00.547220208Z" level=info msg="StartContainer for \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\""
Jan 29 10:50:00.607190 systemd[1]: Started cri-containerd-38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731.scope - libcontainer container 38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731.
Jan 29 10:50:00.617674 containerd[1944]: time="2025-01-29T10:50:00.617607204Z" level=info msg="StartContainer for \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\" returns successfully"
Jan 29 10:50:00.626564 systemd[1]: cri-containerd-3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed.scope: Deactivated successfully.
Jan 29 10:50:00.681009 containerd[1944]: time="2025-01-29T10:50:00.680449236Z" level=info msg="StartContainer for \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\" returns successfully"
Jan 29 10:50:00.786027 containerd[1944]: time="2025-01-29T10:50:00.785455273Z" level=info msg="shim disconnected" id=3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed namespace=k8s.io
Jan 29 10:50:00.787006 containerd[1944]: time="2025-01-29T10:50:00.786584377Z" level=warning msg="cleaning up after shim disconnected" id=3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed namespace=k8s.io
Jan 29 10:50:00.787006 containerd[1944]: time="2025-01-29T10:50:00.786653461Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:50:00.992362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed-rootfs.mount: Deactivated successfully.
Jan 29 10:50:01.417637 containerd[1944]: time="2025-01-29T10:50:01.417463272Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 10:50:01.455216 containerd[1944]: time="2025-01-29T10:50:01.455120916Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\""
Jan 29 10:50:01.456984 containerd[1944]: time="2025-01-29T10:50:01.456915288Z" level=info msg="StartContainer for \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\""
Jan 29 10:50:01.565704 systemd[1]: Started cri-containerd-e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe.scope - libcontainer container e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe.
Jan 29 10:50:01.684885 systemd[1]: cri-containerd-e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe.scope: Deactivated successfully.
Jan 29 10:50:01.689913 containerd[1944]: time="2025-01-29T10:50:01.689293633Z" level=info msg="StartContainer for \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\" returns successfully"
Jan 29 10:50:01.724009 kubelet[3369]: I0129 10:50:01.721889 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-g2vpm" podStartSLOduration=2.354029234 podStartE2EDuration="12.721830637s" podCreationTimestamp="2025-01-29 10:49:49 +0000 UTC" firstStartedPulling="2025-01-29 10:49:50.121171932 +0000 UTC m=+15.246864053" lastFinishedPulling="2025-01-29 10:50:00.488973323 +0000 UTC m=+25.614665456" observedRunningTime="2025-01-29 10:50:01.602383681 +0000 UTC m=+26.728075814" watchObservedRunningTime="2025-01-29 10:50:01.721830637 +0000 UTC m=+26.847522770"
Jan 29 10:50:01.763298 containerd[1944]: time="2025-01-29T10:50:01.762815774Z" level=info msg="shim disconnected" id=e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe namespace=k8s.io
Jan 29 10:50:01.763298 containerd[1944]: time="2025-01-29T10:50:01.762953210Z" level=warning msg="cleaning up after shim disconnected" id=e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe namespace=k8s.io
Jan 29 10:50:01.763298 containerd[1944]: time="2025-01-29T10:50:01.762974450Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:50:01.987228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe-rootfs.mount: Deactivated successfully.
Jan 29 10:50:02.425364 containerd[1944]: time="2025-01-29T10:50:02.424650865Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 10:50:02.467815 containerd[1944]: time="2025-01-29T10:50:02.467496001Z" level=info msg="CreateContainer within sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\""
Jan 29 10:50:02.470048 containerd[1944]: time="2025-01-29T10:50:02.469981861Z" level=info msg="StartContainer for \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\""
Jan 29 10:50:02.542180 systemd[1]: Started cri-containerd-a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23.scope - libcontainer container a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23.
Jan 29 10:50:02.598613 containerd[1944]: time="2025-01-29T10:50:02.598282898Z" level=info msg="StartContainer for \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\" returns successfully"
Jan 29 10:50:02.828740 kubelet[3369]: I0129 10:50:02.828667 3369 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 10:50:02.901074 kubelet[3369]: I0129 10:50:02.901003 3369 topology_manager.go:215] "Topology Admit Handler" podUID="c98dfefe-1da3-40ad-9674-27f2715b03a6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7mdfh"
Jan 29 10:50:02.916167 kubelet[3369]: I0129 10:50:02.916098 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gf92\" (UniqueName: \"kubernetes.io/projected/c98dfefe-1da3-40ad-9674-27f2715b03a6-kube-api-access-5gf92\") pod \"coredns-7db6d8ff4d-7mdfh\" (UID: \"c98dfefe-1da3-40ad-9674-27f2715b03a6\") " pod="kube-system/coredns-7db6d8ff4d-7mdfh"
Jan 29 10:50:02.916311 kubelet[3369]: I0129 10:50:02.916179 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c98dfefe-1da3-40ad-9674-27f2715b03a6-config-volume\") pod \"coredns-7db6d8ff4d-7mdfh\" (UID: \"c98dfefe-1da3-40ad-9674-27f2715b03a6\") " pod="kube-system/coredns-7db6d8ff4d-7mdfh"
Jan 29 10:50:02.916600 systemd[1]: Created slice kubepods-burstable-podc98dfefe_1da3_40ad_9674_27f2715b03a6.slice - libcontainer container kubepods-burstable-podc98dfefe_1da3_40ad_9674_27f2715b03a6.slice.
Jan 29 10:50:02.921131 kubelet[3369]: I0129 10:50:02.921061 3369 topology_manager.go:215] "Topology Admit Handler" podUID="bbfe1c99-1869-4639-8072-25686bd88732" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bsz9c" Jan 29 10:50:02.942484 systemd[1]: Created slice kubepods-burstable-podbbfe1c99_1869_4639_8072_25686bd88732.slice - libcontainer container kubepods-burstable-podbbfe1c99_1869_4639_8072_25686bd88732.slice. Jan 29 10:50:03.017892 kubelet[3369]: I0129 10:50:03.016620 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbfe1c99-1869-4639-8072-25686bd88732-config-volume\") pod \"coredns-7db6d8ff4d-bsz9c\" (UID: \"bbfe1c99-1869-4639-8072-25686bd88732\") " pod="kube-system/coredns-7db6d8ff4d-bsz9c" Jan 29 10:50:03.017892 kubelet[3369]: I0129 10:50:03.016718 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27wxz\" (UniqueName: \"kubernetes.io/projected/bbfe1c99-1869-4639-8072-25686bd88732-kube-api-access-27wxz\") pod \"coredns-7db6d8ff4d-bsz9c\" (UID: \"bbfe1c99-1869-4639-8072-25686bd88732\") " pod="kube-system/coredns-7db6d8ff4d-bsz9c" Jan 29 10:50:03.232121 containerd[1944]: time="2025-01-29T10:50:03.231957529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7mdfh,Uid:c98dfefe-1da3-40ad-9674-27f2715b03a6,Namespace:kube-system,Attempt:0,}" Jan 29 10:50:03.251221 containerd[1944]: time="2025-01-29T10:50:03.250802425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bsz9c,Uid:bbfe1c99-1869-4639-8072-25686bd88732,Namespace:kube-system,Attempt:0,}" Jan 29 10:50:05.668003 systemd-networkd[1844]: cilium_host: Link UP Jan 29 10:50:05.670630 (udev-worker)[4157]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 10:50:05.670944 systemd-networkd[1844]: cilium_net: Link UP Jan 29 10:50:05.671344 systemd-networkd[1844]: cilium_net: Gained carrier Jan 29 10:50:05.671686 systemd-networkd[1844]: cilium_host: Gained carrier Jan 29 10:50:05.676779 (udev-worker)[4191]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:50:05.858750 (udev-worker)[4201]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:50:05.880595 systemd-networkd[1844]: cilium_vxlan: Link UP Jan 29 10:50:05.880615 systemd-networkd[1844]: cilium_vxlan: Gained carrier Jan 29 10:50:05.998152 systemd-networkd[1844]: cilium_host: Gained IPv6LL Jan 29 10:50:06.054120 systemd-networkd[1844]: cilium_net: Gained IPv6LL Jan 29 10:50:06.419911 kernel: NET: Registered PF_ALG protocol family Jan 29 10:50:07.848068 systemd-networkd[1844]: cilium_vxlan: Gained IPv6LL Jan 29 10:50:07.886317 systemd-networkd[1844]: lxc_health: Link UP Jan 29 10:50:07.888134 systemd-networkd[1844]: lxc_health: Gained carrier Jan 29 10:50:08.333732 systemd-networkd[1844]: lxc7d204fbf5217: Link UP Jan 29 10:50:08.344913 kernel: eth0: renamed from tmp5dd2d Jan 29 10:50:08.349516 systemd-networkd[1844]: lxc7d204fbf5217: Gained carrier Jan 29 10:50:08.387455 (udev-worker)[4204]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 10:50:08.389960 systemd-networkd[1844]: lxcac60609a625b: Link UP Jan 29 10:50:08.393491 kernel: eth0: renamed from tmp29a03 Jan 29 10:50:08.406209 systemd-networkd[1844]: lxcac60609a625b: Gained carrier Jan 29 10:50:08.998743 systemd-networkd[1844]: lxc_health: Gained IPv6LL Jan 29 10:50:09.783102 kubelet[3369]: I0129 10:50:09.783018 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-85dlj" podStartSLOduration=12.766042849 podStartE2EDuration="20.782997165s" podCreationTimestamp="2025-01-29 10:49:49 +0000 UTC" firstStartedPulling="2025-01-29 10:49:49.934933203 +0000 UTC m=+15.060625324" lastFinishedPulling="2025-01-29 10:49:57.951887531 +0000 UTC m=+23.077579640" observedRunningTime="2025-01-29 10:50:03.468426842 +0000 UTC m=+28.594118987" watchObservedRunningTime="2025-01-29 10:50:09.782997165 +0000 UTC m=+34.908689478" Jan 29 10:50:09.958712 systemd-networkd[1844]: lxc7d204fbf5217: Gained IPv6LL Jan 29 10:50:10.470344 systemd-networkd[1844]: lxcac60609a625b: Gained IPv6LL Jan 29 10:50:12.850129 ntpd[1914]: Listen normally on 8 cilium_host 192.168.0.246:123 Jan 29 10:50:12.851268 ntpd[1914]: 29 Jan 10:50:12 ntpd[1914]: Listen normally on 8 cilium_host 192.168.0.246:123 Jan 29 10:50:12.851268 ntpd[1914]: 29 Jan 10:50:12 ntpd[1914]: Listen normally on 9 cilium_net [fe80::c079:b3ff:fe53:c1f9%4]:123 Jan 29 10:50:12.851268 ntpd[1914]: 29 Jan 10:50:12 ntpd[1914]: Listen normally on 10 cilium_host [fe80::e809:efff:fe9f:1faf%5]:123 Jan 29 10:50:12.851268 ntpd[1914]: 29 Jan 10:50:12 ntpd[1914]: Listen normally on 11 cilium_vxlan [fe80::a04e:89ff:fe82:5e15%6]:123 Jan 29 10:50:12.851268 ntpd[1914]: 29 Jan 10:50:12 ntpd[1914]: Listen normally on 12 lxc_health [fe80::8a4:51ff:fe5b:4caa%8]:123 Jan 29 10:50:12.851268 ntpd[1914]: 29 Jan 10:50:12 ntpd[1914]: Listen normally on 13 lxc7d204fbf5217 [fe80::cf5:faff:fe24:c171%10]:123 Jan 29 10:50:12.851268 ntpd[1914]: 29 Jan 10:50:12 ntpd[1914]: Listen normally on 14 lxcac60609a625b 
[fe80::9c75:6cff:fecd:cdd0%12]:123 Jan 29 10:50:12.850256 ntpd[1914]: Listen normally on 9 cilium_net [fe80::c079:b3ff:fe53:c1f9%4]:123 Jan 29 10:50:12.850362 ntpd[1914]: Listen normally on 10 cilium_host [fe80::e809:efff:fe9f:1faf%5]:123 Jan 29 10:50:12.850433 ntpd[1914]: Listen normally on 11 cilium_vxlan [fe80::a04e:89ff:fe82:5e15%6]:123 Jan 29 10:50:12.850499 ntpd[1914]: Listen normally on 12 lxc_health [fe80::8a4:51ff:fe5b:4caa%8]:123 Jan 29 10:50:12.850567 ntpd[1914]: Listen normally on 13 lxc7d204fbf5217 [fe80::cf5:faff:fe24:c171%10]:123 Jan 29 10:50:12.850632 ntpd[1914]: Listen normally on 14 lxcac60609a625b [fe80::9c75:6cff:fecd:cdd0%12]:123 Jan 29 10:50:14.469220 systemd[1]: Started sshd@9-172.31.20.65:22-139.178.89.65:42028.service - OpenSSH per-connection server daemon (139.178.89.65:42028). Jan 29 10:50:14.669721 sshd[4556]: Accepted publickey for core from 139.178.89.65 port 42028 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:14.672088 sshd-session[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:14.680513 systemd-logind[1923]: New session 10 of user core. Jan 29 10:50:14.689161 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 10:50:15.001444 sshd[4560]: Connection closed by 139.178.89.65 port 42028 Jan 29 10:50:15.002752 sshd-session[4556]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:15.010464 systemd-logind[1923]: Session 10 logged out. Waiting for processes to exit. Jan 29 10:50:15.012362 systemd[1]: sshd@9-172.31.20.65:22-139.178.89.65:42028.service: Deactivated successfully. Jan 29 10:50:15.018648 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 10:50:15.024392 systemd-logind[1923]: Removed session 10. Jan 29 10:50:17.441900 containerd[1944]: time="2025-01-29T10:50:17.440445736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:50:17.441900 containerd[1944]: time="2025-01-29T10:50:17.440541148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:50:17.441900 containerd[1944]: time="2025-01-29T10:50:17.440569744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:50:17.441900 containerd[1944]: time="2025-01-29T10:50:17.440714932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:50:17.487702 containerd[1944]: time="2025-01-29T10:50:17.487442632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:50:17.487702 containerd[1944]: time="2025-01-29T10:50:17.487572208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:50:17.488992 containerd[1944]: time="2025-01-29T10:50:17.487611472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:50:17.488992 containerd[1944]: time="2025-01-29T10:50:17.487773748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:50:17.542239 systemd[1]: Started cri-containerd-5dd2dcd920bf8a3bff2104639addfc24c1c74e7bc639cd4a4010f08c28662adb.scope - libcontainer container 5dd2dcd920bf8a3bff2104639addfc24c1c74e7bc639cd4a4010f08c28662adb. Jan 29 10:50:17.571727 systemd[1]: Started cri-containerd-29a03b8908d8b4ab5e1ca727dac928af8572b9e8525dfc9db47c95c825f98398.scope - libcontainer container 29a03b8908d8b4ab5e1ca727dac928af8572b9e8525dfc9db47c95c825f98398. 
Jan 29 10:50:17.690351 containerd[1944]: time="2025-01-29T10:50:17.690299381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7mdfh,Uid:c98dfefe-1da3-40ad-9674-27f2715b03a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dd2dcd920bf8a3bff2104639addfc24c1c74e7bc639cd4a4010f08c28662adb\"" Jan 29 10:50:17.703306 containerd[1944]: time="2025-01-29T10:50:17.702954329Z" level=info msg="CreateContainer within sandbox \"5dd2dcd920bf8a3bff2104639addfc24c1c74e7bc639cd4a4010f08c28662adb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:50:17.717032 containerd[1944]: time="2025-01-29T10:50:17.714573713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bsz9c,Uid:bbfe1c99-1869-4639-8072-25686bd88732,Namespace:kube-system,Attempt:0,} returns sandbox id \"29a03b8908d8b4ab5e1ca727dac928af8572b9e8525dfc9db47c95c825f98398\"" Jan 29 10:50:17.721989 containerd[1944]: time="2025-01-29T10:50:17.721230737Z" level=info msg="CreateContainer within sandbox \"29a03b8908d8b4ab5e1ca727dac928af8572b9e8525dfc9db47c95c825f98398\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:50:17.754641 containerd[1944]: time="2025-01-29T10:50:17.754558217Z" level=info msg="CreateContainer within sandbox \"5dd2dcd920bf8a3bff2104639addfc24c1c74e7bc639cd4a4010f08c28662adb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"956bd89787d2eeb6ccc5fd97e2c61621e284490947c06f9e656a484d50bbb1f0\"" Jan 29 10:50:17.756726 containerd[1944]: time="2025-01-29T10:50:17.756653705Z" level=info msg="StartContainer for \"956bd89787d2eeb6ccc5fd97e2c61621e284490947c06f9e656a484d50bbb1f0\"" Jan 29 10:50:17.778884 containerd[1944]: time="2025-01-29T10:50:17.777180725Z" level=info msg="CreateContainer within sandbox \"29a03b8908d8b4ab5e1ca727dac928af8572b9e8525dfc9db47c95c825f98398\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"a332e071b736f76a7b2722469e8e1f7417d868c8e8cea1eb802430fd1673736d\"" Jan 29 10:50:17.781222 containerd[1944]: time="2025-01-29T10:50:17.781162085Z" level=info msg="StartContainer for \"a332e071b736f76a7b2722469e8e1f7417d868c8e8cea1eb802430fd1673736d\"" Jan 29 10:50:17.863835 systemd[1]: Started cri-containerd-956bd89787d2eeb6ccc5fd97e2c61621e284490947c06f9e656a484d50bbb1f0.scope - libcontainer container 956bd89787d2eeb6ccc5fd97e2c61621e284490947c06f9e656a484d50bbb1f0. Jan 29 10:50:17.888169 systemd[1]: Started cri-containerd-a332e071b736f76a7b2722469e8e1f7417d868c8e8cea1eb802430fd1673736d.scope - libcontainer container a332e071b736f76a7b2722469e8e1f7417d868c8e8cea1eb802430fd1673736d. Jan 29 10:50:17.971688 containerd[1944]: time="2025-01-29T10:50:17.971471862Z" level=info msg="StartContainer for \"956bd89787d2eeb6ccc5fd97e2c61621e284490947c06f9e656a484d50bbb1f0\" returns successfully" Jan 29 10:50:17.984260 containerd[1944]: time="2025-01-29T10:50:17.984169158Z" level=info msg="StartContainer for \"a332e071b736f76a7b2722469e8e1f7417d868c8e8cea1eb802430fd1673736d\" returns successfully" Jan 29 10:50:18.516990 kubelet[3369]: I0129 10:50:18.516108 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bsz9c" podStartSLOduration=29.516087317 podStartE2EDuration="29.516087317s" podCreationTimestamp="2025-01-29 10:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:50:18.513729953 +0000 UTC m=+43.639422086" watchObservedRunningTime="2025-01-29 10:50:18.516087317 +0000 UTC m=+43.641779438" Jan 29 10:50:18.559961 kubelet[3369]: I0129 10:50:18.559832 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7mdfh" podStartSLOduration=29.559810361 podStartE2EDuration="29.559810361s" podCreationTimestamp="2025-01-29 10:49:49 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:50:18.552227129 +0000 UTC m=+43.677919250" watchObservedRunningTime="2025-01-29 10:50:18.559810361 +0000 UTC m=+43.685502482" Jan 29 10:50:20.047473 systemd[1]: Started sshd@10-172.31.20.65:22-139.178.89.65:42038.service - OpenSSH per-connection server daemon (139.178.89.65:42038). Jan 29 10:50:20.246020 sshd[4747]: Accepted publickey for core from 139.178.89.65 port 42038 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:20.249416 sshd-session[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:20.259680 systemd-logind[1923]: New session 11 of user core. Jan 29 10:50:20.268204 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 10:50:20.520843 sshd[4749]: Connection closed by 139.178.89.65 port 42038 Jan 29 10:50:20.521746 sshd-session[4747]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:20.529271 systemd[1]: sshd@10-172.31.20.65:22-139.178.89.65:42038.service: Deactivated successfully. Jan 29 10:50:20.533339 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 10:50:20.535627 systemd-logind[1923]: Session 11 logged out. Waiting for processes to exit. Jan 29 10:50:20.538396 systemd-logind[1923]: Removed session 11. Jan 29 10:50:25.558380 systemd[1]: Started sshd@11-172.31.20.65:22-139.178.89.65:37288.service - OpenSSH per-connection server daemon (139.178.89.65:37288). Jan 29 10:50:25.749374 sshd[4763]: Accepted publickey for core from 139.178.89.65 port 37288 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:25.752846 sshd-session[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:25.762200 systemd-logind[1923]: New session 12 of user core. Jan 29 10:50:25.773182 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 10:50:26.023250 sshd[4767]: Connection closed by 139.178.89.65 port 37288 Jan 29 10:50:26.023777 sshd-session[4763]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:26.031328 systemd[1]: sshd@11-172.31.20.65:22-139.178.89.65:37288.service: Deactivated successfully. Jan 29 10:50:26.035323 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 10:50:26.037817 systemd-logind[1923]: Session 12 logged out. Waiting for processes to exit. Jan 29 10:50:26.040330 systemd-logind[1923]: Removed session 12. Jan 29 10:50:31.067420 systemd[1]: Started sshd@12-172.31.20.65:22-139.178.89.65:60848.service - OpenSSH per-connection server daemon (139.178.89.65:60848). Jan 29 10:50:31.261780 sshd[4782]: Accepted publickey for core from 139.178.89.65 port 60848 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:31.264326 sshd-session[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:31.273653 systemd-logind[1923]: New session 13 of user core. Jan 29 10:50:31.284173 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 10:50:31.530974 sshd[4784]: Connection closed by 139.178.89.65 port 60848 Jan 29 10:50:31.530908 sshd-session[4782]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:31.538956 systemd[1]: sshd@12-172.31.20.65:22-139.178.89.65:60848.service: Deactivated successfully. Jan 29 10:50:31.543682 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 10:50:31.545383 systemd-logind[1923]: Session 13 logged out. Waiting for processes to exit. Jan 29 10:50:31.547354 systemd-logind[1923]: Removed session 13. Jan 29 10:50:36.570412 systemd[1]: Started sshd@13-172.31.20.65:22-139.178.89.65:60858.service - OpenSSH per-connection server daemon (139.178.89.65:60858). 
Jan 29 10:50:36.761342 sshd[4798]: Accepted publickey for core from 139.178.89.65 port 60858 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:36.763995 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:36.771572 systemd-logind[1923]: New session 14 of user core. Jan 29 10:50:36.782104 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 10:50:37.023777 sshd[4800]: Connection closed by 139.178.89.65 port 60858 Jan 29 10:50:37.024650 sshd-session[4798]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:37.031694 systemd[1]: sshd@13-172.31.20.65:22-139.178.89.65:60858.service: Deactivated successfully. Jan 29 10:50:37.036323 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 10:50:37.037732 systemd-logind[1923]: Session 14 logged out. Waiting for processes to exit. Jan 29 10:50:37.039522 systemd-logind[1923]: Removed session 14. Jan 29 10:50:37.066367 systemd[1]: Started sshd@14-172.31.20.65:22-139.178.89.65:60870.service - OpenSSH per-connection server daemon (139.178.89.65:60870). Jan 29 10:50:37.264484 sshd[4812]: Accepted publickey for core from 139.178.89.65 port 60870 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:37.268938 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:37.278356 systemd-logind[1923]: New session 15 of user core. Jan 29 10:50:37.288167 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 10:50:37.608836 sshd[4814]: Connection closed by 139.178.89.65 port 60870 Jan 29 10:50:37.609672 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:37.618739 systemd[1]: sshd@14-172.31.20.65:22-139.178.89.65:60870.service: Deactivated successfully. Jan 29 10:50:37.623783 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 10:50:37.634303 systemd-logind[1923]: Session 15 logged out. 
Waiting for processes to exit. Jan 29 10:50:37.659311 systemd[1]: Started sshd@15-172.31.20.65:22-139.178.89.65:60886.service - OpenSSH per-connection server daemon (139.178.89.65:60886). Jan 29 10:50:37.663166 systemd-logind[1923]: Removed session 15. Jan 29 10:50:37.853671 sshd[4823]: Accepted publickey for core from 139.178.89.65 port 60886 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:37.856180 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:37.864784 systemd-logind[1923]: New session 16 of user core. Jan 29 10:50:37.871806 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 10:50:38.136373 sshd[4825]: Connection closed by 139.178.89.65 port 60886 Jan 29 10:50:38.137418 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:38.144173 systemd[1]: sshd@15-172.31.20.65:22-139.178.89.65:60886.service: Deactivated successfully. Jan 29 10:50:38.149636 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 10:50:38.152750 systemd-logind[1923]: Session 16 logged out. Waiting for processes to exit. Jan 29 10:50:38.154881 systemd-logind[1923]: Removed session 16. Jan 29 10:50:43.179363 systemd[1]: Started sshd@16-172.31.20.65:22-139.178.89.65:49798.service - OpenSSH per-connection server daemon (139.178.89.65:49798). Jan 29 10:50:43.366539 sshd[4836]: Accepted publickey for core from 139.178.89.65 port 49798 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:43.369080 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:43.377768 systemd-logind[1923]: New session 17 of user core. Jan 29 10:50:43.388114 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 10:50:43.638071 sshd[4838]: Connection closed by 139.178.89.65 port 49798 Jan 29 10:50:43.638679 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:43.646314 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 10:50:43.649067 systemd[1]: sshd@16-172.31.20.65:22-139.178.89.65:49798.service: Deactivated successfully. Jan 29 10:50:43.654046 systemd-logind[1923]: Session 17 logged out. Waiting for processes to exit. Jan 29 10:50:43.656033 systemd-logind[1923]: Removed session 17. Jan 29 10:50:48.681440 systemd[1]: Started sshd@17-172.31.20.65:22-139.178.89.65:49800.service - OpenSSH per-connection server daemon (139.178.89.65:49800). Jan 29 10:50:48.875580 sshd[4851]: Accepted publickey for core from 139.178.89.65 port 49800 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:48.878081 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:48.888139 systemd-logind[1923]: New session 18 of user core. Jan 29 10:50:48.896183 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 10:50:49.153834 sshd[4853]: Connection closed by 139.178.89.65 port 49800 Jan 29 10:50:49.154731 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:49.162006 systemd[1]: sshd@17-172.31.20.65:22-139.178.89.65:49800.service: Deactivated successfully. Jan 29 10:50:49.165183 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 10:50:49.167082 systemd-logind[1923]: Session 18 logged out. Waiting for processes to exit. Jan 29 10:50:49.168683 systemd-logind[1923]: Removed session 18. Jan 29 10:50:54.194360 systemd[1]: Started sshd@18-172.31.20.65:22-139.178.89.65:60024.service - OpenSSH per-connection server daemon (139.178.89.65:60024). 
Jan 29 10:50:54.389052 sshd[4867]: Accepted publickey for core from 139.178.89.65 port 60024 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:54.391526 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:54.400330 systemd-logind[1923]: New session 19 of user core. Jan 29 10:50:54.410149 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 10:50:54.657989 sshd[4869]: Connection closed by 139.178.89.65 port 60024 Jan 29 10:50:54.658976 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:54.665455 systemd[1]: sshd@18-172.31.20.65:22-139.178.89.65:60024.service: Deactivated successfully. Jan 29 10:50:54.669466 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 10:50:54.671191 systemd-logind[1923]: Session 19 logged out. Waiting for processes to exit. Jan 29 10:50:54.673093 systemd-logind[1923]: Removed session 19. Jan 29 10:50:54.697390 systemd[1]: Started sshd@19-172.31.20.65:22-139.178.89.65:60030.service - OpenSSH per-connection server daemon (139.178.89.65:60030). Jan 29 10:50:54.895354 sshd[4879]: Accepted publickey for core from 139.178.89.65 port 60030 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:54.898526 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:54.906712 systemd-logind[1923]: New session 20 of user core. Jan 29 10:50:54.914167 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 10:50:55.226214 sshd[4881]: Connection closed by 139.178.89.65 port 60030 Jan 29 10:50:55.226694 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:55.233831 systemd[1]: sshd@19-172.31.20.65:22-139.178.89.65:60030.service: Deactivated successfully. Jan 29 10:50:55.240252 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 10:50:55.243520 systemd-logind[1923]: Session 20 logged out. 
Waiting for processes to exit. Jan 29 10:50:55.246315 systemd-logind[1923]: Removed session 20. Jan 29 10:50:55.272423 systemd[1]: Started sshd@20-172.31.20.65:22-139.178.89.65:60044.service - OpenSSH per-connection server daemon (139.178.89.65:60044). Jan 29 10:50:55.462417 sshd[4890]: Accepted publickey for core from 139.178.89.65 port 60044 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:55.465050 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:55.472400 systemd-logind[1923]: New session 21 of user core. Jan 29 10:50:55.484134 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 10:50:58.309180 sshd[4892]: Connection closed by 139.178.89.65 port 60044 Jan 29 10:50:58.312155 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:58.323457 systemd-logind[1923]: Session 21 logged out. Waiting for processes to exit. Jan 29 10:50:58.325764 systemd[1]: sshd@20-172.31.20.65:22-139.178.89.65:60044.service: Deactivated successfully. Jan 29 10:50:58.333814 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 10:50:58.358391 systemd[1]: Started sshd@21-172.31.20.65:22-139.178.89.65:60048.service - OpenSSH per-connection server daemon (139.178.89.65:60048). Jan 29 10:50:58.361011 systemd-logind[1923]: Removed session 21. Jan 29 10:50:58.552348 sshd[4908]: Accepted publickey for core from 139.178.89.65 port 60048 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:58.554825 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:58.562348 systemd-logind[1923]: New session 22 of user core. Jan 29 10:50:58.573110 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 29 10:50:59.045000 sshd[4910]: Connection closed by 139.178.89.65 port 60048 Jan 29 10:50:59.045827 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:59.052993 systemd[1]: sshd@21-172.31.20.65:22-139.178.89.65:60048.service: Deactivated successfully. Jan 29 10:50:59.058444 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 10:50:59.060356 systemd-logind[1923]: Session 22 logged out. Waiting for processes to exit. Jan 29 10:50:59.063050 systemd-logind[1923]: Removed session 22. Jan 29 10:50:59.086445 systemd[1]: Started sshd@22-172.31.20.65:22-139.178.89.65:60054.service - OpenSSH per-connection server daemon (139.178.89.65:60054). Jan 29 10:50:59.281466 sshd[4919]: Accepted publickey for core from 139.178.89.65 port 60054 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:50:59.283981 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:50:59.292950 systemd-logind[1923]: New session 23 of user core. Jan 29 10:50:59.301249 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 10:50:59.545506 sshd[4921]: Connection closed by 139.178.89.65 port 60054 Jan 29 10:50:59.544474 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Jan 29 10:50:59.551360 systemd[1]: sshd@22-172.31.20.65:22-139.178.89.65:60054.service: Deactivated successfully. Jan 29 10:50:59.556484 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 10:50:59.559572 systemd-logind[1923]: Session 23 logged out. Waiting for processes to exit. Jan 29 10:50:59.561730 systemd-logind[1923]: Removed session 23. Jan 29 10:51:04.582407 systemd[1]: Started sshd@23-172.31.20.65:22-139.178.89.65:58590.service - OpenSSH per-connection server daemon (139.178.89.65:58590). 
Jan 29 10:51:04.773980 sshd[4932]: Accepted publickey for core from 139.178.89.65 port 58590 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:51:04.776599 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:04.786295 systemd-logind[1923]: New session 24 of user core. Jan 29 10:51:04.797129 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 10:51:05.036507 sshd[4934]: Connection closed by 139.178.89.65 port 58590 Jan 29 10:51:05.036384 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:05.042166 systemd-logind[1923]: Session 24 logged out. Waiting for processes to exit. Jan 29 10:51:05.043019 systemd[1]: sshd@23-172.31.20.65:22-139.178.89.65:58590.service: Deactivated successfully. Jan 29 10:51:05.046370 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 10:51:05.051290 systemd-logind[1923]: Removed session 24. Jan 29 10:51:10.077394 systemd[1]: Started sshd@24-172.31.20.65:22-139.178.89.65:58602.service - OpenSSH per-connection server daemon (139.178.89.65:58602). Jan 29 10:51:10.269453 sshd[4949]: Accepted publickey for core from 139.178.89.65 port 58602 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:51:10.272529 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:10.281005 systemd-logind[1923]: New session 25 of user core. Jan 29 10:51:10.286131 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 10:51:10.521313 sshd[4951]: Connection closed by 139.178.89.65 port 58602 Jan 29 10:51:10.521128 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:10.529045 systemd[1]: sshd@24-172.31.20.65:22-139.178.89.65:58602.service: Deactivated successfully. Jan 29 10:51:10.532510 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 10:51:10.535569 systemd-logind[1923]: Session 25 logged out. 
Waiting for processes to exit. Jan 29 10:51:10.537697 systemd-logind[1923]: Removed session 25. Jan 29 10:51:15.563387 systemd[1]: Started sshd@25-172.31.20.65:22-139.178.89.65:46554.service - OpenSSH per-connection server daemon (139.178.89.65:46554). Jan 29 10:51:15.752234 sshd[4961]: Accepted publickey for core from 139.178.89.65 port 46554 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:51:15.754730 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:15.762784 systemd-logind[1923]: New session 26 of user core. Jan 29 10:51:15.773211 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 10:51:16.016014 sshd[4963]: Connection closed by 139.178.89.65 port 46554 Jan 29 10:51:16.016885 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:16.023185 systemd[1]: sshd@25-172.31.20.65:22-139.178.89.65:46554.service: Deactivated successfully. Jan 29 10:51:16.028162 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 10:51:16.029496 systemd-logind[1923]: Session 26 logged out. Waiting for processes to exit. Jan 29 10:51:16.031459 systemd-logind[1923]: Removed session 26. Jan 29 10:51:21.057408 systemd[1]: Started sshd@26-172.31.20.65:22-139.178.89.65:51996.service - OpenSSH per-connection server daemon (139.178.89.65:51996). Jan 29 10:51:21.252516 sshd[4976]: Accepted publickey for core from 139.178.89.65 port 51996 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:51:21.255100 sshd-session[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:21.263782 systemd-logind[1923]: New session 27 of user core. Jan 29 10:51:21.277151 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 29 10:51:21.522611 sshd[4978]: Connection closed by 139.178.89.65 port 51996 Jan 29 10:51:21.523565 sshd-session[4976]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:21.529772 systemd[1]: sshd@26-172.31.20.65:22-139.178.89.65:51996.service: Deactivated successfully. Jan 29 10:51:21.534741 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 10:51:21.537112 systemd-logind[1923]: Session 27 logged out. Waiting for processes to exit. Jan 29 10:51:21.539608 systemd-logind[1923]: Removed session 27. Jan 29 10:51:21.557547 systemd[1]: Started sshd@27-172.31.20.65:22-139.178.89.65:52004.service - OpenSSH per-connection server daemon (139.178.89.65:52004). Jan 29 10:51:21.760242 sshd[4989]: Accepted publickey for core from 139.178.89.65 port 52004 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:51:21.763300 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:21.772309 systemd-logind[1923]: New session 28 of user core. Jan 29 10:51:21.778120 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 29 10:51:24.747510 containerd[1944]: time="2025-01-29T10:51:24.747440014Z" level=info msg="StopContainer for \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\" with timeout 30 (s)" Jan 29 10:51:24.752364 containerd[1944]: time="2025-01-29T10:51:24.748743238Z" level=info msg="Stop container \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\" with signal terminated" Jan 29 10:51:24.773599 systemd[1]: cri-containerd-38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731.scope: Deactivated successfully. 
Jan 29 10:51:24.783066 containerd[1944]: time="2025-01-29T10:51:24.782990446Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 10:51:24.800291 containerd[1944]: time="2025-01-29T10:51:24.800070778Z" level=info msg="StopContainer for \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\" with timeout 2 (s)"
Jan 29 10:51:24.801015 containerd[1944]: time="2025-01-29T10:51:24.800963566Z" level=info msg="Stop container \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\" with signal terminated"
Jan 29 10:51:24.826377 systemd-networkd[1844]: lxc_health: Link DOWN
Jan 29 10:51:24.826396 systemd-networkd[1844]: lxc_health: Lost carrier
Jan 29 10:51:24.827981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731-rootfs.mount: Deactivated successfully.
Jan 29 10:51:24.852129 containerd[1944]: time="2025-01-29T10:51:24.852036034Z" level=info msg="shim disconnected" id=38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731 namespace=k8s.io
Jan 29 10:51:24.853262 containerd[1944]: time="2025-01-29T10:51:24.852899878Z" level=warning msg="cleaning up after shim disconnected" id=38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731 namespace=k8s.io
Jan 29 10:51:24.853262 containerd[1944]: time="2025-01-29T10:51:24.852957766Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:51:24.859184 systemd[1]: cri-containerd-a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23.scope: Deactivated successfully.
Jan 29 10:51:24.860436 systemd[1]: cri-containerd-a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23.scope: Consumed 15.296s CPU time.
Jan 29 10:51:24.894691 containerd[1944]: time="2025-01-29T10:51:24.894501767Z" level=info msg="StopContainer for \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\" returns successfully"
Jan 29 10:51:24.895651 containerd[1944]: time="2025-01-29T10:51:24.895573019Z" level=info msg="StopPodSandbox for \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\""
Jan 29 10:51:24.896383 containerd[1944]: time="2025-01-29T10:51:24.896200559Z" level=info msg="Container to stop \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 10:51:24.900729 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e-shm.mount: Deactivated successfully.
Jan 29 10:51:24.916524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23-rootfs.mount: Deactivated successfully.
Jan 29 10:51:24.922051 systemd[1]: cri-containerd-4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e.scope: Deactivated successfully.
Jan 29 10:51:24.929323 containerd[1944]: time="2025-01-29T10:51:24.929226527Z" level=info msg="shim disconnected" id=a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23 namespace=k8s.io
Jan 29 10:51:24.929782 containerd[1944]: time="2025-01-29T10:51:24.929537999Z" level=warning msg="cleaning up after shim disconnected" id=a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23 namespace=k8s.io
Jan 29 10:51:24.929782 containerd[1944]: time="2025-01-29T10:51:24.929638751Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:51:24.958286 containerd[1944]: time="2025-01-29T10:51:24.958133603Z" level=warning msg="cleanup warnings time=\"2025-01-29T10:51:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 10:51:24.964522 containerd[1944]: time="2025-01-29T10:51:24.964443647Z" level=info msg="StopContainer for \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\" returns successfully"
Jan 29 10:51:24.966310 containerd[1944]: time="2025-01-29T10:51:24.966005723Z" level=info msg="StopPodSandbox for \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\""
Jan 29 10:51:24.966310 containerd[1944]: time="2025-01-29T10:51:24.966069563Z" level=info msg="Container to stop \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 10:51:24.966310 containerd[1944]: time="2025-01-29T10:51:24.966096443Z" level=info msg="Container to stop \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 10:51:24.966310 containerd[1944]: time="2025-01-29T10:51:24.966117731Z" level=info msg="Container to stop \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 10:51:24.966310 containerd[1944]: time="2025-01-29T10:51:24.966138407Z" level=info msg="Container to stop \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 10:51:24.966310 containerd[1944]: time="2025-01-29T10:51:24.966158039Z" level=info msg="Container to stop \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 10:51:24.971141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3-shm.mount: Deactivated successfully.
Jan 29 10:51:24.983518 systemd[1]: cri-containerd-8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3.scope: Deactivated successfully.
Jan 29 10:51:24.994459 containerd[1944]: time="2025-01-29T10:51:24.994375799Z" level=info msg="shim disconnected" id=4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e namespace=k8s.io
Jan 29 10:51:24.994459 containerd[1944]: time="2025-01-29T10:51:24.994454627Z" level=warning msg="cleaning up after shim disconnected" id=4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e namespace=k8s.io
Jan 29 10:51:24.994774 containerd[1944]: time="2025-01-29T10:51:24.994476203Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:51:25.034934 containerd[1944]: time="2025-01-29T10:51:25.034795915Z" level=info msg="TearDown network for sandbox \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\" successfully"
Jan 29 10:51:25.036143 containerd[1944]: time="2025-01-29T10:51:25.035946835Z" level=info msg="StopPodSandbox for \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\" returns successfully"
Jan 29 10:51:25.046304 containerd[1944]: time="2025-01-29T10:51:25.046150339Z" level=info msg="shim disconnected" id=8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3 namespace=k8s.io
Jan 29 10:51:25.046304 containerd[1944]: time="2025-01-29T10:51:25.046231051Z" level=warning msg="cleaning up after shim disconnected" id=8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3 namespace=k8s.io
Jan 29 10:51:25.047155 containerd[1944]: time="2025-01-29T10:51:25.046253899Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:51:25.086178 containerd[1944]: time="2025-01-29T10:51:25.086114852Z" level=info msg="TearDown network for sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" successfully"
Jan 29 10:51:25.086178 containerd[1944]: time="2025-01-29T10:51:25.086166044Z" level=info msg="StopPodSandbox for \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" returns successfully"
Jan 29 10:51:25.158391 kubelet[3369]: I0129 10:51:25.157091 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-hostproc\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.158391 kubelet[3369]: I0129 10:51:25.157150 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-bpf-maps\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.158391 kubelet[3369]: I0129 10:51:25.157189 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-host-proc-sys-kernel\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.158391 kubelet[3369]: I0129 10:51:25.157230 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/049a4c68-0d52-4a41-932a-19a96137410b-hubble-tls\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.158391 kubelet[3369]: I0129 10:51:25.157268 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj9c5\" (UniqueName: \"kubernetes.io/projected/049a4c68-0d52-4a41-932a-19a96137410b-kube-api-access-zj9c5\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.158391 kubelet[3369]: I0129 10:51:25.157300 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-lib-modules\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.159268 kubelet[3369]: I0129 10:51:25.157331 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cilium-run\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.159268 kubelet[3369]: I0129 10:51:25.157367 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4512eab1-b85b-4afb-a89d-3663b27d2166-cilium-config-path\") pod \"4512eab1-b85b-4afb-a89d-3663b27d2166\" (UID: \"4512eab1-b85b-4afb-a89d-3663b27d2166\") "
Jan 29 10:51:25.159268 kubelet[3369]: I0129 10:51:25.157403 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zttrt\" (UniqueName: \"kubernetes.io/projected/4512eab1-b85b-4afb-a89d-3663b27d2166-kube-api-access-zttrt\") pod \"4512eab1-b85b-4afb-a89d-3663b27d2166\" (UID: \"4512eab1-b85b-4afb-a89d-3663b27d2166\") "
Jan 29 10:51:25.159268 kubelet[3369]: I0129 10:51:25.157437 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cni-path\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.159268 kubelet[3369]: I0129 10:51:25.157474 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/049a4c68-0d52-4a41-932a-19a96137410b-clustermesh-secrets\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.159268 kubelet[3369]: I0129 10:51:25.157508 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-host-proc-sys-net\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.159593 kubelet[3369]: I0129 10:51:25.157542 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-xtables-lock\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.159593 kubelet[3369]: I0129 10:51:25.157574 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cilium-cgroup\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.159593 kubelet[3369]: I0129 10:51:25.157613 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/049a4c68-0d52-4a41-932a-19a96137410b-cilium-config-path\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.159593 kubelet[3369]: I0129 10:51:25.157644 3369 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-etc-cni-netd\") pod \"049a4c68-0d52-4a41-932a-19a96137410b\" (UID: \"049a4c68-0d52-4a41-932a-19a96137410b\") "
Jan 29 10:51:25.159593 kubelet[3369]: I0129 10:51:25.157737 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.159593 kubelet[3369]: I0129 10:51:25.157797 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-hostproc" (OuterVolumeSpecName: "hostproc") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.159992 kubelet[3369]: I0129 10:51:25.157833 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.159992 kubelet[3369]: I0129 10:51:25.157916 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.159992 kubelet[3369]: I0129 10:51:25.157979 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cni-path" (OuterVolumeSpecName: "cni-path") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.159992 kubelet[3369]: I0129 10:51:25.159244 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.160247 kubelet[3369]: I0129 10:51:25.159335 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.161593 kubelet[3369]: I0129 10:51:25.161527 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.162032 kubelet[3369]: I0129 10:51:25.161774 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.162032 kubelet[3369]: I0129 10:51:25.161804 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 10:51:25.171056 kubelet[3369]: I0129 10:51:25.171001 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049a4c68-0d52-4a41-932a-19a96137410b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 10:51:25.173461 kubelet[3369]: I0129 10:51:25.173408 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049a4c68-0d52-4a41-932a-19a96137410b-kube-api-access-zj9c5" (OuterVolumeSpecName: "kube-api-access-zj9c5") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "kube-api-access-zj9c5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 10:51:25.175135 kubelet[3369]: I0129 10:51:25.175078 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4512eab1-b85b-4afb-a89d-3663b27d2166-kube-api-access-zttrt" (OuterVolumeSpecName: "kube-api-access-zttrt") pod "4512eab1-b85b-4afb-a89d-3663b27d2166" (UID: "4512eab1-b85b-4afb-a89d-3663b27d2166"). InnerVolumeSpecName "kube-api-access-zttrt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 10:51:25.176068 kubelet[3369]: I0129 10:51:25.175964 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049a4c68-0d52-4a41-932a-19a96137410b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 10:51:25.176729 kubelet[3369]: I0129 10:51:25.176670 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4512eab1-b85b-4afb-a89d-3663b27d2166-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4512eab1-b85b-4afb-a89d-3663b27d2166" (UID: "4512eab1-b85b-4afb-a89d-3663b27d2166"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 10:51:25.179181 kubelet[3369]: I0129 10:51:25.179112 3369 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/049a4c68-0d52-4a41-932a-19a96137410b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "049a4c68-0d52-4a41-932a-19a96137410b" (UID: "049a4c68-0d52-4a41-932a-19a96137410b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 10:51:25.253042 systemd[1]: Removed slice kubepods-burstable-pod049a4c68_0d52_4a41_932a_19a96137410b.slice - libcontainer container kubepods-burstable-pod049a4c68_0d52_4a41_932a_19a96137410b.slice.
Jan 29 10:51:25.253270 systemd[1]: kubepods-burstable-pod049a4c68_0d52_4a41_932a_19a96137410b.slice: Consumed 15.453s CPU time.
Jan 29 10:51:25.256042 systemd[1]: Removed slice kubepods-besteffort-pod4512eab1_b85b_4afb_a89d_3663b27d2166.slice - libcontainer container kubepods-besteffort-pod4512eab1_b85b_4afb_a89d_3663b27d2166.slice.
Jan 29 10:51:25.258568 kubelet[3369]: I0129 10:51:25.258507 3369 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cni-path\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.258568 kubelet[3369]: I0129 10:51:25.258560 3369 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/049a4c68-0d52-4a41-932a-19a96137410b-clustermesh-secrets\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.258772 kubelet[3369]: I0129 10:51:25.258591 3369 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-host-proc-sys-net\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.258772 kubelet[3369]: I0129 10:51:25.258615 3369 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zttrt\" (UniqueName: \"kubernetes.io/projected/4512eab1-b85b-4afb-a89d-3663b27d2166-kube-api-access-zttrt\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.258772 kubelet[3369]: I0129 10:51:25.258659 3369 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-xtables-lock\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.258772 kubelet[3369]: I0129 10:51:25.258684 3369 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cilium-cgroup\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.258772 kubelet[3369]: I0129 10:51:25.258706 3369 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/049a4c68-0d52-4a41-932a-19a96137410b-cilium-config-path\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.258772 kubelet[3369]: I0129 10:51:25.258726 3369 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-etc-cni-netd\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.258772 kubelet[3369]: I0129 10:51:25.258747 3369 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-hostproc\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.258772 kubelet[3369]: I0129 10:51:25.258765 3369 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-bpf-maps\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.259243 kubelet[3369]: I0129 10:51:25.258784 3369 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-host-proc-sys-kernel\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.259243 kubelet[3369]: I0129 10:51:25.258803 3369 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/049a4c68-0d52-4a41-932a-19a96137410b-hubble-tls\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.259243 kubelet[3369]: I0129 10:51:25.258821 3369 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zj9c5\" (UniqueName: \"kubernetes.io/projected/049a4c68-0d52-4a41-932a-19a96137410b-kube-api-access-zj9c5\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.259243 kubelet[3369]: I0129 10:51:25.258839 3369 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-lib-modules\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.259243 kubelet[3369]: I0129 10:51:25.258901 3369 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/049a4c68-0d52-4a41-932a-19a96137410b-cilium-run\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.259243 kubelet[3369]: I0129 10:51:25.258923 3369 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4512eab1-b85b-4afb-a89d-3663b27d2166-cilium-config-path\") on node \"ip-172-31-20-65\" DevicePath \"\""
Jan 29 10:51:25.423516 kubelet[3369]: E0129 10:51:25.422223 3369 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 10:51:25.678670 kubelet[3369]: I0129 10:51:25.678061 3369 scope.go:117] "RemoveContainer" containerID="38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731"
Jan 29 10:51:25.683281 containerd[1944]: time="2025-01-29T10:51:25.682318222Z" level=info msg="RemoveContainer for \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\""
Jan 29 10:51:25.696161 containerd[1944]: time="2025-01-29T10:51:25.695207651Z" level=info msg="RemoveContainer for \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\" returns successfully"
Jan 29 10:51:25.696353 kubelet[3369]: I0129 10:51:25.695708 3369 scope.go:117] "RemoveContainer" containerID="38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731"
Jan 29 10:51:25.697489 containerd[1944]: time="2025-01-29T10:51:25.697358807Z" level=error msg="ContainerStatus for \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\": not found"
Jan 29 10:51:25.698044 kubelet[3369]: E0129 10:51:25.697742 3369 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\": not found" containerID="38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731"
Jan 29 10:51:25.698044 kubelet[3369]: I0129 10:51:25.697794 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731"} err="failed to get container status \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\": rpc error: code = NotFound desc = an error occurred when try to find container \"38a6509b879958e23649f0011e54eda234e63da41091b8af5a46341697c46731\": not found"
Jan 29 10:51:25.698044 kubelet[3369]: I0129 10:51:25.697963 3369 scope.go:117] "RemoveContainer" containerID="a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23"
Jan 29 10:51:25.705406 containerd[1944]: time="2025-01-29T10:51:25.704926535Z" level=info msg="RemoveContainer for \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\""
Jan 29 10:51:25.712324 containerd[1944]: time="2025-01-29T10:51:25.712011923Z" level=info msg="RemoveContainer for \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\" returns successfully"
Jan 29 10:51:25.714334 kubelet[3369]: I0129 10:51:25.713186 3369 scope.go:117] "RemoveContainer" containerID="e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe"
Jan 29 10:51:25.725065 containerd[1944]: time="2025-01-29T10:51:25.723279683Z" level=info msg="RemoveContainer for \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\""
Jan 29 10:51:25.735138 containerd[1944]: time="2025-01-29T10:51:25.734809583Z" level=info msg="RemoveContainer for \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\" returns successfully"
Jan 29 10:51:25.738647 kubelet[3369]: I0129 10:51:25.736086 3369 scope.go:117] "RemoveContainer" containerID="3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed"
Jan 29 10:51:25.738373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e-rootfs.mount: Deactivated successfully.
Jan 29 10:51:25.738569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3-rootfs.mount: Deactivated successfully.
Jan 29 10:51:25.738705 systemd[1]: var-lib-kubelet-pods-4512eab1\x2db85b\x2d4afb\x2da89d\x2d3663b27d2166-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzttrt.mount: Deactivated successfully.
Jan 29 10:51:25.738874 systemd[1]: var-lib-kubelet-pods-049a4c68\x2d0d52\x2d4a41\x2d932a\x2d19a96137410b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzj9c5.mount: Deactivated successfully.
Jan 29 10:51:25.740417 systemd[1]: var-lib-kubelet-pods-049a4c68\x2d0d52\x2d4a41\x2d932a\x2d19a96137410b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 10:51:25.741277 systemd[1]: var-lib-kubelet-pods-049a4c68\x2d0d52\x2d4a41\x2d932a\x2d19a96137410b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 10:51:25.744367 containerd[1944]: time="2025-01-29T10:51:25.744112067Z" level=info msg="RemoveContainer for \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\""
Jan 29 10:51:25.756944 containerd[1944]: time="2025-01-29T10:51:25.756870143Z" level=info msg="RemoveContainer for \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\" returns successfully"
Jan 29 10:51:25.757453 kubelet[3369]: I0129 10:51:25.757213 3369 scope.go:117] "RemoveContainer" containerID="79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac"
Jan 29 10:51:25.759323 containerd[1944]: time="2025-01-29T10:51:25.759270143Z" level=info msg="RemoveContainer for \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\""
Jan 29 10:51:25.768453 containerd[1944]: time="2025-01-29T10:51:25.768379547Z" level=info msg="RemoveContainer for \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\" returns successfully"
Jan 29 10:51:25.769477 kubelet[3369]: I0129 10:51:25.768687 3369 scope.go:117] "RemoveContainer" containerID="e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5"
Jan 29 10:51:25.770834 containerd[1944]: time="2025-01-29T10:51:25.770790179Z" level=info msg="RemoveContainer for \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\""
Jan 29 10:51:25.777721 containerd[1944]: time="2025-01-29T10:51:25.777670139Z" level=info msg="RemoveContainer for \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\" returns successfully"
Jan 29 10:51:25.778248 kubelet[3369]: I0129 10:51:25.778204 3369 scope.go:117] "RemoveContainer" containerID="a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23"
Jan 29 10:51:25.778669 containerd[1944]: time="2025-01-29T10:51:25.778620983Z" level=error msg="ContainerStatus for \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\": not found"
Jan 29 10:51:25.779590 kubelet[3369]: E0129 10:51:25.779369 3369 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\": not found" containerID="a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23"
Jan 29 10:51:25.779590 kubelet[3369]: I0129 10:51:25.779421 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23"} err="failed to get container status \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9a2f57367c1191d0381fcd93a5429047eb151b11f585557548f30b4b0ce7c23\": not found"
Jan 29 10:51:25.779590 kubelet[3369]: I0129 10:51:25.779462 3369 scope.go:117] "RemoveContainer" containerID="e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe"
Jan 29 10:51:25.779983 containerd[1944]: time="2025-01-29T10:51:25.779780243Z" level=error msg="ContainerStatus for \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\": not found"
Jan 29 10:51:25.780342 kubelet[3369]: E0129 10:51:25.780173 3369 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\": not found" containerID="e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe"
Jan 29 10:51:25.780342 kubelet[3369]: I0129 10:51:25.780223 3369 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"containerd","ID":"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe"} err="failed to get container status \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\": rpc error: code = NotFound desc = an error occurred when try to find container \"e06c7c0725784f0cd574986621d2079b885da1c65968c551f07390547c487ebe\": not found" Jan 29 10:51:25.780342 kubelet[3369]: I0129 10:51:25.780261 3369 scope.go:117] "RemoveContainer" containerID="3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed" Jan 29 10:51:25.780711 containerd[1944]: time="2025-01-29T10:51:25.780575915Z" level=error msg="ContainerStatus for \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\": not found" Jan 29 10:51:25.781137 kubelet[3369]: E0129 10:51:25.780922 3369 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\": not found" containerID="3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed" Jan 29 10:51:25.781137 kubelet[3369]: I0129 10:51:25.780989 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed"} err="failed to get container status \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c231911709ec7ece24c45b1f14c8b7d285909cf6686008c12974cf87463d8ed\": not found" Jan 29 10:51:25.781137 kubelet[3369]: I0129 10:51:25.781021 3369 scope.go:117] "RemoveContainer" containerID="79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac" Jan 29 10:51:25.781422 
containerd[1944]: time="2025-01-29T10:51:25.781371215Z" level=error msg="ContainerStatus for \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\": not found" Jan 29 10:51:25.781674 kubelet[3369]: E0129 10:51:25.781631 3369 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\": not found" containerID="79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac" Jan 29 10:51:25.781751 kubelet[3369]: I0129 10:51:25.781680 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac"} err="failed to get container status \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"79bff55e9bad339cb1bb3ab9f9d4ae59e698b930d795251d39eb7eec0929e3ac\": not found" Jan 29 10:51:25.781751 kubelet[3369]: I0129 10:51:25.781713 3369 scope.go:117] "RemoveContainer" containerID="e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5" Jan 29 10:51:25.782121 containerd[1944]: time="2025-01-29T10:51:25.782065055Z" level=error msg="ContainerStatus for \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\": not found" Jan 29 10:51:25.782436 kubelet[3369]: E0129 10:51:25.782365 3369 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\": not found" containerID="e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5" Jan 29 10:51:25.782524 kubelet[3369]: I0129 10:51:25.782444 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5"} err="failed to get container status \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4dd4b15bfa3a5b556383b5ae4d34cc7868f22994796c9bd02d1014761a434d5\": not found" Jan 29 10:51:26.657511 sshd[4991]: Connection closed by 139.178.89.65 port 52004 Jan 29 10:51:26.658464 sshd-session[4989]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:26.665516 systemd[1]: sshd@27-172.31.20.65:22-139.178.89.65:52004.service: Deactivated successfully. Jan 29 10:51:26.669815 systemd[1]: session-28.scope: Deactivated successfully. Jan 29 10:51:26.670680 systemd[1]: session-28.scope: Consumed 2.184s CPU time. Jan 29 10:51:26.672131 systemd-logind[1923]: Session 28 logged out. Waiting for processes to exit. Jan 29 10:51:26.674294 systemd-logind[1923]: Removed session 28. Jan 29 10:51:26.709965 systemd[1]: Started sshd@28-172.31.20.65:22-139.178.89.65:52012.service - OpenSSH per-connection server daemon (139.178.89.65:52012). 
Jan 29 10:51:26.850212 ntpd[1914]: Deleting interface #12 lxc_health, fe80::8a4:51ff:fe5b:4caa%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Jan 29 10:51:26.904897 sshd[5157]: Accepted publickey for core from 139.178.89.65 port 52012 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:51:26.907848 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:26.915628 systemd-logind[1923]: New session 29 of user core. Jan 29 10:51:26.926111 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 29 10:51:27.242972 kubelet[3369]: I0129 10:51:27.242625 3369 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049a4c68-0d52-4a41-932a-19a96137410b" path="/var/lib/kubelet/pods/049a4c68-0d52-4a41-932a-19a96137410b/volumes" Jan 29 10:51:27.245512 kubelet[3369]: I0129 10:51:27.244952 3369 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4512eab1-b85b-4afb-a89d-3663b27d2166" path="/var/lib/kubelet/pods/4512eab1-b85b-4afb-a89d-3663b27d2166/volumes" Jan 29 10:51:27.659548 kubelet[3369]: I0129 10:51:27.659296 3369 setters.go:580] "Node became not ready" node="ip-172-31-20-65" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T10:51:27Z","lastTransitionTime":"2025-01-29T10:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 10:51:28.604016 sshd[5159]: Connection closed by 139.178.89.65 port 52012 Jan 29 10:51:28.608544 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:28.618699 systemd[1]: 
sshd@28-172.31.20.65:22-139.178.89.65:52012.service: Deactivated successfully. Jan 29 10:51:28.622811 kubelet[3369]: I0129 10:51:28.622734 3369 topology_manager.go:215] "Topology Admit Handler" podUID="7a9cc86a-3fc4-4343-b471-3bf2a082edb2" podNamespace="kube-system" podName="cilium-4f2mp" Jan 29 10:51:28.623447 kubelet[3369]: E0129 10:51:28.622881 3369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="049a4c68-0d52-4a41-932a-19a96137410b" containerName="mount-cgroup" Jan 29 10:51:28.623447 kubelet[3369]: E0129 10:51:28.622904 3369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="049a4c68-0d52-4a41-932a-19a96137410b" containerName="apply-sysctl-overwrites" Jan 29 10:51:28.623447 kubelet[3369]: E0129 10:51:28.622923 3369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="049a4c68-0d52-4a41-932a-19a96137410b" containerName="mount-bpf-fs" Jan 29 10:51:28.623447 kubelet[3369]: E0129 10:51:28.622938 3369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4512eab1-b85b-4afb-a89d-3663b27d2166" containerName="cilium-operator" Jan 29 10:51:28.623447 kubelet[3369]: E0129 10:51:28.622982 3369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="049a4c68-0d52-4a41-932a-19a96137410b" containerName="cilium-agent" Jan 29 10:51:28.623447 kubelet[3369]: E0129 10:51:28.622998 3369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="049a4c68-0d52-4a41-932a-19a96137410b" containerName="clean-cilium-state" Jan 29 10:51:28.626702 kubelet[3369]: I0129 10:51:28.625961 3369 memory_manager.go:354] "RemoveStaleState removing state" podUID="049a4c68-0d52-4a41-932a-19a96137410b" containerName="cilium-agent" Jan 29 10:51:28.626702 kubelet[3369]: I0129 10:51:28.626024 3369 memory_manager.go:354] "RemoveStaleState removing state" podUID="4512eab1-b85b-4afb-a89d-3663b27d2166" containerName="cilium-operator" Jan 29 10:51:28.632375 systemd[1]: session-29.scope: Deactivated successfully. 
Jan 29 10:51:28.632778 systemd[1]: session-29.scope: Consumed 1.476s CPU time. Jan 29 10:51:28.635644 systemd-logind[1923]: Session 29 logged out. Waiting for processes to exit. Jan 29 10:51:28.665422 systemd[1]: Started sshd@29-172.31.20.65:22-139.178.89.65:52026.service - OpenSSH per-connection server daemon (139.178.89.65:52026). Jan 29 10:51:28.668331 systemd-logind[1923]: Removed session 29. Jan 29 10:51:28.676690 kubelet[3369]: I0129 10:51:28.676648 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-cilium-run\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.677008 kubelet[3369]: I0129 10:51:28.676956 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-host-proc-sys-kernel\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.677246 kubelet[3369]: I0129 10:51:28.677200 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-cilium-config-path\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.677438 kubelet[3369]: I0129 10:51:28.677412 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-cilium-ipsec-secrets\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.678240 kubelet[3369]: I0129 10:51:28.678176 3369 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-bpf-maps\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.678781 kubelet[3369]: I0129 10:51:28.678736 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-host-proc-sys-net\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.681007 kubelet[3369]: I0129 10:51:28.680950 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-hubble-tls\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.681813 kubelet[3369]: I0129 10:51:28.681222 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-cni-path\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.681813 kubelet[3369]: I0129 10:51:28.681271 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-etc-cni-netd\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.681813 kubelet[3369]: I0129 10:51:28.681309 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-lib-modules\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.681813 kubelet[3369]: I0129 10:51:28.681346 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-clustermesh-secrets\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.681813 kubelet[3369]: I0129 10:51:28.681392 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqsls\" (UniqueName: \"kubernetes.io/projected/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-kube-api-access-hqsls\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.681813 kubelet[3369]: I0129 10:51:28.681433 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-hostproc\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.682281 kubelet[3369]: I0129 10:51:28.681468 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-cilium-cgroup\") pod \"cilium-4f2mp\" (UID: \"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.682281 kubelet[3369]: I0129 10:51:28.681504 3369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a9cc86a-3fc4-4343-b471-3bf2a082edb2-xtables-lock\") pod \"cilium-4f2mp\" (UID: 
\"7a9cc86a-3fc4-4343-b471-3bf2a082edb2\") " pod="kube-system/cilium-4f2mp" Jan 29 10:51:28.686046 systemd[1]: Created slice kubepods-burstable-pod7a9cc86a_3fc4_4343_b471_3bf2a082edb2.slice - libcontainer container kubepods-burstable-pod7a9cc86a_3fc4_4343_b471_3bf2a082edb2.slice. Jan 29 10:51:28.899232 sshd[5169]: Accepted publickey for core from 139.178.89.65 port 52026 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:51:28.901785 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:28.909370 systemd-logind[1923]: New session 30 of user core. Jan 29 10:51:28.920148 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 29 10:51:29.000937 containerd[1944]: time="2025-01-29T10:51:29.000824783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4f2mp,Uid:7a9cc86a-3fc4-4343-b471-3bf2a082edb2,Namespace:kube-system,Attempt:0,}" Jan 29 10:51:29.046773 sshd[5175]: Connection closed by 139.178.89.65 port 52026 Jan 29 10:51:29.046667 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Jan 29 10:51:29.051919 containerd[1944]: time="2025-01-29T10:51:29.050213615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:51:29.051919 containerd[1944]: time="2025-01-29T10:51:29.050321159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:51:29.051919 containerd[1944]: time="2025-01-29T10:51:29.050357471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:51:29.051919 containerd[1944]: time="2025-01-29T10:51:29.050509955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:51:29.055623 systemd[1]: sshd@29-172.31.20.65:22-139.178.89.65:52026.service: Deactivated successfully. Jan 29 10:51:29.070374 systemd[1]: session-30.scope: Deactivated successfully. Jan 29 10:51:29.075719 systemd-logind[1923]: Session 30 logged out. Waiting for processes to exit. Jan 29 10:51:29.107282 systemd-logind[1923]: Removed session 30. Jan 29 10:51:29.114624 systemd[1]: Started cri-containerd-c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786.scope - libcontainer container c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786. Jan 29 10:51:29.119693 systemd[1]: Started sshd@30-172.31.20.65:22-139.178.89.65:52036.service - OpenSSH per-connection server daemon (139.178.89.65:52036). Jan 29 10:51:29.178443 containerd[1944]: time="2025-01-29T10:51:29.178155012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4f2mp,Uid:7a9cc86a-3fc4-4343-b471-3bf2a082edb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\"" Jan 29 10:51:29.186823 containerd[1944]: time="2025-01-29T10:51:29.186387312Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 10:51:29.214429 containerd[1944]: time="2025-01-29T10:51:29.214349364Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"33d91e6457e59208ef7505097647c52e219eb94c6a4528547420730b43039052\"" Jan 29 10:51:29.216354 containerd[1944]: time="2025-01-29T10:51:29.216289644Z" level=info msg="StartContainer for \"33d91e6457e59208ef7505097647c52e219eb94c6a4528547420730b43039052\"" Jan 29 10:51:29.238536 kubelet[3369]: E0129 10:51:29.237831 3369 pod_workers.go:1298] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bsz9c" podUID="bbfe1c99-1869-4639-8072-25686bd88732" Jan 29 10:51:29.276161 systemd[1]: Started cri-containerd-33d91e6457e59208ef7505097647c52e219eb94c6a4528547420730b43039052.scope - libcontainer container 33d91e6457e59208ef7505097647c52e219eb94c6a4528547420730b43039052. Jan 29 10:51:29.325959 containerd[1944]: time="2025-01-29T10:51:29.325896745Z" level=info msg="StartContainer for \"33d91e6457e59208ef7505097647c52e219eb94c6a4528547420730b43039052\" returns successfully" Jan 29 10:51:29.328824 sshd[5211]: Accepted publickey for core from 139.178.89.65 port 52036 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:51:29.331484 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:51:29.345881 systemd-logind[1923]: New session 31 of user core. Jan 29 10:51:29.356707 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 29 10:51:29.357578 systemd[1]: cri-containerd-33d91e6457e59208ef7505097647c52e219eb94c6a4528547420730b43039052.scope: Deactivated successfully. 
Jan 29 10:51:29.413157 containerd[1944]: time="2025-01-29T10:51:29.412893625Z" level=info msg="shim disconnected" id=33d91e6457e59208ef7505097647c52e219eb94c6a4528547420730b43039052 namespace=k8s.io Jan 29 10:51:29.413157 containerd[1944]: time="2025-01-29T10:51:29.412966525Z" level=warning msg="cleaning up after shim disconnected" id=33d91e6457e59208ef7505097647c52e219eb94c6a4528547420730b43039052 namespace=k8s.io Jan 29 10:51:29.413157 containerd[1944]: time="2025-01-29T10:51:29.412988077Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:51:29.727345 containerd[1944]: time="2025-01-29T10:51:29.727243263Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 10:51:29.756595 containerd[1944]: time="2025-01-29T10:51:29.756207735Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f4b77510b4471e0d0d79492b8866e8540a7583cee6c5db0ef6f70f3e576d9fa\"" Jan 29 10:51:29.758691 containerd[1944]: time="2025-01-29T10:51:29.757802811Z" level=info msg="StartContainer for \"2f4b77510b4471e0d0d79492b8866e8540a7583cee6c5db0ef6f70f3e576d9fa\"" Jan 29 10:51:29.807243 systemd[1]: Started cri-containerd-2f4b77510b4471e0d0d79492b8866e8540a7583cee6c5db0ef6f70f3e576d9fa.scope - libcontainer container 2f4b77510b4471e0d0d79492b8866e8540a7583cee6c5db0ef6f70f3e576d9fa. Jan 29 10:51:29.862323 containerd[1944]: time="2025-01-29T10:51:29.861827427Z" level=info msg="StartContainer for \"2f4b77510b4471e0d0d79492b8866e8540a7583cee6c5db0ef6f70f3e576d9fa\" returns successfully" Jan 29 10:51:29.875823 systemd[1]: cri-containerd-2f4b77510b4471e0d0d79492b8866e8540a7583cee6c5db0ef6f70f3e576d9fa.scope: Deactivated successfully. 
Jan 29 10:51:29.912584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f4b77510b4471e0d0d79492b8866e8540a7583cee6c5db0ef6f70f3e576d9fa-rootfs.mount: Deactivated successfully. Jan 29 10:51:29.917228 containerd[1944]: time="2025-01-29T10:51:29.917138176Z" level=info msg="shim disconnected" id=2f4b77510b4471e0d0d79492b8866e8540a7583cee6c5db0ef6f70f3e576d9fa namespace=k8s.io Jan 29 10:51:29.917228 containerd[1944]: time="2025-01-29T10:51:29.917215072Z" level=warning msg="cleaning up after shim disconnected" id=2f4b77510b4471e0d0d79492b8866e8540a7583cee6c5db0ef6f70f3e576d9fa namespace=k8s.io Jan 29 10:51:29.917476 containerd[1944]: time="2025-01-29T10:51:29.917237224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:51:30.424074 kubelet[3369]: E0129 10:51:30.424012 3369 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 10:51:30.733879 containerd[1944]: time="2025-01-29T10:51:30.733605256Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 10:51:30.774744 containerd[1944]: time="2025-01-29T10:51:30.774670588Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eba6bf72d6ac7da25d3b064026566e62a58ef5ef216afbac5ca6ece27dde9081\"" Jan 29 10:51:30.775942 containerd[1944]: time="2025-01-29T10:51:30.775848244Z" level=info msg="StartContainer for \"eba6bf72d6ac7da25d3b064026566e62a58ef5ef216afbac5ca6ece27dde9081\"" Jan 29 10:51:30.843215 systemd[1]: Started cri-containerd-eba6bf72d6ac7da25d3b064026566e62a58ef5ef216afbac5ca6ece27dde9081.scope - libcontainer container eba6bf72d6ac7da25d3b064026566e62a58ef5ef216afbac5ca6ece27dde9081. 
Jan 29 10:51:30.908685 containerd[1944]: time="2025-01-29T10:51:30.908592568Z" level=info msg="StartContainer for \"eba6bf72d6ac7da25d3b064026566e62a58ef5ef216afbac5ca6ece27dde9081\" returns successfully" Jan 29 10:51:30.910840 systemd[1]: cri-containerd-eba6bf72d6ac7da25d3b064026566e62a58ef5ef216afbac5ca6ece27dde9081.scope: Deactivated successfully. Jan 29 10:51:30.951973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eba6bf72d6ac7da25d3b064026566e62a58ef5ef216afbac5ca6ece27dde9081-rootfs.mount: Deactivated successfully. Jan 29 10:51:30.960590 containerd[1944]: time="2025-01-29T10:51:30.960435221Z" level=info msg="shim disconnected" id=eba6bf72d6ac7da25d3b064026566e62a58ef5ef216afbac5ca6ece27dde9081 namespace=k8s.io Jan 29 10:51:30.960590 containerd[1944]: time="2025-01-29T10:51:30.960571313Z" level=warning msg="cleaning up after shim disconnected" id=eba6bf72d6ac7da25d3b064026566e62a58ef5ef216afbac5ca6ece27dde9081 namespace=k8s.io Jan 29 10:51:30.960590 containerd[1944]: time="2025-01-29T10:51:30.960594257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:51:30.981569 containerd[1944]: time="2025-01-29T10:51:30.981399737Z" level=warning msg="cleanup warnings time=\"2025-01-29T10:51:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 10:51:31.237384 kubelet[3369]: E0129 10:51:31.236060 3369 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bsz9c" podUID="bbfe1c99-1869-4639-8072-25686bd88732" Jan 29 10:51:31.739358 containerd[1944]: time="2025-01-29T10:51:31.738608237Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for 
container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 10:51:31.773335 containerd[1944]: time="2025-01-29T10:51:31.773245109Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab6483ab427a82fdd9df56f70f6dcce1e352c8173981f3f928f71432c0d8ac1b\"" Jan 29 10:51:31.774300 containerd[1944]: time="2025-01-29T10:51:31.774163625Z" level=info msg="StartContainer for \"ab6483ab427a82fdd9df56f70f6dcce1e352c8173981f3f928f71432c0d8ac1b\"" Jan 29 10:51:31.851206 systemd[1]: Started cri-containerd-ab6483ab427a82fdd9df56f70f6dcce1e352c8173981f3f928f71432c0d8ac1b.scope - libcontainer container ab6483ab427a82fdd9df56f70f6dcce1e352c8173981f3f928f71432c0d8ac1b. Jan 29 10:51:31.904667 systemd[1]: cri-containerd-ab6483ab427a82fdd9df56f70f6dcce1e352c8173981f3f928f71432c0d8ac1b.scope: Deactivated successfully. Jan 29 10:51:31.909918 containerd[1944]: time="2025-01-29T10:51:31.909811565Z" level=info msg="StartContainer for \"ab6483ab427a82fdd9df56f70f6dcce1e352c8173981f3f928f71432c0d8ac1b\" returns successfully" Jan 29 10:51:31.969839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab6483ab427a82fdd9df56f70f6dcce1e352c8173981f3f928f71432c0d8ac1b-rootfs.mount: Deactivated successfully. 
Jan 29 10:51:31.979375 containerd[1944]: time="2025-01-29T10:51:31.979277934Z" level=info msg="shim disconnected" id=ab6483ab427a82fdd9df56f70f6dcce1e352c8173981f3f928f71432c0d8ac1b namespace=k8s.io Jan 29 10:51:31.979375 containerd[1944]: time="2025-01-29T10:51:31.979364718Z" level=warning msg="cleaning up after shim disconnected" id=ab6483ab427a82fdd9df56f70f6dcce1e352c8173981f3f928f71432c0d8ac1b namespace=k8s.io Jan 29 10:51:31.982067 containerd[1944]: time="2025-01-29T10:51:31.979388706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:51:32.750122 containerd[1944]: time="2025-01-29T10:51:32.750068442Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 10:51:32.788781 containerd[1944]: time="2025-01-29T10:51:32.788702610Z" level=info msg="CreateContainer within sandbox \"c3112121c31684c7e894e3d34529974a74bd91747c4e84ca56122967b5e75786\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8604556cda09f17502e190ebd8d2ed51f25bf5f1aaeefe018c2a01fdb81d3aad\"" Jan 29 10:51:32.791185 containerd[1944]: time="2025-01-29T10:51:32.789990738Z" level=info msg="StartContainer for \"8604556cda09f17502e190ebd8d2ed51f25bf5f1aaeefe018c2a01fdb81d3aad\"" Jan 29 10:51:32.852724 systemd[1]: run-containerd-runc-k8s.io-8604556cda09f17502e190ebd8d2ed51f25bf5f1aaeefe018c2a01fdb81d3aad-runc.n54q3V.mount: Deactivated successfully. Jan 29 10:51:32.866156 systemd[1]: Started cri-containerd-8604556cda09f17502e190ebd8d2ed51f25bf5f1aaeefe018c2a01fdb81d3aad.scope - libcontainer container 8604556cda09f17502e190ebd8d2ed51f25bf5f1aaeefe018c2a01fdb81d3aad. 
Jan 29 10:51:32.924480 containerd[1944]: time="2025-01-29T10:51:32.924412098Z" level=info msg="StartContainer for \"8604556cda09f17502e190ebd8d2ed51f25bf5f1aaeefe018c2a01fdb81d3aad\" returns successfully"
Jan 29 10:51:33.238365 kubelet[3369]: E0129 10:51:33.237292 3369 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bsz9c" podUID="bbfe1c99-1869-4639-8072-25686bd88732"
Jan 29 10:51:33.746183 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 10:51:33.789173 kubelet[3369]: I0129 10:51:33.789069 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4f2mp" podStartSLOduration=5.789045919 podStartE2EDuration="5.789045919s" podCreationTimestamp="2025-01-29 10:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:51:33.788717407 +0000 UTC m=+118.914409540" watchObservedRunningTime="2025-01-29 10:51:33.789045919 +0000 UTC m=+118.914738040"
Jan 29 10:51:35.157545 containerd[1944]: time="2025-01-29T10:51:35.157079754Z" level=info msg="StopPodSandbox for \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\""
Jan 29 10:51:35.157545 containerd[1944]: time="2025-01-29T10:51:35.157218198Z" level=info msg="TearDown network for sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" successfully"
Jan 29 10:51:35.157545 containerd[1944]: time="2025-01-29T10:51:35.157241046Z" level=info msg="StopPodSandbox for \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" returns successfully"
Jan 29 10:51:35.157545 containerd[1944]: time="2025-01-29T10:51:35.157883058Z" level=info msg="RemovePodSandbox for \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\""
Jan 29 10:51:35.157545 containerd[1944]: time="2025-01-29T10:51:35.157923810Z" level=info msg="Forcibly stopping sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\""
Jan 29 10:51:35.157545 containerd[1944]: time="2025-01-29T10:51:35.158015334Z" level=info msg="TearDown network for sandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" successfully"
Jan 29 10:51:35.164830 containerd[1944]: time="2025-01-29T10:51:35.164748426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:51:35.165029 containerd[1944]: time="2025-01-29T10:51:35.164881974Z" level=info msg="RemovePodSandbox \"8f3fb6fcba4419b2aab05883468bddc902bb2f0911f809e3ef3102dd2e21d5c3\" returns successfully"
Jan 29 10:51:35.165908 containerd[1944]: time="2025-01-29T10:51:35.165788142Z" level=info msg="StopPodSandbox for \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\""
Jan 29 10:51:35.166045 containerd[1944]: time="2025-01-29T10:51:35.165964278Z" level=info msg="TearDown network for sandbox \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\" successfully"
Jan 29 10:51:35.166045 containerd[1944]: time="2025-01-29T10:51:35.165989034Z" level=info msg="StopPodSandbox for \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\" returns successfully"
Jan 29 10:51:35.166764 containerd[1944]: time="2025-01-29T10:51:35.166648590Z" level=info msg="RemovePodSandbox for \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\""
Jan 29 10:51:35.166764 containerd[1944]: time="2025-01-29T10:51:35.166697622Z" level=info msg="Forcibly stopping sandbox \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\""
Jan 29 10:51:35.167078 containerd[1944]: time="2025-01-29T10:51:35.166796934Z" level=info msg="TearDown network for sandbox \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\" successfully"
Jan 29 10:51:35.172965 containerd[1944]: time="2025-01-29T10:51:35.172900794Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:51:35.173144 containerd[1944]: time="2025-01-29T10:51:35.173014734Z" level=info msg="RemovePodSandbox \"4ece82adfd48a1bad05ec8f8546b34a33ac6f5a5e92281499bb9674d0cd0c75e\" returns successfully"
Jan 29 10:51:35.236021 kubelet[3369]: E0129 10:51:35.235924 3369 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-7mdfh" podUID="c98dfefe-1da3-40ad-9674-27f2715b03a6"
Jan 29 10:51:35.238740 kubelet[3369]: E0129 10:51:35.238304 3369 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bsz9c" podUID="bbfe1c99-1869-4639-8072-25686bd88732"
Jan 29 10:51:38.015961 systemd-networkd[1844]: lxc_health: Link UP
Jan 29 10:51:38.028370 (udev-worker)[5997]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 10:51:38.038613 systemd-networkd[1844]: lxc_health: Gained carrier
Jan 29 10:51:39.942099 systemd-networkd[1844]: lxc_health: Gained IPv6LL
Jan 29 10:51:40.621702 systemd[1]: run-containerd-runc-k8s.io-8604556cda09f17502e190ebd8d2ed51f25bf5f1aaeefe018c2a01fdb81d3aad-runc.ndYElv.mount: Deactivated successfully.
Jan 29 10:51:42.850226 ntpd[1914]: Listen normally on 15 lxc_health [fe80::9c0e:4ff:fee5:5c45%14]:123
Jan 29 10:51:42.854656 ntpd[1914]: 29 Jan 10:51:42 ntpd[1914]: Listen normally on 15 lxc_health [fe80::9c0e:4ff:fee5:5c45%14]:123
Jan 29 10:51:43.040361 kubelet[3369]: E0129 10:51:43.040290 3369 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57478->127.0.0.1:46053: write tcp 127.0.0.1:57478->127.0.0.1:46053: write: connection reset by peer
Jan 29 10:51:43.063892 sshd[5268]: Connection closed by 139.178.89.65 port 52036
Jan 29 10:51:43.065036 sshd-session[5211]: pam_unix(sshd:session): session closed for user core
Jan 29 10:51:43.072947 systemd-logind[1923]: Session 31 logged out. Waiting for processes to exit.
Jan 29 10:51:43.073410 systemd[1]: sshd@30-172.31.20.65:22-139.178.89.65:52036.service: Deactivated successfully.
Jan 29 10:51:43.080738 systemd[1]: session-31.scope: Deactivated successfully.
Jan 29 10:51:43.091308 systemd-logind[1923]: Removed session 31.
Jan 29 10:51:57.285024 systemd[1]: cri-containerd-c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538.scope: Deactivated successfully.
Jan 29 10:51:57.285547 systemd[1]: cri-containerd-c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538.scope: Consumed 4.857s CPU time, 23.0M memory peak, 0B memory swap peak.
Jan 29 10:51:57.324020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538-rootfs.mount: Deactivated successfully.
Jan 29 10:51:57.336771 containerd[1944]: time="2025-01-29T10:51:57.336696616Z" level=info msg="shim disconnected" id=c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538 namespace=k8s.io
Jan 29 10:51:57.337440 containerd[1944]: time="2025-01-29T10:51:57.337354960Z" level=warning msg="cleaning up after shim disconnected" id=c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538 namespace=k8s.io
Jan 29 10:51:57.337440 containerd[1944]: time="2025-01-29T10:51:57.337386100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:51:57.357337 containerd[1944]: time="2025-01-29T10:51:57.357250600Z" level=warning msg="cleanup warnings time=\"2025-01-29T10:51:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 10:51:57.820149 kubelet[3369]: I0129 10:51:57.820103 3369 scope.go:117] "RemoveContainer" containerID="c5411d96e116fd1e233dc8d20ef86954a29d8f49e75cf180d413d0fa50e63538"
Jan 29 10:51:57.830661 containerd[1944]: time="2025-01-29T10:51:57.830527182Z" level=info msg="CreateContainer within sandbox \"7f90dc97564964e497f74ed0508ecad84c97d9571d16675d4bce1dcf08d5c7d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 29 10:51:57.862508 containerd[1944]: time="2025-01-29T10:51:57.862430790Z" level=info msg="CreateContainer within sandbox \"7f90dc97564964e497f74ed0508ecad84c97d9571d16675d4bce1dcf08d5c7d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"de6b2de3ba6050c5329ab677114d76ba5e08527a63e4526fbd8fa9ffce8cb9c2\""
Jan 29 10:51:57.863321 containerd[1944]: time="2025-01-29T10:51:57.863271210Z" level=info msg="StartContainer for \"de6b2de3ba6050c5329ab677114d76ba5e08527a63e4526fbd8fa9ffce8cb9c2\""
Jan 29 10:51:57.919181 systemd[1]: Started cri-containerd-de6b2de3ba6050c5329ab677114d76ba5e08527a63e4526fbd8fa9ffce8cb9c2.scope - libcontainer container de6b2de3ba6050c5329ab677114d76ba5e08527a63e4526fbd8fa9ffce8cb9c2.
Jan 29 10:51:57.930947 kubelet[3369]: E0129 10:51:57.930845 3369 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-20-65)"
Jan 29 10:51:57.992745 containerd[1944]: time="2025-01-29T10:51:57.992091871Z" level=info msg="StartContainer for \"de6b2de3ba6050c5329ab677114d76ba5e08527a63e4526fbd8fa9ffce8cb9c2\" returns successfully"
Jan 29 10:52:01.220872 systemd[1]: cri-containerd-e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637.scope: Deactivated successfully.
Jan 29 10:52:01.221332 systemd[1]: cri-containerd-e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637.scope: Consumed 2.793s CPU time, 16.1M memory peak, 0B memory swap peak.
Jan 29 10:52:01.263053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637-rootfs.mount: Deactivated successfully.
Jan 29 10:52:01.281835 containerd[1944]: time="2025-01-29T10:52:01.281740087Z" level=info msg="shim disconnected" id=e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637 namespace=k8s.io
Jan 29 10:52:01.281835 containerd[1944]: time="2025-01-29T10:52:01.281819515Z" level=warning msg="cleaning up after shim disconnected" id=e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637 namespace=k8s.io
Jan 29 10:52:01.283051 containerd[1944]: time="2025-01-29T10:52:01.281841355Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:52:01.837911 kubelet[3369]: I0129 10:52:01.837558 3369 scope.go:117] "RemoveContainer" containerID="e6c0b851617c2fc5117557dc259bde20cda90b163054df56d6647d0dfc1fe637"
Jan 29 10:52:01.841330 containerd[1944]: time="2025-01-29T10:52:01.841002214Z" level=info msg="CreateContainer within sandbox \"1b7350aed180b73c3767642d29e142d470d01962b78be3eec84c096968349efa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 29 10:52:01.869346 containerd[1944]: time="2025-01-29T10:52:01.869240302Z" level=info msg="CreateContainer within sandbox \"1b7350aed180b73c3767642d29e142d470d01962b78be3eec84c096968349efa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7de05c18df2fa8e044597c6b9d3f7b7b0b4a423cac4fbc5ea361bb4a3f551c15\""
Jan 29 10:52:01.870141 containerd[1944]: time="2025-01-29T10:52:01.869997238Z" level=info msg="StartContainer for \"7de05c18df2fa8e044597c6b9d3f7b7b0b4a423cac4fbc5ea361bb4a3f551c15\""
Jan 29 10:52:01.925183 systemd[1]: Started cri-containerd-7de05c18df2fa8e044597c6b9d3f7b7b0b4a423cac4fbc5ea361bb4a3f551c15.scope - libcontainer container 7de05c18df2fa8e044597c6b9d3f7b7b0b4a423cac4fbc5ea361bb4a3f551c15.
Jan 29 10:52:01.987698 containerd[1944]: time="2025-01-29T10:52:01.987620999Z" level=info msg="StartContainer for \"7de05c18df2fa8e044597c6b9d3f7b7b0b4a423cac4fbc5ea361bb4a3f551c15\" returns successfully"
Jan 29 10:52:07.932324 kubelet[3369]: E0129 10:52:07.932123 3369 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-20-65)"