Jan 30 14:01:32.207054 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 30 14:01:32.207100 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:01:32.207127 kernel: KASLR disabled due to lack of seed
Jan 30 14:01:32.207144 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:01:32.207161 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 30 14:01:32.207178 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:01:32.207196 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 30 14:01:32.207212 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 30 14:01:32.207228 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 14:01:32.211333 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 30 14:01:32.211384 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 14:01:32.211401 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 30 14:01:32.211418 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 30 14:01:32.211434 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 30 14:01:32.211453 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 14:01:32.211474 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 30 14:01:32.211491 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 30 14:01:32.211508 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 30 14:01:32.211524 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 30 14:01:32.211541 kernel: printk: bootconsole [uart0] enabled
Jan 30 14:01:32.211557 kernel: NUMA: Failed to initialise from firmware
Jan 30 14:01:32.211575 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 14:01:32.211592 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 30 14:01:32.211608 kernel: Zone ranges:
Jan 30 14:01:32.211624 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 14:01:32.211641 kernel: DMA32 empty
Jan 30 14:01:32.211662 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 30 14:01:32.211678 kernel: Movable zone start for each node
Jan 30 14:01:32.211695 kernel: Early memory node ranges
Jan 30 14:01:32.211711 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 30 14:01:32.211727 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 30 14:01:32.211743 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 30 14:01:32.211760 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 30 14:01:32.211776 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 30 14:01:32.211793 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 30 14:01:32.211809 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 30 14:01:32.211826 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 30 14:01:32.211842 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 14:01:32.211862 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 30 14:01:32.211880 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:01:32.211903 kernel: psci: PSCIv1.0 detected in firmware.
Jan 30 14:01:32.211921 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:01:32.211938 kernel: psci: Trusted OS migration not required
Jan 30 14:01:32.211960 kernel: psci: SMC Calling Convention v1.1
Jan 30 14:01:32.211977 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:01:32.211995 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:01:32.212013 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:01:32.212031 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:01:32.212048 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:01:32.212065 kernel: CPU features: detected: Spectre-v2
Jan 30 14:01:32.212083 kernel: CPU features: detected: Spectre-v3a
Jan 30 14:01:32.212100 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:01:32.212117 kernel: CPU features: detected: ARM erratum 1742098
Jan 30 14:01:32.212135 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 30 14:01:32.212156 kernel: alternatives: applying boot alternatives
Jan 30 14:01:32.212176 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:01:32.212195 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:01:32.212213 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:01:32.212230 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:01:32.212290 kernel: Fallback order for Node 0: 0
Jan 30 14:01:32.212311 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 30 14:01:32.212329 kernel: Policy zone: Normal
Jan 30 14:01:32.212347 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:01:32.212364 kernel: software IO TLB: area num 2.
Jan 30 14:01:32.212381 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 30 14:01:32.212407 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 30 14:01:32.212425 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:01:32.212442 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:01:32.212460 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:01:32.212478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:01:32.212496 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:01:32.212514 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:01:32.212532 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:01:32.212549 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:01:32.212566 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:01:32.212583 kernel: GICv3: 96 SPIs implemented
Jan 30 14:01:32.212605 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:01:32.212623 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:01:32.212640 kernel: GICv3: GICv3 features: 16 PPIs
Jan 30 14:01:32.212658 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 30 14:01:32.212675 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 30 14:01:32.212692 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 14:01:32.212710 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 14:01:32.212727 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 30 14:01:32.212745 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 30 14:01:32.212762 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 30 14:01:32.212779 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:01:32.212796 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 30 14:01:32.212818 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 30 14:01:32.212836 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 30 14:01:32.212854 kernel: Console: colour dummy device 80x25
Jan 30 14:01:32.212871 kernel: printk: console [tty1] enabled
Jan 30 14:01:32.212889 kernel: ACPI: Core revision 20230628
Jan 30 14:01:32.212908 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 30 14:01:32.212925 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:01:32.212943 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:01:32.212961 kernel: landlock: Up and running.
Jan 30 14:01:32.212983 kernel: SELinux: Initializing.
Jan 30 14:01:32.213001 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:01:32.213019 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:01:32.213036 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:01:32.213054 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:01:32.213072 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:01:32.213090 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:01:32.213108 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 30 14:01:32.213125 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 30 14:01:32.213147 kernel: Remapping and enabling EFI services.
Jan 30 14:01:32.213165 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:01:32.213183 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:01:32.213201 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 30 14:01:32.213218 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 30 14:01:32.213236 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 30 14:01:32.217354 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:01:32.217378 kernel: SMP: Total of 2 processors activated.
Jan 30 14:01:32.217397 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:01:32.217428 kernel: CPU features: detected: 32-bit EL1 Support
Jan 30 14:01:32.217447 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:01:32.217467 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:01:32.217499 kernel: alternatives: applying system-wide alternatives
Jan 30 14:01:32.217525 kernel: devtmpfs: initialized
Jan 30 14:01:32.217545 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:01:32.217565 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:01:32.217586 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:01:32.217606 kernel: SMBIOS 3.0.0 present.
Jan 30 14:01:32.217626 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 30 14:01:32.217651 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:01:32.217670 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:01:32.217690 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:01:32.217709 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:01:32.217728 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:01:32.217747 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Jan 30 14:01:32.217765 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:01:32.217790 kernel: cpuidle: using governor menu
Jan 30 14:01:32.217810 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:01:32.217831 kernel: ASID allocator initialised with 65536 entries
Jan 30 14:01:32.217852 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:01:32.217874 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:01:32.217894 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 30 14:01:32.217914 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:01:32.217935 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:01:32.217956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:01:32.217983 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:01:32.218004 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:01:32.218023 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:01:32.218046 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:01:32.218067 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:01:32.218087 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:01:32.218107 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:01:32.218127 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:01:32.218146 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:01:32.218173 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:01:32.218193 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:01:32.218213 kernel: ACPI: Interpreter enabled
Jan 30 14:01:32.218232 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:01:32.218294 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 14:01:32.218318 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 30 14:01:32.218661 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:01:32.218899 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 14:01:32.219166 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 14:01:32.222521 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 30 14:01:32.222750 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 30 14:01:32.222777 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 30 14:01:32.222798 kernel: acpiphp: Slot [1] registered
Jan 30 14:01:32.222817 kernel: acpiphp: Slot [2] registered
Jan 30 14:01:32.222836 kernel: acpiphp: Slot [3] registered
Jan 30 14:01:32.222855 kernel: acpiphp: Slot [4] registered
Jan 30 14:01:32.222882 kernel: acpiphp: Slot [5] registered
Jan 30 14:01:32.222901 kernel: acpiphp: Slot [6] registered
Jan 30 14:01:32.222920 kernel: acpiphp: Slot [7] registered
Jan 30 14:01:32.222939 kernel: acpiphp: Slot [8] registered
Jan 30 14:01:32.222957 kernel: acpiphp: Slot [9] registered
Jan 30 14:01:32.222976 kernel: acpiphp: Slot [10] registered
Jan 30 14:01:32.222995 kernel: acpiphp: Slot [11] registered
Jan 30 14:01:32.223015 kernel: acpiphp: Slot [12] registered
Jan 30 14:01:32.223038 kernel: acpiphp: Slot [13] registered
Jan 30 14:01:32.223057 kernel: acpiphp: Slot [14] registered
Jan 30 14:01:32.223080 kernel: acpiphp: Slot [15] registered
Jan 30 14:01:32.223099 kernel: acpiphp: Slot [16] registered
Jan 30 14:01:32.223117 kernel: acpiphp: Slot [17] registered
Jan 30 14:01:32.223136 kernel: acpiphp: Slot [18] registered
Jan 30 14:01:32.223154 kernel: acpiphp: Slot [19] registered
Jan 30 14:01:32.223173 kernel: acpiphp: Slot [20] registered
Jan 30 14:01:32.223192 kernel: acpiphp: Slot [21] registered
Jan 30 14:01:32.223210 kernel: acpiphp: Slot [22] registered
Jan 30 14:01:32.223230 kernel: acpiphp: Slot [23] registered
Jan 30 14:01:32.223281 kernel: acpiphp: Slot [24] registered
Jan 30 14:01:32.223325 kernel: acpiphp: Slot [25] registered
Jan 30 14:01:32.223349 kernel: acpiphp: Slot [26] registered
Jan 30 14:01:32.223371 kernel: acpiphp: Slot [27] registered
Jan 30 14:01:32.223394 kernel: acpiphp: Slot [28] registered
Jan 30 14:01:32.223413 kernel: acpiphp: Slot [29] registered
Jan 30 14:01:32.223432 kernel: acpiphp: Slot [30] registered
Jan 30 14:01:32.223450 kernel: acpiphp: Slot [31] registered
Jan 30 14:01:32.223469 kernel: PCI host bridge to bus 0000:00
Jan 30 14:01:32.223722 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 30 14:01:32.223917 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 14:01:32.224108 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 30 14:01:32.226354 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 30 14:01:32.226616 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 30 14:01:32.226855 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 30 14:01:32.231515 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 30 14:01:32.231776 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 14:01:32.231991 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 30 14:01:32.232203 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:01:32.232502 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 14:01:32.232717 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 30 14:01:32.232923 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 30 14:01:32.233138 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 30 14:01:32.233399 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:01:32.233616 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 30 14:01:32.233828 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 30 14:01:32.234042 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 30 14:01:32.236315 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 30 14:01:32.236596 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 30 14:01:32.236799 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 30 14:01:32.236983 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 14:01:32.237167 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 30 14:01:32.237193 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 14:01:32.237213 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 14:01:32.237232 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 14:01:32.237271 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 14:01:32.237293 kernel: iommu: Default domain type: Translated
Jan 30 14:01:32.237313 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:01:32.237342 kernel: efivars: Registered efivars operations
Jan 30 14:01:32.237362 kernel: vgaarb: loaded
Jan 30 14:01:32.237383 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:01:32.237402 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:01:32.237422 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:01:32.237441 kernel: pnp: PnP ACPI init
Jan 30 14:01:32.237718 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 30 14:01:32.237752 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 14:01:32.237781 kernel: NET: Registered PF_INET protocol family
Jan 30 14:01:32.237801 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:01:32.237826 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:01:32.237846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:01:32.237866 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:01:32.237885 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:01:32.237904 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:01:32.237923 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:01:32.237943 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:01:32.237967 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:01:32.237986 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:01:32.238006 kernel: kvm [1]: HYP mode not available
Jan 30 14:01:32.238025 kernel: Initialise system trusted keyrings
Jan 30 14:01:32.238045 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:01:32.238064 kernel: Key type asymmetric registered
Jan 30 14:01:32.238083 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:01:32.238101 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:01:32.238120 kernel: io scheduler mq-deadline registered
Jan 30 14:01:32.238144 kernel: io scheduler kyber registered
Jan 30 14:01:32.238163 kernel: io scheduler bfq registered
Jan 30 14:01:32.240579 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 30 14:01:32.240626 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 14:01:32.240646 kernel: ACPI: button: Power Button [PWRB]
Jan 30 14:01:32.240666 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 30 14:01:32.240684 kernel: ACPI: button: Sleep Button [SLPB]
Jan 30 14:01:32.240703 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:01:32.240732 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 14:01:32.240951 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 30 14:01:32.240978 kernel: printk: console [ttyS0] disabled
Jan 30 14:01:32.240997 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 30 14:01:32.244880 kernel: printk: console [ttyS0] enabled
Jan 30 14:01:32.244923 kernel: printk: bootconsole [uart0] disabled
Jan 30 14:01:32.244944 kernel: thunder_xcv, ver 1.0
Jan 30 14:01:32.244963 kernel: thunder_bgx, ver 1.0
Jan 30 14:01:32.244982 kernel: nicpf, ver 1.0
Jan 30 14:01:32.245013 kernel: nicvf, ver 1.0
Jan 30 14:01:32.245309 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 14:01:32.245523 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:01:31 UTC (1738245691)
Jan 30 14:01:32.245551 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:01:32.245571 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 30 14:01:32.245591 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 14:01:32.245611 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 14:01:32.245630 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:01:32.245671 kernel: Segment Routing with IPv6
Jan 30 14:01:32.245718 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:01:32.245778 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:01:32.245818 kernel: Key type dns_resolver registered
Jan 30 14:01:32.245841 kernel: registered taskstats version 1
Jan 30 14:01:32.245866 kernel: Loading compiled-in X.509 certificates
Jan 30 14:01:32.245886 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 14:01:32.245906 kernel: Key type .fscrypt registered
Jan 30 14:01:32.245925 kernel: Key type fscrypt-provisioning registered
Jan 30 14:01:32.245949 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:01:32.245969 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:01:32.245987 kernel: ima: No architecture policies found
Jan 30 14:01:32.246005 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 14:01:32.246024 kernel: clk: Disabling unused clocks
Jan 30 14:01:32.246042 kernel: Freeing unused kernel memory: 39360K
Jan 30 14:01:32.246061 kernel: Run /init as init process
Jan 30 14:01:32.246079 kernel: with arguments:
Jan 30 14:01:32.246098 kernel: /init
Jan 30 14:01:32.246116 kernel: with environment:
Jan 30 14:01:32.246139 kernel: HOME=/
Jan 30 14:01:32.246158 kernel: TERM=linux
Jan 30 14:01:32.246177 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:01:32.246200 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:01:32.246224 systemd[1]: Detected virtualization amazon.
Jan 30 14:01:32.246283 systemd[1]: Detected architecture arm64.
Jan 30 14:01:32.246308 systemd[1]: Running in initrd.
Jan 30 14:01:32.246334 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:01:32.246354 systemd[1]: Hostname set to .
Jan 30 14:01:32.246374 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:01:32.246394 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:01:32.246414 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:01:32.246434 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:01:32.246456 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:01:32.246477 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:01:32.246502 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:01:32.246523 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:01:32.246547 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:01:32.246568 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:01:32.246588 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:01:32.246608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:01:32.246629 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:01:32.246654 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:01:32.246674 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:01:32.246694 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:01:32.246714 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:01:32.246735 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:01:32.246755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:01:32.246775 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:01:32.246796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:01:32.246816 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:01:32.246841 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:01:32.246861 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:01:32.246881 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:01:32.246902 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:01:32.246922 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:01:32.246942 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:01:32.246962 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:01:32.246982 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:01:32.247007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:01:32.247028 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:01:32.247048 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:01:32.247068 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:01:32.247133 systemd-journald[251]: Collecting audit messages is disabled.
Jan 30 14:01:32.247183 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:01:32.247205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:01:32.247226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:01:32.250087 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:01:32.250120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:32.250143 systemd-journald[251]: Journal started
Jan 30 14:01:32.250184 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2c66b726c7285f1e0f50265a707cc8) is 8.0M, max 75.3M, 67.3M free.
Jan 30 14:01:32.200098 systemd-modules-load[252]: Inserted module 'overlay'
Jan 30 14:01:32.254300 kernel: Bridge firewalling registered
Jan 30 14:01:32.253422 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 30 14:01:32.261731 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:01:32.269799 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:01:32.285537 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:01:32.298583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:01:32.306553 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:01:32.312343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:01:32.347909 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:01:32.355188 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:01:32.360475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:01:32.376608 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:01:32.389577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:01:32.423465 dracut-cmdline[285]: dracut-dracut-053
Jan 30 14:01:32.429652 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:01:32.470063 systemd-resolved[287]: Positive Trust Anchors:
Jan 30 14:01:32.470103 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:01:32.470166 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:01:32.570283 kernel: SCSI subsystem initialized
Jan 30 14:01:32.578281 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:01:32.590409 kernel: iscsi: registered transport (tcp)
Jan 30 14:01:32.613296 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:01:32.613366 kernel: QLogic iSCSI HBA Driver
Jan 30 14:01:32.701289 kernel: random: crng init done
Jan 30 14:01:32.701508 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jan 30 14:01:32.704887 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:01:32.709289 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:01:32.731306 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:01:32.740551 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:01:32.782432 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:01:32.784048 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:01:32.784089 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:01:32.850322 kernel: raid6: neonx8 gen() 6752 MB/s
Jan 30 14:01:32.867280 kernel: raid6: neonx4 gen() 6562 MB/s
Jan 30 14:01:32.884287 kernel: raid6: neonx2 gen() 5456 MB/s
Jan 30 14:01:32.901293 kernel: raid6: neonx1 gen() 3956 MB/s
Jan 30 14:01:32.918282 kernel: raid6: int64x8 gen() 3823 MB/s
Jan 30 14:01:32.935287 kernel: raid6: int64x4 gen() 3730 MB/s
Jan 30 14:01:32.952273 kernel: raid6: int64x2 gen() 3621 MB/s
Jan 30 14:01:32.970039 kernel: raid6: int64x1 gen() 2765 MB/s
Jan 30 14:01:32.970103 kernel: raid6: using algorithm neonx8 gen() 6752 MB/s
Jan 30 14:01:32.988005 kernel: raid6: .... xor() 4796 MB/s, rmw enabled
Jan 30 14:01:32.988074 kernel: raid6: using neon recovery algorithm
Jan 30 14:01:32.996439 kernel: xor: measuring software checksum speed
Jan 30 14:01:32.996508 kernel: 8regs : 10991 MB/sec
Jan 30 14:01:32.997510 kernel: 32regs : 11947 MB/sec
Jan 30 14:01:32.998681 kernel: arm64_neon : 9275 MB/sec
Jan 30 14:01:32.998713 kernel: xor: using function: 32regs (11947 MB/sec)
Jan 30 14:01:33.083302 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:01:33.102190 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:01:33.112740 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:01:33.155016 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Jan 30 14:01:33.164704 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:01:33.180491 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:01:33.214180 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Jan 30 14:01:33.272467 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:01:33.282574 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:01:33.404565 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:01:33.415984 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:01:33.466693 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:01:33.471128 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:01:33.475735 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:01:33.477869 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:01:33.496751 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:01:33.522211 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:01:33.617201 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 14:01:33.617298 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 30 14:01:33.637212 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 30 14:01:33.637264 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 30 14:01:33.639494 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 30 14:01:33.639787 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 30 14:01:33.650800 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:eb:0b:87:67:39
Jan 30 14:01:33.627830 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:01:33.628092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:01:33.631793 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:01:33.659318 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 30 14:01:33.637086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:01:33.637395 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:33.639803 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:01:33.660718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:01:33.676038 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:01:33.676115 kernel: GPT:9289727 != 16777215
Jan 30 14:01:33.676143 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:01:33.676168 kernel: GPT:9289727 != 16777215
Jan 30 14:01:33.677335 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:01:33.678584 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:01:33.682133 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 14:01:33.701316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:01:33.713599 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:01:33.759128 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:01:33.775469 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (536)
Jan 30 14:01:33.801337 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (529)
Jan 30 14:01:33.903089 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 30 14:01:33.933891 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 14:01:33.950346 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 30 14:01:33.952776 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 30 14:01:33.971332 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 30 14:01:33.994599 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:01:34.012278 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:01:34.012884 disk-uuid[661]: Primary Header is updated.
Jan 30 14:01:34.012884 disk-uuid[661]: Secondary Entries is updated.
Jan 30 14:01:34.012884 disk-uuid[661]: Secondary Header is updated.
Jan 30 14:01:35.035391 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:01:35.037344 disk-uuid[662]: The operation has completed successfully.
Jan 30 14:01:35.205436 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:01:35.205643 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:01:35.268551 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:01:35.284345 sh[1008]: Success
Jan 30 14:01:35.309280 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 14:01:35.421848 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:01:35.430871 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:01:35.439498 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:01:35.476290 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 14:01:35.476351 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:01:35.476379 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:01:35.477926 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:01:35.479149 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:01:35.620284 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 14:01:35.644885 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:01:35.648739 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:01:35.658532 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:01:35.674606 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:01:35.709969 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:01:35.710048 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:01:35.711338 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:01:35.720320 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:01:35.737108 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 14:01:35.739471 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:01:35.751726 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:01:35.766671 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:01:35.851087 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:01:35.884558 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:01:35.933142 systemd-networkd[1202]: lo: Link UP
Jan 30 14:01:35.933164 systemd-networkd[1202]: lo: Gained carrier
Jan 30 14:01:35.937447 systemd-networkd[1202]: Enumeration completed
Jan 30 14:01:35.938592 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:01:35.939114 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:01:35.939121 systemd-networkd[1202]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:01:35.943618 systemd[1]: Reached target network.target - Network.
Jan 30 14:01:35.947696 systemd-networkd[1202]: eth0: Link UP
Jan 30 14:01:35.947705 systemd-networkd[1202]: eth0: Gained carrier
Jan 30 14:01:35.947726 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:01:35.991349 systemd-networkd[1202]: eth0: DHCPv4 address 172.31.23.215/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 14:01:36.189675 ignition[1129]: Ignition 2.19.0
Jan 30 14:01:36.189704 ignition[1129]: Stage: fetch-offline
Jan 30 14:01:36.191195 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:36.191225 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:01:36.194516 ignition[1129]: Ignition finished successfully
Jan 30 14:01:36.200332 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:01:36.218697 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 14:01:36.241758 ignition[1212]: Ignition 2.19.0
Jan 30 14:01:36.241779 ignition[1212]: Stage: fetch
Jan 30 14:01:36.242892 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:36.242918 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:01:36.243076 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:01:36.254091 ignition[1212]: PUT result: OK
Jan 30 14:01:36.256931 ignition[1212]: parsed url from cmdline: ""
Jan 30 14:01:36.257057 ignition[1212]: no config URL provided
Jan 30 14:01:36.257077 ignition[1212]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:01:36.257103 ignition[1212]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:01:36.257134 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:01:36.258945 ignition[1212]: PUT result: OK
Jan 30 14:01:36.259025 ignition[1212]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 30 14:01:36.269146 ignition[1212]: GET result: OK
Jan 30 14:01:36.270443 ignition[1212]: parsing config with SHA512: 95c2042c10df8e15cd120b4faf331be18d330edf274418e01e6f63d724de060ed6eef135fcd8f83107b405dabbe1e61fc166712acf581356c16836a85ff384b9
Jan 30 14:01:36.280858 unknown[1212]: fetched base config from "system"
Jan 30 14:01:36.280899 unknown[1212]: fetched base config from "system"
Jan 30 14:01:36.280915 unknown[1212]: fetched user config from "aws"
Jan 30 14:01:36.286751 ignition[1212]: fetch: fetch complete
Jan 30 14:01:36.286768 ignition[1212]: fetch: fetch passed
Jan 30 14:01:36.286868 ignition[1212]: Ignition finished successfully
Jan 30 14:01:36.292810 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:01:36.316699 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:01:36.341625 ignition[1220]: Ignition 2.19.0
Jan 30 14:01:36.342116 ignition[1220]: Stage: kargs
Jan 30 14:01:36.342760 ignition[1220]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:36.342814 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:01:36.342964 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:01:36.346729 ignition[1220]: PUT result: OK
Jan 30 14:01:36.354730 ignition[1220]: kargs: kargs passed
Jan 30 14:01:36.354826 ignition[1220]: Ignition finished successfully
Jan 30 14:01:36.360136 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:01:36.376933 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:01:36.399543 ignition[1226]: Ignition 2.19.0
Jan 30 14:01:36.399571 ignition[1226]: Stage: disks
Jan 30 14:01:36.401130 ignition[1226]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:36.401159 ignition[1226]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:01:36.402165 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:01:36.406542 ignition[1226]: PUT result: OK
Jan 30 14:01:36.412936 ignition[1226]: disks: disks passed
Jan 30 14:01:36.413118 ignition[1226]: Ignition finished successfully
Jan 30 14:01:36.418183 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:01:36.422039 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:01:36.423393 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:01:36.424092 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:01:36.424679 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:01:36.424983 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:01:36.444634 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:01:36.488903 systemd-fsck[1234]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 14:01:36.496261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:01:36.515617 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:01:36.595159 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 14:01:36.595747 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:01:36.599743 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:01:36.609430 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:01:36.614322 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:01:36.616926 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 14:01:36.617007 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:01:36.617056 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:01:36.648282 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1253)
Jan 30 14:01:36.649102 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:01:36.654036 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:01:36.654074 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:01:36.654112 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:01:36.663677 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:01:36.669356 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:01:36.672694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:01:37.043443 initrd-setup-root[1277]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:01:37.075651 initrd-setup-root[1284]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:01:37.084605 initrd-setup-root[1291]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:01:37.092871 initrd-setup-root[1298]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:01:37.475556 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:01:37.485462 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:01:37.495638 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:01:37.515379 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:01:37.518287 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:01:37.549966 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:01:37.565714 ignition[1366]: INFO : Ignition 2.19.0
Jan 30 14:01:37.567580 ignition[1366]: INFO : Stage: mount
Jan 30 14:01:37.569629 ignition[1366]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:01:37.569629 ignition[1366]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:01:37.573644 ignition[1366]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:01:37.576796 ignition[1366]: INFO : PUT result: OK
Jan 30 14:01:37.581605 ignition[1366]: INFO : mount: mount passed
Jan 30 14:01:37.583702 ignition[1366]: INFO : Ignition finished successfully
Jan 30 14:01:37.587339 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:01:37.595448 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:01:37.611506 systemd-networkd[1202]: eth0: Gained IPv6LL
Jan 30 14:01:37.618633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:01:37.653296 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1377)
Jan 30 14:01:37.656752 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:01:37.656800 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:01:37.656826 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:01:37.665287 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:01:37.666579 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:01:37.701375 ignition[1394]: INFO : Ignition 2.19.0 Jan 30 14:01:37.701375 ignition[1394]: INFO : Stage: files Jan 30 14:01:37.704750 ignition[1394]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:01:37.704750 ignition[1394]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 14:01:37.704750 ignition[1394]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 14:01:37.711889 ignition[1394]: INFO : PUT result: OK Jan 30 14:01:37.719034 ignition[1394]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:01:37.736950 ignition[1394]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:01:37.739981 ignition[1394]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:01:37.800503 ignition[1394]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:01:37.803419 ignition[1394]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:01:37.806137 unknown[1394]: wrote ssh authorized keys file for user: core Jan 30 14:01:37.808615 ignition[1394]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:01:37.811593 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 14:01:37.814787 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 14:01:37.814787 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:01:37.814787 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 14:01:37.908305 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 14:01:38.042962 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:01:38.042962 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 14:01:38.049592 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 30 14:01:38.517440 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 30 14:01:38.652076 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 14:01:38.652076 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:01:38.652076 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:01:38.652076 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 
14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:01:38.668960 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 30 14:01:38.990391 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 30 14:01:39.326649 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:01:39.326649 ignition[1394]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 30 14:01:39.333746 ignition[1394]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 14:01:39.333746 ignition[1394]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 14:01:39.333746 ignition[1394]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 30 14:01:39.333746 ignition[1394]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 30 14:01:39.333746 ignition[1394]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:01:39.333746 ignition[1394]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:01:39.333746 ignition[1394]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 30 14:01:39.333746 ignition[1394]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:01:39.333746 ignition[1394]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:01:39.361497 ignition[1394]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:01:39.361497 ignition[1394]: INFO : files: createResultFile: 
createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:01:39.361497 ignition[1394]: INFO : files: files passed Jan 30 14:01:39.361497 ignition[1394]: INFO : Ignition finished successfully Jan 30 14:01:39.371526 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:01:39.386512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:01:39.393876 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:01:39.410718 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:01:39.413008 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:01:39.431770 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:01:39.431770 initrd-setup-root-after-ignition[1423]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:01:39.438324 initrd-setup-root-after-ignition[1427]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:01:39.442119 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:01:39.447562 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:01:39.456634 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:01:39.512018 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:01:39.513959 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:01:39.517318 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:01:39.521511 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:01:39.523465 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:01:39.541550 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:01:39.566896 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:01:39.578531 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:01:39.607578 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:01:39.612429 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:01:39.615235 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:01:39.619998 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:01:39.620260 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:01:39.623048 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:01:39.631051 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:01:39.633354 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:01:39.638616 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:01:39.640931 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:01:39.643307 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:01:39.650829 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:01:39.653415 systemd[1]: Stopped target sysinit.target - System Initialization. 
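[Editorial note, not part of the log: the Ignition "files" stage above opens with "PUT http://169.254.169.254/latest/api/token: attempt #1" before fetching any platform data. That is the IMDSv2 session handshake: a PUT to mint a short-lived token, then GETs that present it. A minimal Python sketch of the same exchange, assuming the standard IMDSv2 token headers and mirroring the 2021-01-03 metadata paths that appear later in this log:]

```python
import urllib.request

IMDS = "http://169.254.169.254"

# IMDSv2 step 1: PUT a session token request -- the
# "PUT /latest/api/token: attempt #1" the Ignition log records.
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(req, timeout=2).read().decode()

# IMDSv2 step 2: GET metadata with the token attached, matching the
# later "Fetching .../2021-01-03/meta-data/instance-id" entries.
meta = urllib.request.Request(
    f"{IMDS}/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(meta, timeout=2).read().decode())
```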
Jan 30 14:01:39.659269 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:01:39.661541 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:01:39.666090 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:01:39.666354 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:01:39.668879 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:01:39.676534 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:01:39.679229 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:01:39.683174 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:01:39.685632 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:01:39.685894 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:01:39.693817 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:01:39.694059 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:01:39.697906 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:01:39.698124 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:01:39.713709 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:01:39.715993 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:01:39.716306 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:01:39.730167 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:01:39.736173 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:01:39.738623 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:01:39.747060 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:01:39.747501 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:01:39.758899 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:01:39.760148 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:01:39.785305 ignition[1447]: INFO : Ignition 2.19.0 Jan 30 14:01:39.785305 ignition[1447]: INFO : Stage: umount Jan 30 14:01:39.785305 ignition[1447]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:01:39.785305 ignition[1447]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 14:01:39.785305 ignition[1447]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 14:01:39.796931 ignition[1447]: INFO : PUT result: OK Jan 30 14:01:39.796188 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:01:39.804672 ignition[1447]: INFO : umount: umount passed Jan 30 14:01:39.807435 ignition[1447]: INFO : Ignition finished successfully Jan 30 14:01:39.812338 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:01:39.812748 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:01:39.818455 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:01:39.818642 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:01:39.821181 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:01:39.821366 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:01:39.824216 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 30 14:01:39.824342 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:01:39.835546 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:01:39.835650 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:01:39.849223 systemd[1]: Stopped target network.target - Network. Jan 30 14:01:39.849675 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:01:39.850875 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:01:39.859975 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:01:39.861616 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:01:39.865228 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:01:39.867568 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:01:39.869176 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:01:39.871410 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:01:39.871494 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:01:39.881910 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:01:39.881988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:01:39.883849 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:01:39.883939 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:01:39.885796 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:01:39.885875 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:01:39.887813 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:01:39.887890 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:01:39.890090 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:01:39.892026 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:01:39.918634 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:01:39.918898 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:01:39.922673 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:01:39.922795 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:01:39.936329 systemd-networkd[1202]: eth0: DHCPv6 lease lost Jan 30 14:01:39.940237 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:01:39.940701 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:01:39.948339 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:01:39.948424 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:01:39.960442 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:01:39.963909 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:01:39.964028 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:01:39.966982 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:01:39.967065 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:01:39.969691 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:01:39.969769 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 30 14:01:39.972171 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:01:40.002099 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:01:40.003789 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:01:40.010782 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:01:40.010890 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:01:40.016216 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:01:40.016312 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:01:40.018237 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:01:40.018423 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:01:40.027513 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:01:40.027612 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:01:40.030016 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:01:40.030120 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:01:40.056671 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:01:40.060545 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:01:40.060688 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:01:40.064855 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:01:40.064955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:01:40.078670 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:01:40.082421 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:01:40.094681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:01:40.095062 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:01:40.103160 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:01:40.111624 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:01:40.149027 systemd[1]: Switching root. Jan 30 14:01:40.185037 systemd-journald[251]: Journal stopped Jan 30 14:01:42.776881 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 30 14:01:42.777002 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:01:42.777044 kernel: SELinux: policy capability open_perms=1 Jan 30 14:01:42.777085 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:01:42.777116 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:01:42.777149 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:01:42.777187 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:01:42.777223 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:01:42.779526 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:01:42.779577 kernel: audit: type=1403 audit(1738245701.050:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:01:42.779624 systemd[1]: Successfully loaded SELinux policy in 63.521ms. Jan 30 14:01:42.779670 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.836ms. 
Jan 30 14:01:42.779705 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:01:42.779738 systemd[1]: Detected virtualization amazon. Jan 30 14:01:42.779771 systemd[1]: Detected architecture arm64. Jan 30 14:01:42.779807 systemd[1]: Detected first boot. Jan 30 14:01:42.779841 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:01:42.779874 zram_generator::config[1507]: No configuration found. Jan 30 14:01:42.779911 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:01:42.779943 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:01:42.779975 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 30 14:01:42.780009 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:01:42.780042 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 14:01:42.780074 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:01:42.780109 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:01:42.780141 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:01:42.780173 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:01:42.780205 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:01:42.780236 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 14:01:42.780295 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:01:42.780326 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:01:42.780358 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:01:42.780395 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:01:42.780428 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:01:42.780460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:01:42.780492 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 14:01:42.780524 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:01:42.780555 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:01:42.780584 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:01:42.780618 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:01:42.780648 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:01:42.780683 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:01:42.780713 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 14:01:42.780744 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 14:01:42.780776 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 30 14:01:42.780805 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:01:42.780835 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:01:42.780864 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:01:42.780894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:01:42.780923 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 14:01:42.780959 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 14:01:42.780989 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 14:01:42.781021 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 14:01:42.781050 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 14:01:42.781080 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 14:01:42.781111 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 14:01:42.781140 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 14:01:42.781171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:01:42.781205 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:01:42.781236 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 14:01:42.783379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:01:42.783419 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:01:42.783453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:01:42.783483 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 14:01:42.786382 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:01:42.786425 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 14:01:42.786456 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 14:01:42.786498 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 14:01:42.786531 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:01:42.786559 kernel: fuse: init (API version 7.39) Jan 30 14:01:42.786590 kernel: loop: module loaded Jan 30 14:01:42.786619 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:01:42.786650 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:01:42.786680 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 14:01:42.786712 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:01:42.786744 kernel: ACPI: bus type drm_connector registered Jan 30 14:01:42.786781 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:01:42.786810 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:01:42.786839 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:01:42.786869 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 30 14:01:42.786898 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:01:42.786930 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:01:42.786959 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:01:42.786989 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:01:42.787018 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:01:42.787053 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:01:42.787082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:01:42.787112 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:01:42.787193 systemd-journald[1611]: Collecting audit messages is disabled. Jan 30 14:01:42.787313 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:01:42.787347 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:01:42.787377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:01:42.787406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:01:42.787438 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:01:42.787467 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:01:42.787499 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:01:42.787528 systemd-journald[1611]: Journal started Jan 30 14:01:42.787581 systemd-journald[1611]: Runtime Journal (/run/log/journal/ec2c66b726c7285f1e0f50265a707cc8) is 8.0M, max 75.3M, 67.3M free. Jan 30 14:01:42.795317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:01:42.801380 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:01:42.804894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:01:42.807677 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:01:42.810908 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:01:42.838666 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:01:42.848485 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:01:42.860450 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:01:42.862574 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:01:42.882296 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:01:42.897704 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 14:01:42.900512 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:01:42.907554 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 14:01:42.913522 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:01:42.918715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:01:42.926019 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 30 14:01:42.934164 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:01:42.938722 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:01:42.971144 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:01:42.981521 systemd-journald[1611]: Time spent on flushing to /var/log/journal/ec2c66b726c7285f1e0f50265a707cc8 is 55.217ms for 897 entries. Jan 30 14:01:42.981521 systemd-journald[1611]: System Journal (/var/log/journal/ec2c66b726c7285f1e0f50265a707cc8) is 8.0M, max 195.6M, 187.6M free. Jan 30 14:01:43.049733 systemd-journald[1611]: Received client request to flush runtime journal. Jan 30 14:01:42.974638 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:01:43.038991 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:01:43.054514 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:01:43.069572 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:01:43.077444 systemd-tmpfiles[1659]: ACLs are not supported, ignoring. Jan 30 14:01:43.077477 systemd-tmpfiles[1659]: ACLs are not supported, ignoring. Jan 30 14:01:43.088743 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:01:43.093706 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:01:43.118522 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:01:43.125068 udevadm[1670]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 14:01:43.185294 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:01:43.196543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:01:43.251221 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Jan 30 14:01:43.251296 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Jan 30 14:01:43.262067 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:01:44.013399 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:01:44.022607 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:01:44.076924 systemd-udevd[1687]: Using default interface naming scheme 'v255'. Jan 30 14:01:44.166675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:01:44.185601 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:01:44.216823 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:01:44.320844 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 14:01:44.341578 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:01:44.364722 (udev-worker)[1695]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:01:44.505452 systemd-networkd[1692]: lo: Link UP Jan 30 14:01:44.505938 systemd-networkd[1692]: lo: Gained carrier Jan 30 14:01:44.509861 systemd-networkd[1692]: Enumeration completed Jan 30 14:01:44.510443 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 30 14:01:44.511827 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:01:44.511931 systemd-networkd[1692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:01:44.514743 systemd-networkd[1692]: eth0: Link UP Jan 30 14:01:44.516183 systemd-networkd[1692]: eth0: Gained carrier Jan 30 14:01:44.516223 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:01:44.522707 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:01:44.534370 systemd-networkd[1692]: eth0: DHCPv4 address 172.31.23.215/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 14:01:44.601318 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1709) Jan 30 14:01:44.613961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:01:44.766961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:01:44.810920 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:01:44.827499 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 14:01:44.873562 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:01:44.905726 lvm[1816]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:01:44.943917 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:01:44.947765 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:01:44.956565 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:01:44.973773 lvm[1819]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:01:45.012849 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:01:45.015803 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:01:45.018749 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 14:01:45.018959 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:01:45.021191 systemd[1]: Reached target machines.target - Containers. Jan 30 14:01:45.025605 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:01:45.035557 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 14:01:45.047409 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:01:45.049558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:01:45.051715 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:01:45.065531 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:01:45.078384 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
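[Editorial note, not part of the log: the lease line above — "DHCPv4 address 172.31.23.215/20, gateway 172.31.16.1 acquired from 172.31.16.1" — is internally consistent: the /20 prefix places both the address and the gateway in 172.31.16.0/20, and the DHCP server is the gateway itself, as is typical for an AWS VPC subnet router at .1. A quick illustrative check with Python's ipaddress module:]

```python
import ipaddress

iface = ipaddress.ip_interface("172.31.23.215/20")
gw = ipaddress.ip_address("172.31.16.1")

print(iface.network)        # 172.31.16.0/20
print(gw in iface.network)  # True: the gateway sits inside the /20
```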
Jan 30 14:01:45.085474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:01:45.114275 kernel: loop0: detected capacity change from 0 to 52536 Jan 30 14:01:45.128105 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:01:45.130016 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:01:45.144892 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:01:45.151856 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:01:45.192300 kernel: loop1: detected capacity change from 0 to 114432 Jan 30 14:01:45.304284 kernel: loop2: detected capacity change from 0 to 194096 Jan 30 14:01:45.435293 kernel: loop3: detected capacity change from 0 to 114328 Jan 30 14:01:45.530289 kernel: loop4: detected capacity change from 0 to 52536 Jan 30 14:01:45.549318 kernel: loop5: detected capacity change from 0 to 114432 Jan 30 14:01:45.561297 kernel: loop6: detected capacity change from 0 to 194096 Jan 30 14:01:45.588286 kernel: loop7: detected capacity change from 0 to 114328 Jan 30 14:01:45.596842 (sd-merge)[1840]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 30 14:01:45.597894 (sd-merge)[1840]: Merged extensions into '/usr'. Jan 30 14:01:45.606807 systemd[1]: Reloading requested from client PID 1827 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:01:45.606838 systemd[1]: Reloading... Jan 30 14:01:45.719178 zram_generator::config[1868]: No configuration found. Jan 30 14:01:46.013661 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:01:46.161935 systemd[1]: Reloading finished in 554 ms. Jan 30 14:01:46.187697 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:01:46.190342 systemd-networkd[1692]: eth0: Gained IPv6LL Jan 30 14:01:46.209587 systemd[1]: Starting ensure-sysext.service... Jan 30 14:01:46.220747 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:01:46.226276 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:01:46.240394 systemd[1]: Reloading requested from client PID 1926 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:01:46.240443 systemd[1]: Reloading... Jan 30 14:01:46.284549 systemd-tmpfiles[1927]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:01:46.285205 systemd-tmpfiles[1927]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:01:46.287829 systemd-tmpfiles[1927]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:01:46.288636 systemd-tmpfiles[1927]: ACLs are not supported, ignoring. Jan 30 14:01:46.288779 systemd-tmpfiles[1927]: ACLs are not supported, ignoring. Jan 30 14:01:46.298024 systemd-tmpfiles[1927]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:01:46.298272 systemd-tmpfiles[1927]: Skipping /boot Jan 30 14:01:46.321230 systemd-tmpfiles[1927]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 14:01:46.321477 systemd-tmpfiles[1927]: Skipping /boot Jan 30 14:01:46.422313 zram_generator::config[1959]: No configuration found. Jan 30 14:01:46.593194 ldconfig[1823]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:01:46.669561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:01:46.816267 systemd[1]: Reloading finished in 575 ms. Jan 30 14:01:46.842137 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:01:46.852499 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:01:46.873678 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:01:46.879775 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:01:46.890506 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:01:46.900584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:01:46.909688 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:01:46.932981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:01:46.942427 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:01:46.950179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:01:46.970572 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:01:46.972681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:01:46.991934 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:01:46.992528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:01:47.014511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:01:47.020529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:01:47.022659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:01:47.022845 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:01:47.036783 systemd[1]: Finished ensure-sysext.service. Jan 30 14:01:47.040195 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:01:47.045654 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:01:47.046034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:01:47.051612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:01:47.052041 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:01:47.056454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:01:47.056804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 14:01:47.073324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:01:47.073514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:01:47.078226 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:01:47.101620 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:01:47.104592 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:01:47.107652 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:01:47.132275 augenrules[2059]: No rules Jan 30 14:01:47.132958 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:01:47.157179 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:01:47.211409 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:01:47.214300 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:01:47.225174 systemd-resolved[2022]: Positive Trust Anchors: Jan 30 14:01:47.225214 systemd-resolved[2022]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:01:47.225299 systemd-resolved[2022]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:01:47.233443 systemd-resolved[2022]: Defaulting to hostname 'linux'. Jan 30 14:01:47.236778 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:01:47.239067 systemd[1]: Reached target network.target - Network. Jan 30 14:01:47.240951 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:01:47.243008 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:01:47.245193 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:01:47.247565 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:01:47.253558 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:01:47.256324 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:01:47.258660 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:01:47.260903 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:01:47.263099 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:01:47.263162 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:01:47.264822 systemd[1]: Reached target timers.target - Timer Units. 
Jan 30 14:01:47.267884 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:01:47.273872 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:01:47.278530 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:01:47.282794 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:01:47.287485 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:01:47.291515 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:01:47.294678 systemd[1]: System is tainted: cgroupsv1 Jan 30 14:01:47.294890 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:01:47.295055 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:01:47.304905 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:01:47.314548 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:01:47.330566 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:01:47.337873 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:01:47.343655 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:01:47.346435 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:01:47.358636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:47.372441 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:01:47.383393 jq[2076]: false Jan 30 14:01:47.393383 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 14:01:47.424543 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:01:47.443521 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:01:47.460070 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 14:01:47.488392 extend-filesystems[2077]: Found loop4 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found loop5 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found loop6 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found loop7 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found nvme0n1 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found nvme0n1p1 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found nvme0n1p2 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found nvme0n1p3 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found usr Jan 30 14:01:47.505839 extend-filesystems[2077]: Found nvme0n1p4 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found nvme0n1p6 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found nvme0n1p7 Jan 30 14:01:47.505839 extend-filesystems[2077]: Found nvme0n1p9 Jan 30 14:01:47.505839 extend-filesystems[2077]: Checking size of /dev/nvme0n1p9 Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: ---------------------------------------------------- Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: corporation. Support and training for ntp-4 are Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: available at https://www.nwtime.org/support Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: ---------------------------------------------------- Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: proto: precision = 0.108 usec (-23) Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: basedate set to 2025-01-17 Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: gps base set to 2025-01-19 (week 2350) Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: Listen normally on 3 eth0 172.31.23.215:123 Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: Listen normally on 4 lo [::1]:123 Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: Listen normally on 5 eth0 [fe80::4eb:bff:fe87:6739%2]:123 Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: Listening on routing socket on fd #22 for interface updates Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:01:47.554339 ntpd[2081]: 30 Jan 14:01:47 ntpd[2081]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:01:47.491965 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:01:47.502076 dbus-daemon[2075]: [system] SELinux support is enabled Jan 30 14:01:47.538558 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:01:47.504454 ntpd[2081]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 30 14:01:47.574587 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:01:47.504500 ntpd[2081]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:01:47.578407 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:01:47.504521 ntpd[2081]: ---------------------------------------------------- Jan 30 14:01:47.504545 ntpd[2081]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:01:47.504565 ntpd[2081]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:01:47.504584 ntpd[2081]: corporation. 
Support and training for ntp-4 are Jan 30 14:01:47.504604 ntpd[2081]: available at https://www.nwtime.org/support Jan 30 14:01:47.504623 ntpd[2081]: ---------------------------------------------------- Jan 30 14:01:47.511732 ntpd[2081]: proto: precision = 0.108 usec (-23) Jan 30 14:01:47.512140 ntpd[2081]: basedate set to 2025-01-17 Jan 30 14:01:47.512165 ntpd[2081]: gps base set to 2025-01-19 (week 2350) Jan 30 14:01:47.518562 ntpd[2081]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:01:47.518640 ntpd[2081]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:01:47.518891 ntpd[2081]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:01:47.518955 ntpd[2081]: Listen normally on 3 eth0 172.31.23.215:123 Jan 30 14:01:47.519031 ntpd[2081]: Listen normally on 4 lo [::1]:123 Jan 30 14:01:47.519105 ntpd[2081]: Listen normally on 5 eth0 [fe80::4eb:bff:fe87:6739%2]:123 Jan 30 14:01:47.519165 ntpd[2081]: Listening on routing socket on fd #22 for interface updates Jan 30 14:01:47.529229 ntpd[2081]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:01:47.529304 ntpd[2081]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:01:47.604778 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:01:47.546817 dbus-daemon[2075]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1692 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 14:01:47.630522 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:01:47.635746 extend-filesystems[2077]: Resized partition /dev/nvme0n1p9 Jan 30 14:01:47.635590 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetch successful Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetch successful Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetch successful Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetch successful Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetch failed with 404: resource not found Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetch successful Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetch successful Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetch successful Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetch successful Jan 30 14:01:47.652727 coreos-metadata[2074]: Jan 30 14:01:47.652 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 30 14:01:47.691698 coreos-metadata[2074]: Jan 30 14:01:47.653 INFO Fetch successful Jan 30 14:01:47.691771 extend-filesystems[2117]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:01:47.705452 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 14:01:47.702122 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:01:47.706505 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:01:47.715989 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:01:47.716544 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:01:47.721320 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:01:47.738586 jq[2113]: true Jan 30 14:01:47.739485 update_engine[2108]: I20250130 14:01:47.734467 2108 main.cc:92] Flatcar Update Engine starting Jan 30 14:01:47.759190 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 30 14:01:47.760851 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:01:47.776373 update_engine[2108]: I20250130 14:01:47.768558 2108 update_check_scheduler.cc:74] Next update check in 4m41s Jan 30 14:01:47.821215 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 30 14:01:47.855636 extend-filesystems[2117]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 30 14:01:47.855636 extend-filesystems[2117]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 14:01:47.855636 extend-filesystems[2117]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 30 14:01:47.841694 (ntainerd)[2132]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:01:47.875063 jq[2130]: true Jan 30 14:01:47.889334 extend-filesystems[2077]: Resized filesystem in /dev/nvme0n1p9 Jan 30 14:01:47.885583 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:01:47.886146 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:01:47.907677 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:01:47.964720 dbus-daemon[2075]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 14:01:47.977276 tar[2125]: linux-arm64/helm Jan 30 14:01:47.977187 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:01:47.994059 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 14:01:48.035790 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 30 14:01:48.039616 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:01:48.039831 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:01:48.039885 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:01:48.048322 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 14:01:48.050510 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:01:48.050553 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:01:48.054385 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:01:48.060704 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:01:48.085708 bash[2184]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:01:48.106756 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:01:48.171809 systemd[1]: Starting sshkeys.service... Jan 30 14:01:48.199048 systemd-logind[2104]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 14:01:48.199102 systemd-logind[2104]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 30 14:01:48.214568 systemd-logind[2104]: New seat seat0. Jan 30 14:01:48.227228 systemd[1]: Started systemd-logind.service - User Login Management. 
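[Editorial note, not part of the log: the extend-filesystems/resize2fs entries above record the root partition /dev/nvme0n1p9 growing from 553472 to 1489915 blocks at a 4k block size — roughly 2.1 GiB before and 5.7 GiB after. A two-line sanity check, illustrative only:]

```python
BLOCK = 4096  # "(4k) blocks" per the resize2fs output above

for blocks in (553472, 1489915):
    print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
# 553472 blocks = 2.11 GiB
# 1489915 blocks = 5.68 GiB
```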
Jan 30 14:01:48.267567 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:01:48.330713 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:01:48.366521 amazon-ssm-agent[2180]: Initializing new seelog logger Jan 30 14:01:48.372724 amazon-ssm-agent[2180]: New Seelog Logger Creation Complete Jan 30 14:01:48.372724 amazon-ssm-agent[2180]: 2025/01/30 14:01:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:01:48.372724 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:01:48.372724 amazon-ssm-agent[2180]: 2025/01/30 14:01:48 processing appconfig overrides Jan 30 14:01:48.380282 amazon-ssm-agent[2180]: 2025/01/30 14:01:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:01:48.380282 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:01:48.380282 amazon-ssm-agent[2180]: 2025/01/30 14:01:48 processing appconfig overrides Jan 30 14:01:48.380282 amazon-ssm-agent[2180]: 2025/01/30 14:01:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:01:48.380282 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:01:48.380282 amazon-ssm-agent[2180]: 2025/01/30 14:01:48 processing appconfig overrides Jan 30 14:01:48.389373 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO Proxy environment variables: Jan 30 14:01:48.412270 amazon-ssm-agent[2180]: 2025/01/30 14:01:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:01:48.412270 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:01:48.412270 amazon-ssm-agent[2180]: 2025/01/30 14:01:48 processing appconfig overrides Jan 30 14:01:48.492178 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO https_proxy: Jan 30 14:01:48.609285 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2190) Jan 30 14:01:48.612798 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO http_proxy: Jan 30 14:01:48.713187 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO no_proxy: Jan 30 14:01:48.721902 dbus-daemon[2075]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 14:01:48.722165 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 14:01:48.728050 dbus-daemon[2075]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2185 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 14:01:48.749894 locksmithd[2186]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:01:48.751109 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 30 14:01:48.765720 coreos-metadata[2224]: Jan 30 14:01:48.765 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 14:01:48.771210 coreos-metadata[2224]: Jan 30 14:01:48.771 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 30 14:01:48.775962 coreos-metadata[2224]: Jan 30 14:01:48.775 INFO Fetch successful Jan 30 14:01:48.775962 coreos-metadata[2224]: Jan 30 14:01:48.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 14:01:48.780939 coreos-metadata[2224]: Jan 30 14:01:48.780 INFO Fetch successful Jan 30 14:01:48.785932 unknown[2224]: wrote ssh authorized keys file for user: core Jan 30 14:01:48.812191 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO Checking if agent identity type OnPrem can be assumed Jan 30 14:01:48.853213 polkitd[2268]: Started polkitd version 121 Jan 30 14:01:48.873131 update-ssh-keys[2286]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:01:48.877627 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:01:48.893942 containerd[2132]: time="2025-01-30T14:01:48.886408610Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:01:48.892333 systemd[1]: Finished sshkeys.service. Jan 30 14:01:48.916266 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO Checking if agent identity type EC2 can be assumed Jan 30 14:01:48.923145 polkitd[2268]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 14:01:48.923381 polkitd[2268]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 14:01:48.946601 polkitd[2268]: Finished loading, compiling and executing 2 rules Jan 30 14:01:48.961764 dbus-daemon[2075]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 14:01:48.965927 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 14:01:48.971479 polkitd[2268]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 14:01:49.020971 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO Agent will take identity from EC2 Jan 30 14:01:49.044507 systemd-hostnamed[2185]: Hostname set to (transient) Jan 30 14:01:49.046381 containerd[2132]: time="2025-01-30T14:01:49.046289651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:01:49.047349 systemd-resolved[2022]: System hostname changed to 'ip-172-31-23-215'. Jan 30 14:01:49.051958 containerd[2132]: time="2025-01-30T14:01:49.051888287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:01:49.052117 containerd[2132]: time="2025-01-30T14:01:49.052087715Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:01:49.052227 containerd[2132]: time="2025-01-30T14:01:49.052200407Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:01:49.052850 containerd[2132]: time="2025-01-30T14:01:49.052814891Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:01:49.055220 containerd[2132]: time="2025-01-30T14:01:49.053646275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 30 14:01:49.055220 containerd[2132]: time="2025-01-30T14:01:49.053823419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:01:49.055220 containerd[2132]: time="2025-01-30T14:01:49.053853131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:01:49.055220 containerd[2132]: time="2025-01-30T14:01:49.054207239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:01:49.055220 containerd[2132]: time="2025-01-30T14:01:49.054256019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:01:49.055220 containerd[2132]: time="2025-01-30T14:01:49.054293003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:01:49.055220 containerd[2132]: time="2025-01-30T14:01:49.054318551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:01:49.055220 containerd[2132]: time="2025-01-30T14:01:49.054478859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:01:49.056612 containerd[2132]: time="2025-01-30T14:01:49.056566475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:01:49.058271 containerd[2132]: time="2025-01-30T14:01:49.057855143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:01:49.058271 containerd[2132]: time="2025-01-30T14:01:49.057904115Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:01:49.058271 containerd[2132]: time="2025-01-30T14:01:49.058101911Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:01:49.058271 containerd[2132]: time="2025-01-30T14:01:49.058194791Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:01:49.065810 containerd[2132]: time="2025-01-30T14:01:49.065748455Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:01:49.069840 containerd[2132]: time="2025-01-30T14:01:49.066037019Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:01:49.069840 containerd[2132]: time="2025-01-30T14:01:49.067412507Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:01:49.069840 containerd[2132]: time="2025-01-30T14:01:49.067460063Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:01:49.069840 containerd[2132]: time="2025-01-30T14:01:49.067523759Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 30 14:01:49.069840 containerd[2132]: time="2025-01-30T14:01:49.069586235Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:01:49.072185 containerd[2132]: time="2025-01-30T14:01:49.071530151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:01:49.073466 containerd[2132]: time="2025-01-30T14:01:49.073398395Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:01:49.073649 containerd[2132]: time="2025-01-30T14:01:49.073622171Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:01:49.073789 containerd[2132]: time="2025-01-30T14:01:49.073757519Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:01:49.074301 containerd[2132]: time="2025-01-30T14:01:49.073934855Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:01:49.074460 containerd[2132]: time="2025-01-30T14:01:49.074430695Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:01:49.074604 containerd[2132]: time="2025-01-30T14:01:49.074577911Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:01:49.074734 containerd[2132]: time="2025-01-30T14:01:49.074707967Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:01:49.074988 containerd[2132]: time="2025-01-30T14:01:49.074958491Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:01:49.075116 containerd[2132]: time="2025-01-30T14:01:49.075089723Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:01:49.075356 containerd[2132]: time="2025-01-30T14:01:49.075327263Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:01:49.075806 containerd[2132]: time="2025-01-30T14:01:49.075768875Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:01:49.076069 containerd[2132]: time="2025-01-30T14:01:49.075935339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.076069 containerd[2132]: time="2025-01-30T14:01:49.075998183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.076069 containerd[2132]: time="2025-01-30T14:01:49.076031951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.077318 containerd[2132]: time="2025-01-30T14:01:49.076368887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.077510 containerd[2132]: time="2025-01-30T14:01:49.076407167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077619743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077669855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077704715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077753111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077791511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077821571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077851427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077883131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077920463Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.077974343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.078004139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.078033083Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.078163799Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:01:49.078282 containerd[2132]: time="2025-01-30T14:01:49.078202151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:01:49.078918 containerd[2132]: time="2025-01-30T14:01:49.078228299Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:01:49.079575 containerd[2132]: time="2025-01-30T14:01:49.079352471Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:01:49.079575 containerd[2132]: time="2025-01-30T14:01:49.079419443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:01:49.079575 containerd[2132]: time="2025-01-30T14:01:49.079453703Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:01:49.079575 containerd[2132]: time="2025-01-30T14:01:49.079500635Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:01:49.079575 containerd[2132]: time="2025-01-30T14:01:49.079529171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 14:01:49.083822 containerd[2132]: time="2025-01-30T14:01:49.083356427Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:01:49.083822 containerd[2132]: time="2025-01-30T14:01:49.083554619Z" level=info msg="Connect containerd service" Jan 30 14:01:49.083822 containerd[2132]: time="2025-01-30T14:01:49.083645243Z" level=info msg="using legacy CRI server" Jan 30 14:01:49.083822 containerd[2132]: time="2025-01-30T14:01:49.083665835Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:01:49.085546 containerd[2132]: time="2025-01-30T14:01:49.084533075Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:01:49.089949 containerd[2132]: time="2025-01-30T14:01:49.088438919Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
14:01:49.090183 containerd[2132]: time="2025-01-30T14:01:49.090098903Z" level=info msg="Start subscribing containerd event" Jan 30 14:01:49.090261 containerd[2132]: time="2025-01-30T14:01:49.090200543Z" level=info msg="Start recovering state" Jan 30 14:01:49.090392 containerd[2132]: time="2025-01-30T14:01:49.090353939Z" level=info msg="Start event monitor" Jan 30 14:01:49.090480 containerd[2132]: time="2025-01-30T14:01:49.090388907Z" level=info msg="Start snapshots syncer" Jan 30 14:01:49.090480 containerd[2132]: time="2025-01-30T14:01:49.090413987Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:01:49.090480 containerd[2132]: time="2025-01-30T14:01:49.090433487Z" level=info msg="Start streaming server" Jan 30 14:01:49.096285 containerd[2132]: time="2025-01-30T14:01:49.090858251Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:01:49.096285 containerd[2132]: time="2025-01-30T14:01:49.090967955Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:01:49.092141 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:01:49.096624 containerd[2132]: time="2025-01-30T14:01:49.091058171Z" level=info msg="containerd successfully booted in 0.214131s" Jan 30 14:01:49.125285 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 14:01:49.223396 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 14:01:49.323469 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 14:01:49.422613 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 30 14:01:49.523370 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 30 14:01:49.625076 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO [amazon-ssm-agent] Starting Core Agent Jan 30 14:01:49.725360 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 30 14:01:49.830289 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO [Registrar] Starting registrar module Jan 30 14:01:49.869128 sshd_keygen[2121]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:01:49.928021 amazon-ssm-agent[2180]: 2025-01-30 14:01:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 30 14:01:49.978351 tar[2125]: linux-arm64/LICENSE Jan 30 14:01:49.978947 tar[2125]: linux-arm64/README.md Jan 30 14:01:49.996952 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:01:50.014306 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:01:50.028789 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:01:50.049227 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:01:50.050669 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:01:50.063813 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:01:50.078360 amazon-ssm-agent[2180]: 2025-01-30 14:01:50 INFO [EC2Identity] EC2 registration was successful. Jan 30 14:01:50.091026 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:01:50.107828 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:01:50.118865 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
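
The earlier "failed to load cni during init" error is expected on first boot: /etc/cni/net.d is empty until a network plugin is installed, and the "cni network conf syncer" started above picks a config up once one appears. As a purely hypothetical illustration (the name and subnet below are invented, not taken from this system), a minimal bridge-plugin conflist has this shape:

```python
import json

conflist = {
    "cniVersion": "0.4.0",
    "name": "example-bridge-net",           # invented example name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",    # invented example subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# A file like this, dropped into /etc/cni/net.d/10-example.conflist,
# is what the conf syncer above is waiting for.
print(json.dumps(conflist, indent=2))
```
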
Jan 30 14:01:50.124130 amazon-ssm-agent[2180]: 2025-01-30 14:01:50 INFO [CredentialRefresher] credentialRefresher has started Jan 30 14:01:50.124130 amazon-ssm-agent[2180]: 2025-01-30 14:01:50 INFO [CredentialRefresher] Starting credentials refresher loop Jan 30 14:01:50.124130 amazon-ssm-agent[2180]: 2025-01-30 14:01:50 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 30 14:01:50.121477 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:01:50.177649 amazon-ssm-agent[2180]: 2025-01-30 14:01:50 INFO [CredentialRefresher] Next credential rotation will be in 31.716659744666668 minutes Jan 30 14:01:50.513616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:50.518493 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:01:50.521607 (kubelet)[2367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:01:50.525831 systemd[1]: Startup finished in 10.384s (kernel) + 9.537s (userspace) = 19.921s. Jan 30 14:01:51.149453 amazon-ssm-agent[2180]: 2025-01-30 14:01:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 30 14:01:51.250773 amazon-ssm-agent[2180]: 2025-01-30 14:01:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2377) started Jan 30 14:01:51.351612 amazon-ssm-agent[2180]: 2025-01-30 14:01:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 30 14:01:51.760483 kubelet[2367]: E0130 14:01:51.760357 2367 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:01:51.763859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:01:51.764231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:01:54.709282 systemd-resolved[2022]: Clock change detected. Flushing caches. Jan 30 14:01:55.244926 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:01:55.250676 systemd[1]: Started sshd@0-172.31.23.215:22-139.178.89.65:51970.service - OpenSSH per-connection server daemon (139.178.89.65:51970). Jan 30 14:01:55.427666 sshd[2391]: Accepted publickey for core from 139.178.89.65 port 51970 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:01:55.431552 sshd[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:55.451269 systemd-logind[2104]: New session 1 of user core. Jan 30 14:01:55.452740 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:01:55.460671 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:01:55.494682 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:01:55.509906 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:01:55.517978 (systemd)[2397]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:01:55.729687 systemd[2397]: Queued start job for default target default.target. 
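
The kubelet exit above (status 1, missing /var/lib/kubelet/config.yaml) is the normal state of a node that has not been bootstrapped yet; systemd keeps restarting the unit, as the later "Scheduled restart job" entries show, until provisioning writes that file. On kubeadm-based setups it is generated by `kubeadm init`/`kubeadm join`. Purely as an illustration of the file's shape, with assumed example values (cgroupfs and the CA path do appear later in this log):

```python
# Illustrative skeleton only; the real file is written during node bootstrap.
KUBELET_CONFIG_SKELETON = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
"""

print(KUBELET_CONFIG_SKELETON)
```
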
Jan 30 14:01:55.730402 systemd[2397]: Created slice app.slice - User Application Slice. Jan 30 14:01:55.730456 systemd[2397]: Reached target paths.target - Paths. Jan 30 14:01:55.730487 systemd[2397]: Reached target timers.target - Timers. Jan 30 14:01:55.747352 systemd[2397]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:01:55.759751 systemd[2397]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:01:55.761013 systemd[2397]: Reached target sockets.target - Sockets. Jan 30 14:01:55.761076 systemd[2397]: Reached target basic.target - Basic System. Jan 30 14:01:55.761799 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:01:55.763286 systemd[2397]: Reached target default.target - Main User Target. Jan 30 14:01:55.763402 systemd[2397]: Startup finished in 233ms. Jan 30 14:01:55.768709 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:01:55.918978 systemd[1]: Started sshd@1-172.31.23.215:22-139.178.89.65:51974.service - OpenSSH per-connection server daemon (139.178.89.65:51974). Jan 30 14:01:56.097647 sshd[2409]: Accepted publickey for core from 139.178.89.65 port 51974 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:01:56.100543 sshd[2409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:56.108670 systemd-logind[2104]: New session 2 of user core. Jan 30 14:01:56.119823 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:01:56.250538 sshd[2409]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:56.257967 systemd[1]: sshd@1-172.31.23.215:22-139.178.89.65:51974.service: Deactivated successfully. Jan 30 14:01:56.258033 systemd-logind[2104]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:01:56.264484 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:01:56.266518 systemd-logind[2104]: Removed session 2. Jan 30 14:01:56.279719 systemd[1]: Started sshd@2-172.31.23.215:22-139.178.89.65:51978.service - OpenSSH per-connection server daemon (139.178.89.65:51978). Jan 30 14:01:56.458980 sshd[2417]: Accepted publickey for core from 139.178.89.65 port 51978 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:01:56.461532 sshd[2417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:56.470103 systemd-logind[2104]: New session 3 of user core. Jan 30 14:01:56.476704 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:01:56.597850 sshd[2417]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:56.604565 systemd[1]: sshd@2-172.31.23.215:22-139.178.89.65:51978.service: Deactivated successfully. Jan 30 14:01:56.610247 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:01:56.610248 systemd-logind[2104]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:01:56.613248 systemd-logind[2104]: Removed session 3. Jan 30 14:01:56.627698 systemd[1]: Started sshd@3-172.31.23.215:22-139.178.89.65:51986.service - OpenSSH per-connection server daemon (139.178.89.65:51986). Jan 30 14:01:56.806017 sshd[2425]: Accepted publickey for core from 139.178.89.65 port 51986 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:01:56.807941 sshd[2425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:56.815384 systemd-logind[2104]: New session 4 of user core. Jan 30 14:01:56.825644 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 30 14:01:56.952810 sshd[2425]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:56.957596 systemd[1]: sshd@3-172.31.23.215:22-139.178.89.65:51986.service: Deactivated successfully. Jan 30 14:01:56.964623 systemd-logind[2104]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:01:56.965818 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:01:56.968799 systemd-logind[2104]: Removed session 4. Jan 30 14:01:56.981687 systemd[1]: Started sshd@4-172.31.23.215:22-139.178.89.65:52002.service - OpenSSH per-connection server daemon (139.178.89.65:52002). Jan 30 14:01:57.163427 sshd[2433]: Accepted publickey for core from 139.178.89.65 port 52002 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:01:57.166126 sshd[2433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:57.173579 systemd-logind[2104]: New session 5 of user core. Jan 30 14:01:57.185769 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:01:57.303585 sudo[2437]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:01:57.304422 sudo[2437]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:01:57.319698 sudo[2437]: pam_unix(sudo:session): session closed for user root Jan 30 14:01:57.343611 sshd[2433]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:57.350768 systemd[1]: sshd@4-172.31.23.215:22-139.178.89.65:52002.service: Deactivated successfully. Jan 30 14:01:57.355948 systemd-logind[2104]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:01:57.356589 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:01:57.360542 systemd-logind[2104]: Removed session 5. Jan 30 14:01:57.372679 systemd[1]: Started sshd@5-172.31.23.215:22-139.178.89.65:52018.service - OpenSSH per-connection server daemon (139.178.89.65:52018). Jan 30 14:01:57.551525 sshd[2442]: Accepted publickey for core from 139.178.89.65 port 52018 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:01:57.553250 sshd[2442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:57.561258 systemd-logind[2104]: New session 6 of user core. Jan 30 14:01:57.564694 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:01:57.670402 sudo[2447]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:01:57.671599 sudo[2447]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:01:57.677816 sudo[2447]: pam_unix(sudo:session): session closed for user root Jan 30 14:01:57.687870 sudo[2446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:01:57.688511 sudo[2446]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:01:57.712221 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:01:57.716745 auditctl[2450]: No rules Jan 30 14:01:57.717566 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:01:57.718061 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:01:57.728129 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:01:57.784895 augenrules[2469]: No rules Jan 30 14:01:57.787890 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 30 14:01:57.791508 sudo[2446]: pam_unix(sudo:session): session closed for user root Jan 30 14:01:57.815796 sshd[2442]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:57.823952 systemd[1]: sshd@5-172.31.23.215:22-139.178.89.65:52018.service: Deactivated successfully. Jan 30 14:01:57.824784 systemd-logind[2104]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:01:57.831838 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:01:57.833597 systemd-logind[2104]: Removed session 6. Jan 30 14:01:57.848696 systemd[1]: Started sshd@6-172.31.23.215:22-139.178.89.65:52026.service - OpenSSH per-connection server daemon (139.178.89.65:52026). Jan 30 14:01:58.017041 sshd[2478]: Accepted publickey for core from 139.178.89.65 port 52026 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:01:58.019563 sshd[2478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:58.028022 systemd-logind[2104]: New session 7 of user core. Jan 30 14:01:58.038726 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:01:58.144361 sudo[2482]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:01:58.145525 sudo[2482]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:01:58.581660 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:01:58.590922 (dockerd)[2498]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:01:58.945264 dockerd[2498]: time="2025-01-30T14:01:58.944876529Z" level=info msg="Starting up" Jan 30 14:01:59.255870 dockerd[2498]: time="2025-01-30T14:01:59.255376434Z" level=info msg="Loading containers: start." Jan 30 14:01:59.411213 kernel: Initializing XFRM netlink socket Jan 30 14:01:59.443430 (udev-worker)[2520]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:01:59.531975 systemd-networkd[1692]: docker0: Link UP Jan 30 14:01:59.558493 dockerd[2498]: time="2025-01-30T14:01:59.558425180Z" level=info msg="Loading containers: done." Jan 30 14:01:59.588459 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2959151262-merged.mount: Deactivated successfully. Jan 30 14:01:59.596280 dockerd[2498]: time="2025-01-30T14:01:59.596212244Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:01:59.597173 dockerd[2498]: time="2025-01-30T14:01:59.597096320Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:01:59.597570 dockerd[2498]: time="2025-01-30T14:01:59.597538964Z" level=info msg="Daemon has completed initialization" Jan 30 14:01:59.648881 dockerd[2498]: time="2025-01-30T14:01:59.648763436Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:01:59.649424 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:02:00.840816 containerd[2132]: time="2025-01-30T14:02:00.840746662Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:02:01.481954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758592498.mount: Deactivated successfully. Jan 30 14:02:02.140126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
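
"API listen on /run/docker.sock" above means the Engine API now answers plain HTTP on that Unix socket. A stdlib-only probe of it, speaking HTTP/1.0 by hand so that no docker SDK needs to be assumed:

```python
import socket

def docker_api(path="/version", sock_path="/run/docker.sock"):
    # Raw HTTP/1.0 over the daemon's Unix socket; the server closes the
    # connection after responding, which terminates the recv loop.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(f"GET {path} HTTP/1.0\r\nHost: docker\r\n\r\n".encode())
    chunks = []
    while data := s.recv(4096):
        chunks.append(data)
    s.close()
    return b"".join(chunks).decode()

# Prints headers plus a JSON body with Version, ApiVersion, and so on.
print(docker_api("/version"))
```
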
Jan 30 14:02:02.150597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:02.488534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:02.493748 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:02:02.585523 kubelet[2709]: E0130 14:02:02.584648 2709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:02:02.592452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:02:02.592831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:02:03.137252 containerd[2132]: time="2025-01-30T14:02:03.136152501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:03.138449 containerd[2132]: time="2025-01-30T14:02:03.138376641Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935" Jan 30 14:02:03.140242 containerd[2132]: time="2025-01-30T14:02:03.140139237Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:03.145984 containerd[2132]: time="2025-01-30T14:02:03.145899861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:03.148588 containerd[2132]: time="2025-01-30T14:02:03.148315821Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.307503195s" Jan 30 14:02:03.148588 containerd[2132]: time="2025-01-30T14:02:03.148372497Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 14:02:03.186639 containerd[2132]: time="2025-01-30T14:02:03.186586258Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:02:04.737600 containerd[2132]: time="2025-01-30T14:02:04.737244853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:04.739361 containerd[2132]: time="2025-01-30T14:02:04.738924577Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901561" Jan 30 14:02:04.745244 containerd[2132]: time="2025-01-30T14:02:04.743750029Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:04.750137 containerd[2132]: time="2025-01-30T14:02:04.750077905Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:04.752534 containerd[2132]: time="2025-01-30T14:02:04.752469529Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.565808271s" Jan 30 14:02:04.752670 containerd[2132]: time="2025-01-30T14:02:04.752532697Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 14:02:04.791916 containerd[2132]: time="2025-01-30T14:02:04.791864306Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:02:05.908985 containerd[2132]: time="2025-01-30T14:02:05.908907627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:05.910554 containerd[2132]: time="2025-01-30T14:02:05.910478763Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338" Jan 30 14:02:05.912025 containerd[2132]: time="2025-01-30T14:02:05.911949219Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:05.917591 containerd[2132]: time="2025-01-30T14:02:05.917496339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:05.920007 containerd[2132]: time="2025-01-30T14:02:05.919827123Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.127901221s" Jan 30 14:02:05.920007 containerd[2132]: time="2025-01-30T14:02:05.919881735Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 14:02:05.957097 containerd[2132]: time="2025-01-30T14:02:05.957013263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:02:07.190571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782320010.mount: Deactivated successfully. 
Jan 30 14:02:07.708461 containerd[2132]: time="2025-01-30T14:02:07.708380464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:07.709893 containerd[2132]: time="2025-01-30T14:02:07.709824712Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712" Jan 30 14:02:07.711448 containerd[2132]: time="2025-01-30T14:02:07.711370192Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:07.714949 containerd[2132]: time="2025-01-30T14:02:07.714900364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:07.716926 containerd[2132]: time="2025-01-30T14:02:07.716469184Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.759373649s" Jan 30 14:02:07.716926 containerd[2132]: time="2025-01-30T14:02:07.716521648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 14:02:07.754463 containerd[2132]: time="2025-01-30T14:02:07.754383172Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:02:08.288057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238721746.mount: Deactivated successfully. 
Jan 30 14:02:09.369259 containerd[2132]: time="2025-01-30T14:02:09.368621200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:09.370943 containerd[2132]: time="2025-01-30T14:02:09.370837960Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 30 14:02:09.373377 containerd[2132]: time="2025-01-30T14:02:09.373289572Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:09.380015 containerd[2132]: time="2025-01-30T14:02:09.379916788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:09.383404 containerd[2132]: time="2025-01-30T14:02:09.382455640Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.628001476s" Jan 30 14:02:09.383404 containerd[2132]: time="2025-01-30T14:02:09.382525276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 14:02:09.421965 containerd[2132]: time="2025-01-30T14:02:09.421910621Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:02:09.900616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031320822.mount: Deactivated successfully. 
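
The containerd `Pulled image ... size \"N\" in Xs` entries carry both the compressed size and the wall-clock time, so per-image pull throughput can be computed straight from this journal. A sketch, with the regex matched to the escaped quoting seen above and the journal text assumed to be available as a string:

```python
import re

# Quoting matches the journal text above, where the msg="..." payload
# contains escaped \" around the image reference and the size.
PULL = re.compile(
    r'Pulled image \\"(?P<ref>[^\\]+)\\".*?size \\"(?P<size>\d+)\\"'
    r' in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def pull_stats(journal_text):
    for m in PULL.finditer(journal_text):
        secs = float(m["dur"]) / (1000 if m["unit"] == "ms" else 1)
        mib = int(m["size"]) / 2**20
        print(f'{m["ref"]}: {mib:.1f} MiB in {secs:.2f}s '
              f'({mib / secs:.1f} MiB/s)')

# e.g. pull_stats(open("boot.log").read())  # "boot.log" is a placeholder path
```
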
Jan 30 14:02:09.907610 containerd[2132]: time="2025-01-30T14:02:09.907538167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:09.909119 containerd[2132]: time="2025-01-30T14:02:09.909070819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 30 14:02:09.910008 containerd[2132]: time="2025-01-30T14:02:09.909922099Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:09.914152 containerd[2132]: time="2025-01-30T14:02:09.914055835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:09.916123 containerd[2132]: time="2025-01-30T14:02:09.915935107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 493.958234ms" Jan 30 14:02:09.916123 containerd[2132]: time="2025-01-30T14:02:09.915990427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 14:02:09.957639 containerd[2132]: time="2025-01-30T14:02:09.957503923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:02:10.489304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2779950351.mount: Deactivated successfully. Jan 30 14:02:12.640064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:02:12.650596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:13.015508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:13.025032 (kubelet)[2865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:02:13.126088 kubelet[2865]: E0130 14:02:13.125971 2865 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:02:13.131049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:02:13.131552 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 14:02:13.708921 containerd[2132]: time="2025-01-30T14:02:13.708838414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:13.711292 containerd[2132]: time="2025-01-30T14:02:13.711224230Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jan 30 14:02:13.713370 containerd[2132]: time="2025-01-30T14:02:13.713286694Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:13.719906 containerd[2132]: time="2025-01-30T14:02:13.719808190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:13.722224 containerd[2132]: time="2025-01-30T14:02:13.722156962Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.764598631s" Jan 30 14:02:13.722490 containerd[2132]: time="2025-01-30T14:02:13.722357086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 14:02:19.284176 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 14:02:21.010724 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:21.024966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:21.067280 systemd[1]: Reloading requested from client PID 2943 ('systemctl') (unit session-7.scope)... Jan 30 14:02:21.067482 systemd[1]: Reloading... Jan 30 14:02:21.305218 zram_generator::config[2984]: No configuration found. Jan 30 14:02:21.545882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:02:21.706925 systemd[1]: Reloading finished in 638 ms. Jan 30 14:02:21.790423 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:02:21.790826 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:02:21.791848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:21.804560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:22.067540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:22.080870 (kubelet)[3058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:02:22.155364 kubelet[3058]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:02:22.155364 kubelet[3058]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 14:02:22.155364 kubelet[3058]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:02:22.157131 kubelet[3058]: I0130 14:02:22.157054 3058 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:02:23.283224 kubelet[3058]: I0130 14:02:23.282446 3058 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:02:23.283224 kubelet[3058]: I0130 14:02:23.282487 3058 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:02:23.283224 kubelet[3058]: I0130 14:02:23.282820 3058 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:02:23.312169 kubelet[3058]: E0130 14:02:23.312130 3058 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.215:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:23.314221 kubelet[3058]: I0130 14:02:23.313985 3058 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:02:23.328121 kubelet[3058]: I0130 14:02:23.328055 3058 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:02:23.328864 kubelet[3058]: I0130 14:02:23.328797 3058 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:02:23.329160 kubelet[3058]: I0130 14:02:23.328857 3058 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:02:23.329383 kubelet[3058]: I0130 14:02:23.329217 3058 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 30 14:02:23.329383 kubelet[3058]: I0130 14:02:23.329241 3058 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:02:23.329514 kubelet[3058]: I0130 14:02:23.329471 3058 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:02:23.331002 kubelet[3058]: I0130 14:02:23.330954 3058 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:02:23.333223 kubelet[3058]: I0130 14:02:23.331593 3058 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:02:23.333223 kubelet[3058]: I0130 14:02:23.331698 3058 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:02:23.333223 kubelet[3058]: I0130 14:02:23.331745 3058 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:02:23.333223 kubelet[3058]: W0130 14:02:23.331723 3058 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-215&limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:23.333223 kubelet[3058]: E0130 14:02:23.331797 3058 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-215&limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:23.333223 kubelet[3058]: I0130 14:02:23.333128 3058 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:02:23.333750 kubelet[3058]: I0130 14:02:23.333714 3058 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:02:23.333844 kubelet[3058]: W0130 14:02:23.333815 3058 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
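At this point kubelet is running before the apiserver it is supposed to register with, so every list/watch and the CSR post against https://172.31.23.215:6443 fails with "connection refused" until the kube-apiserver static pod (read from /etc/kubernetes/manifests) comes up; the lease-controller entries that follow show the retry interval doubling from 200ms through 400ms and 800ms to 1.6s. A rough sketch of that probe-and-double pattern, assuming nothing beyond Go's standard library (an illustration of the visible behavior, not kubelet's actual controller code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "172.31.23.215:6443"       // apiserver endpoint from the log
        interval := 200 * time.Millisecond // first retry interval seen below
        for {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable, registration can proceed")
                return
            }
            fmt.Printf("dial failed (%v), retrying in %v\n", err, interval)
            time.Sleep(interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the log
        }
    }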
Jan 30 14:02:23.334915 kubelet[3058]: I0130 14:02:23.334864 3058 server.go:1264] "Started kubelet" Jan 30 14:02:23.335148 kubelet[3058]: W0130 14:02:23.335076 3058 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.215:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:23.335379 kubelet[3058]: E0130 14:02:23.335162 3058 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.215:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:23.343284 kubelet[3058]: I0130 14:02:23.343231 3058 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:02:23.345305 kubelet[3058]: E0130 14:02:23.345072 3058 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.215:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.215:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-215.181f7d4b3fceb90a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-215,UID:ip-172-31-23-215,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-215,},FirstTimestamp:2025-01-30 14:02:23.334832394 +0000 UTC m=+1.247750684,LastTimestamp:2025-01-30 14:02:23.334832394 +0000 UTC m=+1.247750684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-215,}" Jan 30 14:02:23.351781 kubelet[3058]: I0130 14:02:23.351694 3058 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:02:23.352472 kubelet[3058]: I0130 14:02:23.352419 3058 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:02:23.354235 kubelet[3058]: I0130 14:02:23.353714 3058 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:02:23.354235 kubelet[3058]: I0130 14:02:23.353881 3058 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:02:23.354700 kubelet[3058]: I0130 14:02:23.354674 3058 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:02:23.356469 kubelet[3058]: I0130 14:02:23.356423 3058 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:02:23.356891 kubelet[3058]: I0130 14:02:23.356842 3058 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:02:23.357567 kubelet[3058]: W0130 14:02:23.357504 3058 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:23.358224 kubelet[3058]: E0130 14:02:23.357723 3058 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:23.358224 kubelet[3058]: E0130 14:02:23.357854 3058 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://172.31.23.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-215?timeout=10s\": dial tcp 172.31.23.215:6443: connect: connection refused" interval="200ms" Jan 30 14:02:23.358846 kubelet[3058]: I0130 14:02:23.358556 3058 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:02:23.360115 kubelet[3058]: I0130 14:02:23.360082 3058 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:02:23.360833 kubelet[3058]: E0130 14:02:23.360802 3058 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:02:23.364341 kubelet[3058]: I0130 14:02:23.364302 3058 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:02:23.400233 kubelet[3058]: I0130 14:02:23.400144 3058 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:02:23.404910 kubelet[3058]: I0130 14:02:23.404355 3058 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:02:23.404910 kubelet[3058]: I0130 14:02:23.404449 3058 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:02:23.404910 kubelet[3058]: I0130 14:02:23.404483 3058 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:02:23.404910 kubelet[3058]: E0130 14:02:23.404550 3058 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:02:23.412232 kubelet[3058]: W0130 14:02:23.412109 3058 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:23.412971 kubelet[3058]: E0130 14:02:23.412513 3058 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:23.427846 kubelet[3058]: I0130 14:02:23.427815 3058 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:02:23.428305 kubelet[3058]: I0130 14:02:23.428024 3058 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:02:23.428305 kubelet[3058]: I0130 14:02:23.428063 3058 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:02:23.431120 kubelet[3058]: I0130 14:02:23.431090 3058 policy_none.go:49] "None policy: Start" Jan 30 14:02:23.433251 kubelet[3058]: I0130 14:02:23.432791 3058 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:02:23.433251 kubelet[3058]: I0130 14:02:23.432841 3058 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:02:23.441216 kubelet[3058]: I0130 14:02:23.441151 3058 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:02:23.441523 kubelet[3058]: I0130 14:02:23.441451 3058 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:02:23.441652 kubelet[3058]: I0130 14:02:23.441628 3058 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 
14:02:23.452727 kubelet[3058]: E0130 14:02:23.452667 3058 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-215\" not found" Jan 30 14:02:23.455145 kubelet[3058]: I0130 14:02:23.455104 3058 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-215" Jan 30 14:02:23.455688 kubelet[3058]: E0130 14:02:23.455644 3058 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.215:6443/api/v1/nodes\": dial tcp 172.31.23.215:6443: connect: connection refused" node="ip-172-31-23-215" Jan 30 14:02:23.505249 kubelet[3058]: I0130 14:02:23.505012 3058 topology_manager.go:215] "Topology Admit Handler" podUID="53cbdbd556dcddb3af03992eb9035735" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-215" Jan 30 14:02:23.507470 kubelet[3058]: I0130 14:02:23.507073 3058 topology_manager.go:215] "Topology Admit Handler" podUID="c1e8ddb5a0e4dd3018c2c7c0310b066d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:23.509617 kubelet[3058]: I0130 14:02:23.509560 3058 topology_manager.go:215] "Topology Admit Handler" podUID="f9a572f6597ed826b77284cf25209822" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-215" Jan 30 14:02:23.554741 kubelet[3058]: I0130 14:02:23.554372 3058 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53cbdbd556dcddb3af03992eb9035735-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-215\" (UID: \"53cbdbd556dcddb3af03992eb9035735\") " pod="kube-system/kube-apiserver-ip-172-31-23-215" Jan 30 14:02:23.554741 kubelet[3058]: I0130 14:02:23.554431 3058 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53cbdbd556dcddb3af03992eb9035735-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-215\" (UID: \"53cbdbd556dcddb3af03992eb9035735\") " pod="kube-system/kube-apiserver-ip-172-31-23-215" Jan 30 14:02:23.554741 kubelet[3058]: I0130 14:02:23.554474 3058 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:23.554741 kubelet[3058]: I0130 14:02:23.554567 3058 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:23.554741 kubelet[3058]: I0130 14:02:23.554621 3058 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53cbdbd556dcddb3af03992eb9035735-ca-certs\") pod \"kube-apiserver-ip-172-31-23-215\" (UID: \"53cbdbd556dcddb3af03992eb9035735\") " pod="kube-system/kube-apiserver-ip-172-31-23-215" Jan 30 14:02:23.555230 kubelet[3058]: I0130 14:02:23.554658 3058 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:23.555545 kubelet[3058]: I0130 14:02:23.555386 3058 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:23.555545 kubelet[3058]: I0130 14:02:23.555436 3058 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:23.555545 kubelet[3058]: I0130 14:02:23.555477 3058 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9a572f6597ed826b77284cf25209822-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-215\" (UID: \"f9a572f6597ed826b77284cf25209822\") " pod="kube-system/kube-scheduler-ip-172-31-23-215" Jan 30 14:02:23.558701 kubelet[3058]: E0130 14:02:23.558635 3058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-215?timeout=10s\": dial tcp 172.31.23.215:6443: connect: connection refused" interval="400ms" Jan 30 14:02:23.658442 kubelet[3058]: I0130 14:02:23.658383 3058 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-215" Jan 30 14:02:23.658913 kubelet[3058]: E0130 14:02:23.658840 3058 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.215:6443/api/v1/nodes\": dial tcp 172.31.23.215:6443: connect: connection refused" node="ip-172-31-23-215" Jan 30 14:02:23.816977 containerd[2132]: time="2025-01-30T14:02:23.816821684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-215,Uid:53cbdbd556dcddb3af03992eb9035735,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:23.820749 containerd[2132]: time="2025-01-30T14:02:23.820611980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-215,Uid:c1e8ddb5a0e4dd3018c2c7c0310b066d,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:23.826022 containerd[2132]: time="2025-01-30T14:02:23.825670892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-215,Uid:f9a572f6597ed826b77284cf25209822,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:23.960006 kubelet[3058]: E0130 14:02:23.959950 3058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-215?timeout=10s\": dial tcp 172.31.23.215:6443: connect: connection refused" interval="800ms" Jan 30 14:02:24.061426 kubelet[3058]: I0130 14:02:24.061392 3058 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-215" Jan 30 14:02:24.062282 kubelet[3058]: E0130 14:02:24.062236 3058 kubelet_node_status.go:96] "Unable to 
register node with API server" err="Post \"https://172.31.23.215:6443/api/v1/nodes\": dial tcp 172.31.23.215:6443: connect: connection refused" node="ip-172-31-23-215" Jan 30 14:02:24.249657 kubelet[3058]: W0130 14:02:24.249538 3058 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:24.249657 kubelet[3058]: E0130 14:02:24.249624 3058 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:24.351552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177324376.mount: Deactivated successfully. Jan 30 14:02:24.367003 containerd[2132]: time="2025-01-30T14:02:24.366919807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:24.369311 containerd[2132]: time="2025-01-30T14:02:24.369243019Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:24.371428 containerd[2132]: time="2025-01-30T14:02:24.371335615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 30 14:02:24.373306 containerd[2132]: time="2025-01-30T14:02:24.373257895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:02:24.375402 containerd[2132]: time="2025-01-30T14:02:24.375337063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:24.378606 containerd[2132]: time="2025-01-30T14:02:24.378421099Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:24.379989 containerd[2132]: time="2025-01-30T14:02:24.379885975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:02:24.384532 containerd[2132]: time="2025-01-30T14:02:24.384450139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:02:24.388927 containerd[2132]: time="2025-01-30T14:02:24.388568743Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.788879ms" Jan 30 14:02:24.392879 containerd[2132]: time="2025-01-30T14:02:24.392805271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.869119ms" Jan 30 14:02:24.397007 containerd[2132]: time="2025-01-30T14:02:24.396752827Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.038535ms" Jan 30 14:02:24.591943 containerd[2132]: time="2025-01-30T14:02:24.591556952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:24.592728 containerd[2132]: time="2025-01-30T14:02:24.592091348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:24.593083 containerd[2132]: time="2025-01-30T14:02:24.592135184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:24.594849 containerd[2132]: time="2025-01-30T14:02:24.594561404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:24.595090 containerd[2132]: time="2025-01-30T14:02:24.594876416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:24.595392 containerd[2132]: time="2025-01-30T14:02:24.595105688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:24.595819 containerd[2132]: time="2025-01-30T14:02:24.595695548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:24.596038 containerd[2132]: time="2025-01-30T14:02:24.595550624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:24.607253 kubelet[3058]: W0130 14:02:24.605610 3058 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.215:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:24.607253 kubelet[3058]: E0130 14:02:24.605706 3058 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.215:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:24.618972 containerd[2132]: time="2025-01-30T14:02:24.618345728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:24.618972 containerd[2132]: time="2025-01-30T14:02:24.618515072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:24.618972 containerd[2132]: time="2025-01-30T14:02:24.618576392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:24.618972 containerd[2132]: time="2025-01-30T14:02:24.618793832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:24.622539 kubelet[3058]: W0130 14:02:24.622443 3058 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-215&limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:24.622688 kubelet[3058]: E0130 14:02:24.622552 3058 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-215&limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:24.757300 kubelet[3058]: W0130 14:02:24.756834 3058 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:24.757300 kubelet[3058]: E0130 14:02:24.756898 3058 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.215:6443: connect: connection refused Jan 30 14:02:24.758512 containerd[2132]: time="2025-01-30T14:02:24.758043189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-215,Uid:c1e8ddb5a0e4dd3018c2c7c0310b066d,Namespace:kube-system,Attempt:0,} returns sandbox id \"34bd861f22c00ddc2c87f4cc92916accb316cef357d2969b50551dac73549be5\"" Jan 30 14:02:24.760993 kubelet[3058]: E0130 14:02:24.760649 3058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-215?timeout=10s\": dial tcp 172.31.23.215:6443: connect: connection refused" interval="1.6s" Jan 30 14:02:24.771162 containerd[2132]: time="2025-01-30T14:02:24.771076005Z" level=info msg="CreateContainer within sandbox \"34bd861f22c00ddc2c87f4cc92916accb316cef357d2969b50551dac73549be5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:02:24.773483 containerd[2132]: time="2025-01-30T14:02:24.773422773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-215,Uid:f9a572f6597ed826b77284cf25209822,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9e423a4edf0ea55f8b62f701592b73f17a95f36fba486675dd0bc178f4f32ce\"" Jan 30 14:02:24.779670 containerd[2132]: time="2025-01-30T14:02:24.779455917Z" level=info msg="CreateContainer within sandbox \"b9e423a4edf0ea55f8b62f701592b73f17a95f36fba486675dd0bc178f4f32ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:02:24.786498 containerd[2132]: time="2025-01-30T14:02:24.786369609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-215,Uid:53cbdbd556dcddb3af03992eb9035735,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c86744391f80bf2f7a8eefd1cb5ae79ad5e13b6019051e93686a92daea79e8f\"" Jan 30 14:02:24.794582 containerd[2132]: time="2025-01-30T14:02:24.794111253Z" 
level=info msg="CreateContainer within sandbox \"9c86744391f80bf2f7a8eefd1cb5ae79ad5e13b6019051e93686a92daea79e8f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:02:24.830598 containerd[2132]: time="2025-01-30T14:02:24.830527605Z" level=info msg="CreateContainer within sandbox \"34bd861f22c00ddc2c87f4cc92916accb316cef357d2969b50551dac73549be5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e2c4f87ab947c826887f002f2946d42e816116f3c71ffe3a95b5507f4cc37127\"" Jan 30 14:02:24.832257 containerd[2132]: time="2025-01-30T14:02:24.831572637Z" level=info msg="StartContainer for \"e2c4f87ab947c826887f002f2946d42e816116f3c71ffe3a95b5507f4cc37127\"" Jan 30 14:02:24.845203 containerd[2132]: time="2025-01-30T14:02:24.843671841Z" level=info msg="CreateContainer within sandbox \"b9e423a4edf0ea55f8b62f701592b73f17a95f36fba486675dd0bc178f4f32ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c32bbb46f1c42fae627b81fd0627cc16f2b13d526453629048591d105510e46a\"" Jan 30 14:02:24.846790 containerd[2132]: time="2025-01-30T14:02:24.846647217Z" level=info msg="StartContainer for \"c32bbb46f1c42fae627b81fd0627cc16f2b13d526453629048591d105510e46a\"" Jan 30 14:02:24.854994 containerd[2132]: time="2025-01-30T14:02:24.854670933Z" level=info msg="CreateContainer within sandbox \"9c86744391f80bf2f7a8eefd1cb5ae79ad5e13b6019051e93686a92daea79e8f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f30a446f28830025101adbe6c7f075e3ebe24a966f9a47e334224cba7bb89202\"" Jan 30 14:02:24.858344 containerd[2132]: time="2025-01-30T14:02:24.856683273Z" level=info msg="StartContainer for \"f30a446f28830025101adbe6c7f075e3ebe24a966f9a47e334224cba7bb89202\"" Jan 30 14:02:24.866118 kubelet[3058]: I0130 14:02:24.866080 3058 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-215" Jan 30 14:02:24.866776 kubelet[3058]: E0130 14:02:24.866728 3058 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.215:6443/api/v1/nodes\": dial tcp 172.31.23.215:6443: connect: connection refused" node="ip-172-31-23-215" Jan 30 14:02:25.039805 containerd[2132]: time="2025-01-30T14:02:25.039608994Z" level=info msg="StartContainer for \"c32bbb46f1c42fae627b81fd0627cc16f2b13d526453629048591d105510e46a\" returns successfully" Jan 30 14:02:25.048792 containerd[2132]: time="2025-01-30T14:02:25.048713610Z" level=info msg="StartContainer for \"e2c4f87ab947c826887f002f2946d42e816116f3c71ffe3a95b5507f4cc37127\" returns successfully" Jan 30 14:02:25.091420 containerd[2132]: time="2025-01-30T14:02:25.091057422Z" level=info msg="StartContainer for \"f30a446f28830025101adbe6c7f075e3ebe24a966f9a47e334224cba7bb89202\" returns successfully" Jan 30 14:02:26.473223 kubelet[3058]: I0130 14:02:26.470830 3058 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-215" Jan 30 14:02:28.935952 kubelet[3058]: E0130 14:02:28.935639 3058 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-215\" not found" node="ip-172-31-23-215" Jan 30 14:02:29.170215 kubelet[3058]: I0130 14:02:29.168273 3058 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-215" Jan 30 14:02:29.258665 kubelet[3058]: E0130 14:02:29.258310 3058 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-215\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:29.336411 kubelet[3058]: I0130 14:02:29.336100 3058 apiserver.go:52] "Watching apiserver" Jan 30 14:02:29.354297 kubelet[3058]: I0130 14:02:29.354259 3058 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:02:31.066333 systemd[1]: Reloading requested from client PID 3328 ('systemctl') (unit session-7.scope)... Jan 30 14:02:31.066365 systemd[1]: Reloading... Jan 30 14:02:31.265215 zram_generator::config[3374]: No configuration found. Jan 30 14:02:31.526416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:02:31.712327 systemd[1]: Reloading finished in 645 ms. Jan 30 14:02:31.782169 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:31.799853 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:02:31.800870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:31.811780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:32.124546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:32.143918 (kubelet)[3438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:02:32.247237 kubelet[3438]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:02:32.247237 kubelet[3438]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:02:32.247237 kubelet[3438]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:02:32.247237 kubelet[3438]: I0130 14:02:32.246657 3438 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:02:32.257770 kubelet[3438]: I0130 14:02:32.257412 3438 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:02:32.258256 kubelet[3438]: I0130 14:02:32.257915 3438 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:02:32.258443 kubelet[3438]: I0130 14:02:32.258421 3438 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:02:32.261159 kubelet[3438]: I0130 14:02:32.261118 3438 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:02:32.264659 kubelet[3438]: I0130 14:02:32.264017 3438 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:02:32.277336 kubelet[3438]: I0130 14:02:32.277170 3438 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:02:32.278930 kubelet[3438]: I0130 14:02:32.278348 3438 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:02:32.278930 kubelet[3438]: I0130 14:02:32.278410 3438 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:02:32.278930 kubelet[3438]: I0130 14:02:32.278695 3438 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:02:32.278930 kubelet[3438]: I0130 14:02:32.278714 3438 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:02:32.278930 kubelet[3438]: I0130 14:02:32.278772 3438 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:02:32.279871 sudo[3451]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 14:02:32.281231 kubelet[3438]: I0130 14:02:32.280628 3438 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:02:32.281231 kubelet[3438]: I0130 14:02:32.280712 3438 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:02:32.281231 kubelet[3438]: I0130 14:02:32.280779 3438 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:02:32.281231 kubelet[3438]: I0130 14:02:32.280821 3438 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:02:32.280879 sudo[3451]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 14:02:32.284291 kubelet[3438]: I0130 14:02:32.283711 3438 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:02:32.284291 kubelet[3438]: I0130 14:02:32.284000 3438 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:02:32.287243 kubelet[3438]: I0130 14:02:32.286057 3438 server.go:1264] "Started kubelet" Jan 30 14:02:32.292049 kubelet[3438]: I0130 14:02:32.292018 3438 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Jan 30 14:02:32.305782 kubelet[3438]: I0130 14:02:32.305711 3438 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:02:32.335955 kubelet[3438]: I0130 14:02:32.335923 3438 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:02:32.341897 kubelet[3438]: I0130 14:02:32.314568 3438 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:02:32.362378 kubelet[3438]: I0130 14:02:32.307532 3438 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:02:32.383421 kubelet[3438]: I0130 14:02:32.383314 3438 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:02:32.384274 kubelet[3438]: I0130 14:02:32.314597 3438 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:02:32.384274 kubelet[3438]: I0130 14:02:32.365310 3438 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:02:32.386653 kubelet[3438]: I0130 14:02:32.381068 3438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:02:32.391943 kubelet[3438]: I0130 14:02:32.391903 3438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:02:32.392160 kubelet[3438]: I0130 14:02:32.392140 3438 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:02:32.392731 kubelet[3438]: I0130 14:02:32.392292 3438 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:02:32.392731 kubelet[3438]: E0130 14:02:32.392364 3438 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:02:32.408838 kubelet[3438]: I0130 14:02:32.407081 3438 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:02:32.409301 kubelet[3438]: I0130 14:02:32.409264 3438 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:02:32.423287 kubelet[3438]: E0130 14:02:32.422470 3438 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:02:32.428570 kubelet[3438]: I0130 14:02:32.428531 3438 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:02:32.429557 kubelet[3438]: E0130 14:02:32.429408 3438 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jan 30 14:02:32.433405 kubelet[3438]: I0130 14:02:32.433353 3438 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-215" Jan 30 14:02:32.461751 kubelet[3438]: I0130 14:02:32.460996 3438 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-215" Jan 30 14:02:32.465837 kubelet[3438]: I0130 14:02:32.464383 3438 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-215" Jan 30 14:02:32.493412 kubelet[3438]: E0130 14:02:32.493370 3438 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 14:02:32.605644 kubelet[3438]: I0130 14:02:32.605126 3438 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:02:32.605644 kubelet[3438]: I0130 14:02:32.605157 3438 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:02:32.605644 kubelet[3438]: I0130 14:02:32.605243 3438 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:02:32.605644 kubelet[3438]: I0130 14:02:32.605494 3438 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:02:32.605644 kubelet[3438]: I0130 14:02:32.605515 3438 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:02:32.605644 kubelet[3438]: I0130 14:02:32.605550 3438 policy_none.go:49] "None policy: Start" Jan 30 14:02:32.608289 kubelet[3438]: I0130 14:02:32.607866 3438 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:02:32.608289 kubelet[3438]: I0130 14:02:32.607922 3438 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:02:32.609176 kubelet[3438]: I0130 14:02:32.608630 3438 state_mem.go:75] "Updated machine memory state" Jan 30 14:02:32.618993 kubelet[3438]: I0130 14:02:32.618921 3438 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:02:32.619757 kubelet[3438]: I0130 14:02:32.619701 3438 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:02:32.620709 kubelet[3438]: I0130 14:02:32.620361 3438 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:02:32.694117 kubelet[3438]: I0130 14:02:32.694065 3438 topology_manager.go:215] "Topology Admit Handler" podUID="53cbdbd556dcddb3af03992eb9035735" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-215" Jan 30 14:02:32.697361 kubelet[3438]: I0130 14:02:32.694781 3438 topology_manager.go:215] "Topology Admit Handler" podUID="c1e8ddb5a0e4dd3018c2c7c0310b066d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:32.697361 kubelet[3438]: I0130 14:02:32.696934 3438 topology_manager.go:215] "Topology Admit Handler" podUID="f9a572f6597ed826b77284cf25209822" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-215" Jan 30 14:02:32.786343 kubelet[3438]: I0130 14:02:32.786280 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53cbdbd556dcddb3af03992eb9035735-k8s-certs\") pod 
\"kube-apiserver-ip-172-31-23-215\" (UID: \"53cbdbd556dcddb3af03992eb9035735\") " pod="kube-system/kube-apiserver-ip-172-31-23-215" Jan 30 14:02:32.787036 kubelet[3438]: I0130 14:02:32.786734 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:32.787036 kubelet[3438]: I0130 14:02:32.786911 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:32.787604 kubelet[3438]: I0130 14:02:32.787288 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:32.787604 kubelet[3438]: I0130 14:02:32.787441 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9a572f6597ed826b77284cf25209822-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-215\" (UID: \"f9a572f6597ed826b77284cf25209822\") " pod="kube-system/kube-scheduler-ip-172-31-23-215" Jan 30 14:02:32.787921 kubelet[3438]: I0130 14:02:32.787694 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53cbdbd556dcddb3af03992eb9035735-ca-certs\") pod \"kube-apiserver-ip-172-31-23-215\" (UID: \"53cbdbd556dcddb3af03992eb9035735\") " pod="kube-system/kube-apiserver-ip-172-31-23-215" Jan 30 14:02:32.788277 kubelet[3438]: I0130 14:02:32.788006 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53cbdbd556dcddb3af03992eb9035735-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-215\" (UID: \"53cbdbd556dcddb3af03992eb9035735\") " pod="kube-system/kube-apiserver-ip-172-31-23-215" Jan 30 14:02:32.788277 kubelet[3438]: I0130 14:02:32.788113 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:32.788494 kubelet[3438]: I0130 14:02:32.788170 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1e8ddb5a0e4dd3018c2c7c0310b066d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-215\" (UID: \"c1e8ddb5a0e4dd3018c2c7c0310b066d\") " pod="kube-system/kube-controller-manager-ip-172-31-23-215" Jan 30 14:02:32.809305 update_engine[2108]: I20250130 14:02:32.809221 2108 
update_attempter.cc:509] Updating boot flags... Jan 30 14:02:32.948390 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3486) Jan 30 14:02:33.295851 kubelet[3438]: I0130 14:02:33.295290 3438 apiserver.go:52] "Watching apiserver" Jan 30 14:02:33.384887 kubelet[3438]: I0130 14:02:33.384748 3438 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:02:33.397365 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3487) Jan 30 14:02:33.506467 kubelet[3438]: E0130 14:02:33.505464 3438 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-215\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-215" Jan 30 14:02:33.508591 sudo[3451]: pam_unix(sudo:session): session closed for user root Jan 30 14:02:33.595764 kubelet[3438]: I0130 14:02:33.594168 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-215" podStartSLOduration=1.594019001 podStartE2EDuration="1.594019001s" podCreationTimestamp="2025-01-30 14:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:33.574125881 +0000 UTC m=+1.419963321" watchObservedRunningTime="2025-01-30 14:02:33.594019001 +0000 UTC m=+1.439856453" Jan 30 14:02:33.597566 kubelet[3438]: I0130 14:02:33.596677 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-215" podStartSLOduration=1.5966516450000001 podStartE2EDuration="1.596651645s" podCreationTimestamp="2025-01-30 14:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:33.591262037 +0000 UTC m=+1.437099513" watchObservedRunningTime="2025-01-30 14:02:33.596651645 +0000 UTC m=+1.442489085" Jan 30 14:02:33.614217 kubelet[3438]: I0130 14:02:33.612919 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-215" podStartSLOduration=1.612896969 podStartE2EDuration="1.612896969s" podCreationTimestamp="2025-01-30 14:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:33.612621605 +0000 UTC m=+1.458459045" watchObservedRunningTime="2025-01-30 14:02:33.612896969 +0000 UTC m=+1.458734421" Jan 30 14:02:35.988277 sudo[2482]: pam_unix(sudo:session): session closed for user root Jan 30 14:02:36.012199 sshd[2478]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:36.021967 systemd[1]: sshd@6-172.31.23.215:22-139.178.89.65:52026.service: Deactivated successfully. Jan 30 14:02:36.028039 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:02:36.030313 systemd-logind[2104]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:02:36.033549 systemd-logind[2104]: Removed session 7. Jan 30 14:02:44.796891 kubelet[3438]: I0130 14:02:44.796830 3438 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:02:44.797735 containerd[2132]: time="2025-01-30T14:02:44.797466844Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
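The runtime-config entry above hands containerd the node's PodCIDR (192.168.0.0/24) while CNI configuration is still pending ("wait for other system components to drop the config", which Cilium provides once the pods admitted below are running). To make the number concrete, a tiny standard-library Go sketch (illustrative only) of what a /24 PodCIDR allots each node:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // PodCIDR announced through CRI in the log entry above.
        _, cidr, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := cidr.Mask.Size()
        fmt.Printf("pod network %s: %d addresses for pods on this node\n",
            cidr, 1<<(bits-ones))
    }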
Jan 30 14:02:44.798537 kubelet[3438]: I0130 14:02:44.798496 3438 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:02:45.296748 kubelet[3438]: I0130 14:02:45.295343 3438 topology_manager.go:215] "Topology Admit Handler" podUID="bd400247-d467-48f0-bcad-eee40034d2b4" podNamespace="kube-system" podName="kube-proxy-z42c6" Jan 30 14:02:45.341742 kubelet[3438]: I0130 14:02:45.339118 3438 topology_manager.go:215] "Topology Admit Handler" podUID="ce0f887d-505e-4c99-9535-a24058f83355" podNamespace="kube-system" podName="cilium-llhbr" Jan 30 14:02:45.377482 kubelet[3438]: I0130 14:02:45.377404 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ppsz\" (UniqueName: \"kubernetes.io/projected/bd400247-d467-48f0-bcad-eee40034d2b4-kube-api-access-2ppsz\") pod \"kube-proxy-z42c6\" (UID: \"bd400247-d467-48f0-bcad-eee40034d2b4\") " pod="kube-system/kube-proxy-z42c6" Jan 30 14:02:45.377482 kubelet[3438]: I0130 14:02:45.377478 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-hostproc\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.377739 kubelet[3438]: I0130 14:02:45.377519 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce0f887d-505e-4c99-9535-a24058f83355-cilium-config-path\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.377739 kubelet[3438]: I0130 14:02:45.377568 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd400247-d467-48f0-bcad-eee40034d2b4-kube-proxy\") pod \"kube-proxy-z42c6\" (UID: \"bd400247-d467-48f0-bcad-eee40034d2b4\") " pod="kube-system/kube-proxy-z42c6" Jan 30 14:02:45.377739 kubelet[3438]: I0130 14:02:45.377607 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cilium-run\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.377739 kubelet[3438]: I0130 14:02:45.377641 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cilium-cgroup\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.377739 kubelet[3438]: I0130 14:02:45.377675 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-etc-cni-netd\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.377739 kubelet[3438]: I0130 14:02:45.377709 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-lib-modules\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " 
pod="kube-system/cilium-llhbr" Jan 30 14:02:45.378062 kubelet[3438]: I0130 14:02:45.377746 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd400247-d467-48f0-bcad-eee40034d2b4-xtables-lock\") pod \"kube-proxy-z42c6\" (UID: \"bd400247-d467-48f0-bcad-eee40034d2b4\") " pod="kube-system/kube-proxy-z42c6" Jan 30 14:02:45.378062 kubelet[3438]: I0130 14:02:45.377782 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd400247-d467-48f0-bcad-eee40034d2b4-lib-modules\") pod \"kube-proxy-z42c6\" (UID: \"bd400247-d467-48f0-bcad-eee40034d2b4\") " pod="kube-system/kube-proxy-z42c6" Jan 30 14:02:45.378062 kubelet[3438]: I0130 14:02:45.377824 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-host-proc-sys-kernel\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.378062 kubelet[3438]: I0130 14:02:45.377859 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-host-proc-sys-net\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.378062 kubelet[3438]: I0130 14:02:45.377895 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-bpf-maps\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.378062 kubelet[3438]: I0130 14:02:45.377931 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cni-path\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.380656 kubelet[3438]: I0130 14:02:45.377965 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hchfk\" (UniqueName: \"kubernetes.io/projected/ce0f887d-505e-4c99-9535-a24058f83355-kube-api-access-hchfk\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.380656 kubelet[3438]: I0130 14:02:45.378002 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce0f887d-505e-4c99-9535-a24058f83355-clustermesh-secrets\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.380656 kubelet[3438]: I0130 14:02:45.378037 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-xtables-lock\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.380656 kubelet[3438]: I0130 14:02:45.378074 3438 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce0f887d-505e-4c99-9535-a24058f83355-hubble-tls\") pod \"cilium-llhbr\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") " pod="kube-system/cilium-llhbr" Jan 30 14:02:45.434645 kubelet[3438]: I0130 14:02:45.432623 3438 topology_manager.go:215] "Topology Admit Handler" podUID="72c3d39f-1f8e-4928-a86a-b12615530dbb" podNamespace="kube-system" podName="cilium-operator-599987898-558qq" Jan 30 14:02:45.478717 kubelet[3438]: I0130 14:02:45.478665 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc8vm\" (UniqueName: \"kubernetes.io/projected/72c3d39f-1f8e-4928-a86a-b12615530dbb-kube-api-access-vc8vm\") pod \"cilium-operator-599987898-558qq\" (UID: \"72c3d39f-1f8e-4928-a86a-b12615530dbb\") " pod="kube-system/cilium-operator-599987898-558qq" Jan 30 14:02:45.481275 kubelet[3438]: I0130 14:02:45.479716 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72c3d39f-1f8e-4928-a86a-b12615530dbb-cilium-config-path\") pod \"cilium-operator-599987898-558qq\" (UID: \"72c3d39f-1f8e-4928-a86a-b12615530dbb\") " pod="kube-system/cilium-operator-599987898-558qq" Jan 30 14:02:45.619681 containerd[2132]: time="2025-01-30T14:02:45.618404224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z42c6,Uid:bd400247-d467-48f0-bcad-eee40034d2b4,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:45.676218 containerd[2132]: time="2025-01-30T14:02:45.675958049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:45.676218 containerd[2132]: time="2025-01-30T14:02:45.676067969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:45.676218 containerd[2132]: time="2025-01-30T14:02:45.676136489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:45.679936 containerd[2132]: time="2025-01-30T14:02:45.679465181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:45.680720 containerd[2132]: time="2025-01-30T14:02:45.680635445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llhbr,Uid:ce0f887d-505e-4c99-9535-a24058f83355,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:45.746872 containerd[2132]: time="2025-01-30T14:02:45.746404133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:45.746872 containerd[2132]: time="2025-01-30T14:02:45.746511677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:45.746872 containerd[2132]: time="2025-01-30T14:02:45.746537525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:45.746872 containerd[2132]: time="2025-01-30T14:02:45.746711225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:45.754094 containerd[2132]: time="2025-01-30T14:02:45.753881621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-558qq,Uid:72c3d39f-1f8e-4928-a86a-b12615530dbb,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:45.762362 containerd[2132]: time="2025-01-30T14:02:45.762288389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z42c6,Uid:bd400247-d467-48f0-bcad-eee40034d2b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a2acb58e199ea2c2e5bffcea21fdd7a031874916ddb1599537c2974cc20fbd3\"" Jan 30 14:02:45.770396 containerd[2132]: time="2025-01-30T14:02:45.770335745Z" level=info msg="CreateContainer within sandbox \"2a2acb58e199ea2c2e5bffcea21fdd7a031874916ddb1599537c2974cc20fbd3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:02:45.829996 containerd[2132]: time="2025-01-30T14:02:45.829849877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:45.830764 containerd[2132]: time="2025-01-30T14:02:45.829962341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:45.830764 containerd[2132]: time="2025-01-30T14:02:45.830001305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:45.830764 containerd[2132]: time="2025-01-30T14:02:45.830170457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:45.830996 containerd[2132]: time="2025-01-30T14:02:45.830758877Z" level=info msg="CreateContainer within sandbox \"2a2acb58e199ea2c2e5bffcea21fdd7a031874916ddb1599537c2974cc20fbd3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e478ed97eb54f11b3e4fa54ce4dc31e082cd5ddc11454810a5e34349a7caeb5f\"" Jan 30 14:02:45.833646 containerd[2132]: time="2025-01-30T14:02:45.833491865Z" level=info msg="StartContainer for \"e478ed97eb54f11b3e4fa54ce4dc31e082cd5ddc11454810a5e34349a7caeb5f\"" Jan 30 14:02:45.849150 containerd[2132]: time="2025-01-30T14:02:45.848745702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llhbr,Uid:ce0f887d-505e-4c99-9535-a24058f83355,Namespace:kube-system,Attempt:0,} returns sandbox id \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\"" Jan 30 14:02:45.853233 containerd[2132]: time="2025-01-30T14:02:45.853110198Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 14:02:45.964610 containerd[2132]: time="2025-01-30T14:02:45.964389078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-558qq,Uid:72c3d39f-1f8e-4928-a86a-b12615530dbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\"" Jan 30 14:02:46.070551 containerd[2132]: time="2025-01-30T14:02:46.070481091Z" level=info msg="StartContainer for \"e478ed97eb54f11b3e4fa54ce4dc31e082cd5ddc11454810a5e34349a7caeb5f\" returns successfully" Jan 30 14:02:46.548713 kubelet[3438]: I0130 14:02:46.548600 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z42c6" podStartSLOduration=1.548576717 podStartE2EDuration="1.548576717s" 
podCreationTimestamp="2025-01-30 14:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:46.546926945 +0000 UTC m=+14.392764409" watchObservedRunningTime="2025-01-30 14:02:46.548576717 +0000 UTC m=+14.394414169" Jan 30 14:02:50.771609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828936221.mount: Deactivated successfully. Jan 30 14:02:53.436327 containerd[2132]: time="2025-01-30T14:02:53.436238951Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:53.438295 containerd[2132]: time="2025-01-30T14:02:53.438223943Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 14:02:53.440822 containerd[2132]: time="2025-01-30T14:02:53.440743883Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:53.444975 containerd[2132]: time="2025-01-30T14:02:53.444792047Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.591596085s" Jan 30 14:02:53.444975 containerd[2132]: time="2025-01-30T14:02:53.444853823Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 14:02:53.447435 containerd[2132]: time="2025-01-30T14:02:53.446742515Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 14:02:53.450426 containerd[2132]: time="2025-01-30T14:02:53.450223043Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:02:53.476659 containerd[2132]: time="2025-01-30T14:02:53.476606063Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\"" Jan 30 14:02:53.478373 containerd[2132]: time="2025-01-30T14:02:53.478271963Z" level=info msg="StartContainer for \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\"" Jan 30 14:02:53.597337 containerd[2132]: time="2025-01-30T14:02:53.597263148Z" level=info msg="StartContainer for \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\" returns successfully" Jan 30 14:02:54.466047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244-rootfs.mount: Deactivated successfully. 
Jan 30 14:02:54.937932 containerd[2132]: time="2025-01-30T14:02:54.937679175Z" level=info msg="shim disconnected" id=52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244 namespace=k8s.io Jan 30 14:02:54.937932 containerd[2132]: time="2025-01-30T14:02:54.937756491Z" level=warning msg="cleaning up after shim disconnected" id=52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244 namespace=k8s.io Jan 30 14:02:54.937932 containerd[2132]: time="2025-01-30T14:02:54.937777551Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:02:55.573364 containerd[2132]: time="2025-01-30T14:02:55.572997806Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:02:55.630868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876129438.mount: Deactivated successfully. Jan 30 14:02:55.640722 containerd[2132]: time="2025-01-30T14:02:55.639724742Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\"" Jan 30 14:02:55.642856 containerd[2132]: time="2025-01-30T14:02:55.642679862Z" level=info msg="StartContainer for \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\"" Jan 30 14:02:55.776942 containerd[2132]: time="2025-01-30T14:02:55.776781819Z" level=info msg="StartContainer for \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\" returns successfully" Jan 30 14:02:55.797056 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:02:55.798288 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:02:55.798411 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:02:55.807881 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:02:55.859559 containerd[2132]: time="2025-01-30T14:02:55.859394295Z" level=info msg="shim disconnected" id=764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017 namespace=k8s.io Jan 30 14:02:55.859559 containerd[2132]: time="2025-01-30T14:02:55.859493475Z" level=warning msg="cleaning up after shim disconnected" id=764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017 namespace=k8s.io Jan 30 14:02:55.859559 containerd[2132]: time="2025-01-30T14:02:55.859515531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:02:55.864436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:02:56.593275 containerd[2132]: time="2025-01-30T14:02:56.591445095Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:02:56.596001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2088417489.mount: Deactivated successfully. Jan 30 14:02:56.597918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017-rootfs.mount: Deactivated successfully. 
Jan 30 14:02:56.628789 containerd[2132]: time="2025-01-30T14:02:56.628659699Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\"" Jan 30 14:02:56.631590 containerd[2132]: time="2025-01-30T14:02:56.631459875Z" level=info msg="StartContainer for \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\"" Jan 30 14:02:56.763381 containerd[2132]: time="2025-01-30T14:02:56.759733960Z" level=info msg="StartContainer for \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\" returns successfully" Jan 30 14:02:56.839571 containerd[2132]: time="2025-01-30T14:02:56.839495236Z" level=info msg="shim disconnected" id=f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3 namespace=k8s.io Jan 30 14:02:56.839884 containerd[2132]: time="2025-01-30T14:02:56.839850832Z" level=warning msg="cleaning up after shim disconnected" id=f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3 namespace=k8s.io Jan 30 14:02:56.840011 containerd[2132]: time="2025-01-30T14:02:56.839983576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:02:57.314533 containerd[2132]: time="2025-01-30T14:02:57.314454134Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:57.316083 containerd[2132]: time="2025-01-30T14:02:57.316022774Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 14:02:57.317010 containerd[2132]: time="2025-01-30T14:02:57.316957886Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:02:57.320368 containerd[2132]: time="2025-01-30T14:02:57.320142710Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.873331987s" Jan 30 14:02:57.320368 containerd[2132]: time="2025-01-30T14:02:57.320231198Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 14:02:57.327567 containerd[2132]: time="2025-01-30T14:02:57.327398895Z" level=info msg="CreateContainer within sandbox \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 14:02:57.346219 containerd[2132]: time="2025-01-30T14:02:57.346114263Z" level=info msg="CreateContainer within sandbox \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\"" Jan 30 14:02:57.346905 containerd[2132]: 
time="2025-01-30T14:02:57.346862151Z" level=info msg="StartContainer for \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\"" Jan 30 14:02:57.431100 containerd[2132]: time="2025-01-30T14:02:57.430912767Z" level=info msg="StartContainer for \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\" returns successfully" Jan 30 14:02:57.606772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3-rootfs.mount: Deactivated successfully. Jan 30 14:02:57.624533 containerd[2132]: time="2025-01-30T14:02:57.623347276Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 14:02:57.646360 kubelet[3438]: I0130 14:02:57.638108 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-558qq" podStartSLOduration=1.295380159 podStartE2EDuration="12.638081092s" podCreationTimestamp="2025-01-30 14:02:45 +0000 UTC" firstStartedPulling="2025-01-30 14:02:45.97910925 +0000 UTC m=+13.824946690" lastFinishedPulling="2025-01-30 14:02:57.321810183 +0000 UTC m=+25.167647623" observedRunningTime="2025-01-30 14:02:57.62066908 +0000 UTC m=+25.466506532" watchObservedRunningTime="2025-01-30 14:02:57.638081092 +0000 UTC m=+25.483918532" Jan 30 14:02:57.673955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900420866.mount: Deactivated successfully. Jan 30 14:02:57.697823 containerd[2132]: time="2025-01-30T14:02:57.696042316Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\"" Jan 30 14:02:57.701263 containerd[2132]: time="2025-01-30T14:02:57.699241000Z" level=info msg="StartContainer for \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\"" Jan 30 14:02:57.711886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4124032244.mount: Deactivated successfully. 
Jan 30 14:02:57.974008 containerd[2132]: time="2025-01-30T14:02:57.973938114Z" level=info msg="StartContainer for \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\" returns successfully" Jan 30 14:02:58.116367 containerd[2132]: time="2025-01-30T14:02:58.116262218Z" level=info msg="shim disconnected" id=d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31 namespace=k8s.io Jan 30 14:02:58.116367 containerd[2132]: time="2025-01-30T14:02:58.116358590Z" level=warning msg="cleaning up after shim disconnected" id=d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31 namespace=k8s.io Jan 30 14:02:58.120373 containerd[2132]: time="2025-01-30T14:02:58.116380730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:02:58.684648 containerd[2132]: time="2025-01-30T14:02:58.681633509Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 14:02:58.719043 containerd[2132]: time="2025-01-30T14:02:58.718983509Z" level=info msg="CreateContainer within sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\"" Jan 30 14:02:58.720463 containerd[2132]: time="2025-01-30T14:02:58.719944097Z" level=info msg="StartContainer for \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\"" Jan 30 14:02:58.988378 containerd[2132]: time="2025-01-30T14:02:58.987669811Z" level=info msg="StartContainer for \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\" returns successfully" Jan 30 14:02:59.266680 kubelet[3438]: I0130 14:02:59.266532 3438 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:02:59.322113 kubelet[3438]: I0130 14:02:59.322047 3438 topology_manager.go:215] "Topology Admit Handler" podUID="02bfb0e8-8efe-4075-97d3-37003636ef02" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k92z5" Jan 30 14:02:59.332096 kubelet[3438]: I0130 14:02:59.330622 3438 topology_manager.go:215] "Topology Admit Handler" podUID="d2ad4396-2bc7-4a06-b45b-39deb086f985" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zk6c9" Jan 30 14:02:59.492930 kubelet[3438]: I0130 14:02:59.492652 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2ad4396-2bc7-4a06-b45b-39deb086f985-config-volume\") pod \"coredns-7db6d8ff4d-zk6c9\" (UID: \"d2ad4396-2bc7-4a06-b45b-39deb086f985\") " pod="kube-system/coredns-7db6d8ff4d-zk6c9" Jan 30 14:02:59.492930 kubelet[3438]: I0130 14:02:59.492724 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02bfb0e8-8efe-4075-97d3-37003636ef02-config-volume\") pod \"coredns-7db6d8ff4d-k92z5\" (UID: \"02bfb0e8-8efe-4075-97d3-37003636ef02\") " pod="kube-system/coredns-7db6d8ff4d-k92z5" Jan 30 14:02:59.492930 kubelet[3438]: I0130 14:02:59.492808 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vglvg\" (UniqueName: \"kubernetes.io/projected/d2ad4396-2bc7-4a06-b45b-39deb086f985-kube-api-access-vglvg\") pod \"coredns-7db6d8ff4d-zk6c9\" (UID: \"d2ad4396-2bc7-4a06-b45b-39deb086f985\") " pod="kube-system/coredns-7db6d8ff4d-zk6c9" Jan 30 
14:02:59.492930 kubelet[3438]: I0130 14:02:59.492853 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c54hf\" (UniqueName: \"kubernetes.io/projected/02bfb0e8-8efe-4075-97d3-37003636ef02-kube-api-access-c54hf\") pod \"coredns-7db6d8ff4d-k92z5\" (UID: \"02bfb0e8-8efe-4075-97d3-37003636ef02\") " pod="kube-system/coredns-7db6d8ff4d-k92z5" Jan 30 14:02:59.655331 containerd[2132]: time="2025-01-30T14:02:59.655252878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k92z5,Uid:02bfb0e8-8efe-4075-97d3-37003636ef02,Namespace:kube-system,Attempt:0,}" Jan 30 14:02:59.962973 containerd[2132]: time="2025-01-30T14:02:59.960386780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zk6c9,Uid:d2ad4396-2bc7-4a06-b45b-39deb086f985,Namespace:kube-system,Attempt:0,}" Jan 30 14:03:01.992330 systemd-networkd[1692]: cilium_host: Link UP Jan 30 14:03:01.992758 systemd-networkd[1692]: cilium_net: Link UP Jan 30 14:03:01.993092 systemd-networkd[1692]: cilium_net: Gained carrier Jan 30 14:03:01.993866 systemd-networkd[1692]: cilium_host: Gained carrier Jan 30 14:03:01.996538 (udev-worker)[4442]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:03:02.002404 (udev-worker)[4405]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:03:02.174921 systemd-networkd[1692]: cilium_vxlan: Link UP Jan 30 14:03:02.174937 systemd-networkd[1692]: cilium_vxlan: Gained carrier Jan 30 14:03:02.663520 kernel: NET: Registered PF_ALG protocol family Jan 30 14:03:02.871373 systemd-networkd[1692]: cilium_net: Gained IPv6LL Jan 30 14:03:02.935953 systemd-networkd[1692]: cilium_host: Gained IPv6LL Jan 30 14:03:03.255808 systemd-networkd[1692]: cilium_vxlan: Gained IPv6LL Jan 30 14:03:03.985932 systemd-networkd[1692]: lxc_health: Link UP Jan 30 14:03:04.001663 systemd-networkd[1692]: lxc_health: Gained carrier Jan 30 14:03:04.324011 systemd-networkd[1692]: lxc45f7d9b94de7: Link UP Jan 30 14:03:04.330692 kernel: eth0: renamed from tmp810d6 Jan 30 14:03:04.337682 systemd-networkd[1692]: lxc45f7d9b94de7: Gained carrier Jan 30 14:03:04.566543 systemd-networkd[1692]: lxc5aab05d729b2: Link UP Jan 30 14:03:04.581219 kernel: eth0: renamed from tmp7c7bd Jan 30 14:03:04.588808 systemd-networkd[1692]: lxc5aab05d729b2: Gained carrier Jan 30 14:03:05.305143 systemd-networkd[1692]: lxc_health: Gained IPv6LL Jan 30 14:03:05.715527 kubelet[3438]: I0130 14:03:05.715391 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-llhbr" podStartSLOduration=13.120034059 podStartE2EDuration="20.715340268s" podCreationTimestamp="2025-01-30 14:02:45 +0000 UTC" firstStartedPulling="2025-01-30 14:02:45.85112667 +0000 UTC m=+13.696964098" lastFinishedPulling="2025-01-30 14:02:53.446432783 +0000 UTC m=+21.292270307" observedRunningTime="2025-01-30 14:02:59.810073855 +0000 UTC m=+27.655911319" watchObservedRunningTime="2025-01-30 14:03:05.715340268 +0000 UTC m=+33.561177708" Jan 30 14:03:06.008021 systemd-networkd[1692]: lxc45f7d9b94de7: Gained IPv6LL Jan 30 14:03:06.519446 systemd-networkd[1692]: lxc5aab05d729b2: Gained IPv6LL Jan 30 14:03:08.709156 ntpd[2081]: Listen normally on 6 cilium_host 192.168.0.109:123 Jan 30
14:03:08.709328 ntpd[2081]: Listen normally on 7 cilium_net [fe80::3499:78ff:fe23:ab41%4]:123 Jan 30 14:03:08.709432 ntpd[2081]: Listen normally on 8 cilium_host [fe80::bc79:4aff:fe87:dfbb%5]:123 Jan 30 14:03:08.709503 ntpd[2081]: Listen normally on 9 cilium_vxlan [fe80::e0d8:85ff:fe46:a38b%6]:123 Jan 30 14:03:08.709581 ntpd[2081]: Listen normally on 10 lxc_health [fe80::812:5aff:fef6:b2ee%8]:123 Jan 30 14:03:08.709648 ntpd[2081]: Listen normally on 11 lxc45f7d9b94de7 [fe80::7cfe:54ff:fed5:8a94%10]:123 Jan 30 14:03:08.709715 ntpd[2081]: Listen normally on 12 lxc5aab05d729b2 [fe80::9c37:70ff:fe87:7ce5%12]:123 Jan 30 14:03:09.749324 kubelet[3438]: I0130 14:03:09.749160 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:03:12.788482 containerd[2132]: time="2025-01-30T14:03:12.786619855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:03:12.788482 containerd[2132]: time="2025-01-30T14:03:12.786720319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:03:12.788482 containerd[2132]: time="2025-01-30T14:03:12.786792655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:12.788482 containerd[2132]: time="2025-01-30T14:03:12.786993451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:12.906109 containerd[2132]: time="2025-01-30T14:03:12.902234672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:03:12.906109 containerd[2132]: time="2025-01-30T14:03:12.902376272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:03:12.906109 containerd[2132]: time="2025-01-30T14:03:12.902421812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:12.906109 containerd[2132]: time="2025-01-30T14:03:12.902602868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:13.080920 containerd[2132]: time="2025-01-30T14:03:13.078635405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k92z5,Uid:02bfb0e8-8efe-4075-97d3-37003636ef02,Namespace:kube-system,Attempt:0,} returns sandbox id \"810d6bc4dff26797ae40cc108fc65f5d02853f11d645e9440d27545283178829\"" Jan 30 14:03:13.091628 kubelet[3438]: E0130 14:03:13.091294 3438 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod02bfb0e8-8efe-4075-97d3-37003636ef02/810d6bc4dff26797ae40cc108fc65f5d02853f11d645e9440d27545283178829\": RecentStats: unable to find data in memory cache]" Jan 30 14:03:13.104869 containerd[2132]: time="2025-01-30T14:03:13.104385785Z" level=info msg="CreateContainer within sandbox \"810d6bc4dff26797ae40cc108fc65f5d02853f11d645e9440d27545283178829\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:03:13.154862 containerd[2132]: time="2025-01-30T14:03:13.154788833Z" level=info msg="CreateContainer within sandbox \"810d6bc4dff26797ae40cc108fc65f5d02853f11d645e9440d27545283178829\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7dc5d0fc94e15fecbad97a2e0bf6313b0a76ab8a6ce28b6d3544c1fb3b4f1d0c\"" Jan 30 14:03:13.157132 containerd[2132]: time="2025-01-30T14:03:13.155991785Z" level=info msg="StartContainer for \"7dc5d0fc94e15fecbad97a2e0bf6313b0a76ab8a6ce28b6d3544c1fb3b4f1d0c\"" Jan 30 14:03:13.161861 containerd[2132]: time="2025-01-30T14:03:13.161288657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zk6c9,Uid:d2ad4396-2bc7-4a06-b45b-39deb086f985,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c7bd3ef76975a4390bbe0ed97572722141fbc77f41dc3ea301ea2b5d3b32ed9\"" Jan 30 14:03:13.180814 containerd[2132]: time="2025-01-30T14:03:13.180728441Z" level=info msg="CreateContainer within sandbox \"7c7bd3ef76975a4390bbe0ed97572722141fbc77f41dc3ea301ea2b5d3b32ed9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:03:13.218619 containerd[2132]: time="2025-01-30T14:03:13.218355617Z" level=info msg="CreateContainer within sandbox \"7c7bd3ef76975a4390bbe0ed97572722141fbc77f41dc3ea301ea2b5d3b32ed9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f07efac85b251137caebd4f02482918dee98c005f36c30fcbf65dfdb68747ba4\"" Jan 30 14:03:13.221372 containerd[2132]: time="2025-01-30T14:03:13.220153457Z" level=info msg="StartContainer for \"f07efac85b251137caebd4f02482918dee98c005f36c30fcbf65dfdb68747ba4\"" Jan 30 14:03:13.322266 containerd[2132]: time="2025-01-30T14:03:13.322033266Z" level=info msg="StartContainer for \"7dc5d0fc94e15fecbad97a2e0bf6313b0a76ab8a6ce28b6d3544c1fb3b4f1d0c\" returns successfully" Jan 30 14:03:13.359464 containerd[2132]: time="2025-01-30T14:03:13.358933098Z" level=info msg="StartContainer for \"f07efac85b251137caebd4f02482918dee98c005f36c30fcbf65dfdb68747ba4\" returns successfully" Jan 30 14:03:13.808211 kubelet[3438]: I0130 14:03:13.803989 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zk6c9" podStartSLOduration=28.803968568 podStartE2EDuration="28.803968568s" podCreationTimestamp="2025-01-30 14:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:03:13.80376332 +0000 UTC m=+41.649600796" watchObservedRunningTime="2025-01-30 14:03:13.803968568 
+0000 UTC m=+41.649806008" Jan 30 14:03:13.816462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704176406.mount: Deactivated successfully. Jan 30 14:03:13.870251 kubelet[3438]: I0130 14:03:13.870144 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k92z5" podStartSLOduration=28.870120705 podStartE2EDuration="28.870120705s" podCreationTimestamp="2025-01-30 14:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:03:13.841390941 +0000 UTC m=+41.687228405" watchObservedRunningTime="2025-01-30 14:03:13.870120705 +0000 UTC m=+41.715958169" Jan 30 14:03:16.602731 systemd[1]: Started sshd@7-172.31.23.215:22-139.178.89.65:55064.service - OpenSSH per-connection server daemon (139.178.89.65:55064). Jan 30 14:03:16.781765 sshd[4976]: Accepted publickey for core from 139.178.89.65 port 55064 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:16.784409 sshd[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:16.792512 systemd-logind[2104]: New session 8 of user core. Jan 30 14:03:16.802688 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:03:17.058506 sshd[4976]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:17.066505 systemd[1]: sshd@7-172.31.23.215:22-139.178.89.65:55064.service: Deactivated successfully. Jan 30 14:03:17.074655 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:03:17.074989 systemd-logind[2104]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:03:17.080156 systemd-logind[2104]: Removed session 8. Jan 30 14:03:22.091704 systemd[1]: Started sshd@8-172.31.23.215:22-139.178.89.65:43288.service - OpenSSH per-connection server daemon (139.178.89.65:43288). Jan 30 14:03:22.264291 sshd[4991]: Accepted publickey for core from 139.178.89.65 port 43288 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:22.267076 sshd[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:22.276491 systemd-logind[2104]: New session 9 of user core. Jan 30 14:03:22.286837 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:03:22.527026 sshd[4991]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:22.534415 systemd[1]: sshd@8-172.31.23.215:22-139.178.89.65:43288.service: Deactivated successfully. Jan 30 14:03:22.534677 systemd-logind[2104]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:03:22.542366 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:03:22.545435 systemd-logind[2104]: Removed session 9. Jan 30 14:03:27.557809 systemd[1]: Started sshd@9-172.31.23.215:22-139.178.89.65:43292.service - OpenSSH per-connection server daemon (139.178.89.65:43292). Jan 30 14:03:27.744240 sshd[5006]: Accepted publickey for core from 139.178.89.65 port 43292 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:27.747222 sshd[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:27.756141 systemd-logind[2104]: New session 10 of user core. Jan 30 14:03:27.765807 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:03:28.009124 sshd[5006]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:28.016709 systemd[1]: sshd@9-172.31.23.215:22-139.178.89.65:43292.service: Deactivated successfully. 
Jan 30 14:03:28.016959 systemd-logind[2104]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:03:28.023745 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:03:28.025669 systemd-logind[2104]: Removed session 10. Jan 30 14:03:33.038293 systemd[1]: Started sshd@10-172.31.23.215:22-139.178.89.65:44506.service - OpenSSH per-connection server daemon (139.178.89.65:44506). Jan 30 14:03:33.221389 sshd[5023]: Accepted publickey for core from 139.178.89.65 port 44506 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:33.224493 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:33.234643 systemd-logind[2104]: New session 11 of user core. Jan 30 14:03:33.246043 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:03:33.485892 sshd[5023]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:33.492766 systemd[1]: sshd@10-172.31.23.215:22-139.178.89.65:44506.service: Deactivated successfully. Jan 30 14:03:33.499742 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:03:33.500295 systemd-logind[2104]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:03:33.504389 systemd-logind[2104]: Removed session 11. Jan 30 14:03:38.518660 systemd[1]: Started sshd@11-172.31.23.215:22-139.178.89.65:44508.service - OpenSSH per-connection server daemon (139.178.89.65:44508). Jan 30 14:03:38.692657 sshd[5037]: Accepted publickey for core from 139.178.89.65 port 44508 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:38.695463 sshd[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:38.704689 systemd-logind[2104]: New session 12 of user core. Jan 30 14:03:38.713929 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:03:38.966410 sshd[5037]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:38.974148 systemd[1]: sshd@11-172.31.23.215:22-139.178.89.65:44508.service: Deactivated successfully. Jan 30 14:03:38.975716 systemd-logind[2104]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:03:38.985847 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:03:38.987840 systemd-logind[2104]: Removed session 12. Jan 30 14:03:38.999731 systemd[1]: Started sshd@12-172.31.23.215:22-139.178.89.65:44520.service - OpenSSH per-connection server daemon (139.178.89.65:44520). Jan 30 14:03:39.181463 sshd[5052]: Accepted publickey for core from 139.178.89.65 port 44520 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:39.185486 sshd[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:39.195675 systemd-logind[2104]: New session 13 of user core. Jan 30 14:03:39.202723 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:03:39.527610 sshd[5052]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:39.542697 systemd[1]: sshd@12-172.31.23.215:22-139.178.89.65:44520.service: Deactivated successfully. Jan 30 14:03:39.565937 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:03:39.570259 systemd-logind[2104]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:03:39.583772 systemd[1]: Started sshd@13-172.31.23.215:22-139.178.89.65:44526.service - OpenSSH per-connection server daemon (139.178.89.65:44526). Jan 30 14:03:39.586657 systemd-logind[2104]: Removed session 13. 
Jan 30 14:03:39.772111 sshd[5064]: Accepted publickey for core from 139.178.89.65 port 44526 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:39.774776 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:39.782449 systemd-logind[2104]: New session 14 of user core. Jan 30 14:03:39.790818 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:03:40.030259 sshd[5064]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:40.037723 systemd[1]: sshd@13-172.31.23.215:22-139.178.89.65:44526.service: Deactivated successfully. Jan 30 14:03:40.044245 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:03:40.045320 systemd-logind[2104]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:03:40.048433 systemd-logind[2104]: Removed session 14. Jan 30 14:03:45.062092 systemd[1]: Started sshd@14-172.31.23.215:22-139.178.89.65:43336.service - OpenSSH per-connection server daemon (139.178.89.65:43336). Jan 30 14:03:45.251463 sshd[5079]: Accepted publickey for core from 139.178.89.65 port 43336 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:45.254243 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:45.262050 systemd-logind[2104]: New session 15 of user core. Jan 30 14:03:45.268767 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:03:45.517674 sshd[5079]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:45.524061 systemd[1]: sshd@14-172.31.23.215:22-139.178.89.65:43336.service: Deactivated successfully. Jan 30 14:03:45.530217 systemd-logind[2104]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:03:45.530556 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:03:45.534283 systemd-logind[2104]: Removed session 15. Jan 30 14:03:50.550680 systemd[1]: Started sshd@15-172.31.23.215:22-139.178.89.65:43344.service - OpenSSH per-connection server daemon (139.178.89.65:43344). Jan 30 14:03:50.727948 sshd[5095]: Accepted publickey for core from 139.178.89.65 port 43344 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:50.731475 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:50.740897 systemd-logind[2104]: New session 16 of user core. Jan 30 14:03:50.748786 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:03:50.987766 sshd[5095]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:50.993566 systemd-logind[2104]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:03:50.994450 systemd[1]: sshd@15-172.31.23.215:22-139.178.89.65:43344.service: Deactivated successfully. Jan 30 14:03:51.003606 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:03:51.007027 systemd-logind[2104]: Removed session 16. Jan 30 14:03:56.017787 systemd[1]: Started sshd@16-172.31.23.215:22-139.178.89.65:52714.service - OpenSSH per-connection server daemon (139.178.89.65:52714). Jan 30 14:03:56.196384 sshd[5109]: Accepted publickey for core from 139.178.89.65 port 52714 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:56.198980 sshd[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:56.207626 systemd-logind[2104]: New session 17 of user core. Jan 30 14:03:56.211746 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 30 14:03:56.455551 sshd[5109]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:56.460487 systemd[1]: sshd@16-172.31.23.215:22-139.178.89.65:52714.service: Deactivated successfully. Jan 30 14:03:56.467851 systemd-logind[2104]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:03:56.468175 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:03:56.472641 systemd-logind[2104]: Removed session 17. Jan 30 14:03:56.487716 systemd[1]: Started sshd@17-172.31.23.215:22-139.178.89.65:52716.service - OpenSSH per-connection server daemon (139.178.89.65:52716). Jan 30 14:03:56.666237 sshd[5123]: Accepted publickey for core from 139.178.89.65 port 52716 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:56.669236 sshd[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:56.677430 systemd-logind[2104]: New session 18 of user core. Jan 30 14:03:56.687736 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:03:56.989389 sshd[5123]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:56.995530 systemd[1]: sshd@17-172.31.23.215:22-139.178.89.65:52716.service: Deactivated successfully. Jan 30 14:03:57.001614 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:03:57.001986 systemd-logind[2104]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:03:57.006216 systemd-logind[2104]: Removed session 18. Jan 30 14:03:57.017720 systemd[1]: Started sshd@18-172.31.23.215:22-139.178.89.65:52724.service - OpenSSH per-connection server daemon (139.178.89.65:52724). Jan 30 14:03:57.199219 sshd[5135]: Accepted publickey for core from 139.178.89.65 port 52724 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:57.201812 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:57.209532 systemd-logind[2104]: New session 19 of user core. Jan 30 14:03:57.214446 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:03:59.723823 sshd[5135]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:59.734468 systemd[1]: sshd@18-172.31.23.215:22-139.178.89.65:52724.service: Deactivated successfully. Jan 30 14:03:59.747513 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:03:59.754834 systemd-logind[2104]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:03:59.772393 systemd[1]: Started sshd@19-172.31.23.215:22-139.178.89.65:52738.service - OpenSSH per-connection server daemon (139.178.89.65:52738). Jan 30 14:03:59.774076 systemd-logind[2104]: Removed session 19. Jan 30 14:03:59.955145 sshd[5154]: Accepted publickey for core from 139.178.89.65 port 52738 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:59.957946 sshd[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:59.968103 systemd-logind[2104]: New session 20 of user core. Jan 30 14:03:59.977809 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:04:00.460465 sshd[5154]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:00.468488 systemd[1]: sshd@19-172.31.23.215:22-139.178.89.65:52738.service: Deactivated successfully. Jan 30 14:04:00.475680 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:04:00.477734 systemd-logind[2104]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:04:00.479828 systemd-logind[2104]: Removed session 20. 
Jan 30 14:04:00.490825 systemd[1]: Started sshd@20-172.31.23.215:22-139.178.89.65:52752.service - OpenSSH per-connection server daemon (139.178.89.65:52752). Jan 30 14:04:00.677612 sshd[5166]: Accepted publickey for core from 139.178.89.65 port 52752 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:00.680239 sshd[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:00.688279 systemd-logind[2104]: New session 21 of user core. Jan 30 14:04:00.696336 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 14:04:00.938519 sshd[5166]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:00.943125 systemd[1]: sshd@20-172.31.23.215:22-139.178.89.65:52752.service: Deactivated successfully. Jan 30 14:04:00.951524 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:04:00.953121 systemd-logind[2104]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:04:00.956425 systemd-logind[2104]: Removed session 21. Jan 30 14:04:05.968714 systemd[1]: Started sshd@21-172.31.23.215:22-139.178.89.65:49434.service - OpenSSH per-connection server daemon (139.178.89.65:49434). Jan 30 14:04:06.155178 sshd[5180]: Accepted publickey for core from 139.178.89.65 port 49434 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:06.157797 sshd[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:06.166295 systemd-logind[2104]: New session 22 of user core. Jan 30 14:04:06.175828 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 14:04:06.413213 sshd[5180]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:06.418478 systemd[1]: sshd@21-172.31.23.215:22-139.178.89.65:49434.service: Deactivated successfully. Jan 30 14:04:06.428046 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 14:04:06.432378 systemd-logind[2104]: Session 22 logged out. Waiting for processes to exit. Jan 30 14:04:06.434268 systemd-logind[2104]: Removed session 22. Jan 30 14:04:11.448693 systemd[1]: Started sshd@22-172.31.23.215:22-139.178.89.65:45300.service - OpenSSH per-connection server daemon (139.178.89.65:45300). Jan 30 14:04:11.623974 sshd[5196]: Accepted publickey for core from 139.178.89.65 port 45300 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:11.626633 sshd[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:11.634744 systemd-logind[2104]: New session 23 of user core. Jan 30 14:04:11.639690 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 14:04:11.876733 sshd[5196]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:11.884841 systemd[1]: sshd@22-172.31.23.215:22-139.178.89.65:45300.service: Deactivated successfully. Jan 30 14:04:11.890673 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 14:04:11.891172 systemd-logind[2104]: Session 23 logged out. Waiting for processes to exit. Jan 30 14:04:11.894638 systemd-logind[2104]: Removed session 23. Jan 30 14:04:16.912622 systemd[1]: Started sshd@23-172.31.23.215:22-139.178.89.65:45304.service - OpenSSH per-connection server daemon (139.178.89.65:45304). 
Jan 30 14:04:17.077891 sshd[5211]: Accepted publickey for core from 139.178.89.65 port 45304 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:17.080990 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:17.091313 systemd-logind[2104]: New session 24 of user core. Jan 30 14:04:17.097715 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 14:04:17.340421 sshd[5211]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:17.348655 systemd[1]: sshd@23-172.31.23.215:22-139.178.89.65:45304.service: Deactivated successfully. Jan 30 14:04:17.355554 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 14:04:17.357041 systemd-logind[2104]: Session 24 logged out. Waiting for processes to exit. Jan 30 14:04:17.359577 systemd-logind[2104]: Removed session 24. Jan 30 14:04:22.370650 systemd[1]: Started sshd@24-172.31.23.215:22-139.178.89.65:49666.service - OpenSSH per-connection server daemon (139.178.89.65:49666). Jan 30 14:04:22.559358 sshd[5225]: Accepted publickey for core from 139.178.89.65 port 49666 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:22.561925 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:22.569877 systemd-logind[2104]: New session 25 of user core. Jan 30 14:04:22.575864 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 14:04:22.814392 sshd[5225]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:22.820729 systemd[1]: sshd@24-172.31.23.215:22-139.178.89.65:49666.service: Deactivated successfully. Jan 30 14:04:22.828959 systemd-logind[2104]: Session 25 logged out. Waiting for processes to exit. Jan 30 14:04:22.829602 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 14:04:22.835293 systemd-logind[2104]: Removed session 25. Jan 30 14:04:22.843677 systemd[1]: Started sshd@25-172.31.23.215:22-139.178.89.65:49668.service - OpenSSH per-connection server daemon (139.178.89.65:49668). Jan 30 14:04:23.023308 sshd[5239]: Accepted publickey for core from 139.178.89.65 port 49668 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:23.025909 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:23.034832 systemd-logind[2104]: New session 26 of user core. Jan 30 14:04:23.042000 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 30 14:04:26.120645 containerd[2132]: time="2025-01-30T14:04:26.120557104Z" level=info msg="StopContainer for \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\" with timeout 30 (s)" Jan 30 14:04:26.127533 containerd[2132]: time="2025-01-30T14:04:26.124507696Z" level=info msg="Stop container \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\" with signal terminated" Jan 30 14:04:26.187930 containerd[2132]: time="2025-01-30T14:04:26.187854832Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:04:26.198717 containerd[2132]: time="2025-01-30T14:04:26.198654184Z" level=info msg="StopContainer for \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\" with timeout 2 (s)" Jan 30 14:04:26.201333 containerd[2132]: time="2025-01-30T14:04:26.200409028Z" level=info msg="Stop container \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\" with signal terminated" Jan 30 14:04:26.227794 systemd-networkd[1692]: lxc_health: Link DOWN Jan 30 14:04:26.227808 systemd-networkd[1692]: lxc_health: Lost carrier Jan 30 14:04:26.229917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2-rootfs.mount: Deactivated successfully. Jan 30 14:04:26.254218 containerd[2132]: time="2025-01-30T14:04:26.254117164Z" level=info msg="shim disconnected" id=b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2 namespace=k8s.io Jan 30 14:04:26.255315 containerd[2132]: time="2025-01-30T14:04:26.255254428Z" level=warning msg="cleaning up after shim disconnected" id=b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2 namespace=k8s.io Jan 30 14:04:26.255315 containerd[2132]: time="2025-01-30T14:04:26.255303628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:04:26.294691 containerd[2132]: time="2025-01-30T14:04:26.294618052Z" level=info msg="StopContainer for \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\" returns successfully" Jan 30 14:04:26.295691 containerd[2132]: time="2025-01-30T14:04:26.295645444Z" level=info msg="StopPodSandbox for \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\"" Jan 30 14:04:26.295939 containerd[2132]: time="2025-01-30T14:04:26.295845328Z" level=info msg="Container to stop \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:04:26.300222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1-shm.mount: Deactivated successfully. Jan 30 14:04:26.313765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b-rootfs.mount: Deactivated successfully. 
Jan 30 14:04:26.332774 containerd[2132]: time="2025-01-30T14:04:26.332568533Z" level=info msg="shim disconnected" id=2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b namespace=k8s.io
Jan 30 14:04:26.332774 containerd[2132]: time="2025-01-30T14:04:26.332718389Z" level=warning msg="cleaning up after shim disconnected" id=2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b namespace=k8s.io
Jan 30 14:04:26.332774 containerd[2132]: time="2025-01-30T14:04:26.332746397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:04:26.367646 containerd[2132]: time="2025-01-30T14:04:26.367383845Z" level=info msg="StopContainer for \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\" returns successfully"
Jan 30 14:04:26.369290 containerd[2132]: time="2025-01-30T14:04:26.368708921Z" level=info msg="StopPodSandbox for \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\""
Jan 30 14:04:26.370337 containerd[2132]: time="2025-01-30T14:04:26.368779481Z" level=info msg="Container to stop \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 14:04:26.373306 containerd[2132]: time="2025-01-30T14:04:26.372382217Z" level=info msg="Container to stop \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 14:04:26.373306 containerd[2132]: time="2025-01-30T14:04:26.372438149Z" level=info msg="Container to stop \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 14:04:26.373306 containerd[2132]: time="2025-01-30T14:04:26.372463949Z" level=info msg="Container to stop \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 14:04:26.373306 containerd[2132]: time="2025-01-30T14:04:26.372487637Z" level=info msg="Container to stop \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 14:04:26.385116 containerd[2132]: time="2025-01-30T14:04:26.384761933Z" level=info msg="shim disconnected" id=88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1 namespace=k8s.io
Jan 30 14:04:26.385116 containerd[2132]: time="2025-01-30T14:04:26.384841661Z" level=warning msg="cleaning up after shim disconnected" id=88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1 namespace=k8s.io
Jan 30 14:04:26.385116 containerd[2132]: time="2025-01-30T14:04:26.384862193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:04:26.432905 containerd[2132]: time="2025-01-30T14:04:26.432665741Z" level=info msg="TearDown network for sandbox \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\" successfully"
Jan 30 14:04:26.432905 containerd[2132]: time="2025-01-30T14:04:26.432714497Z" level=info msg="StopPodSandbox for \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\" returns successfully"
Jan 30 14:04:26.449019 containerd[2132]: time="2025-01-30T14:04:26.448606445Z" level=info msg="shim disconnected" id=92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d namespace=k8s.io
Jan 30 14:04:26.449019 containerd[2132]: time="2025-01-30T14:04:26.448887017Z" level=warning msg="cleaning up after shim disconnected" id=92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d namespace=k8s.io
Jan 30 14:04:26.449019 containerd[2132]: time="2025-01-30T14:04:26.448944497Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:04:26.468867 kubelet[3438]: I0130 14:04:26.468454 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc8vm\" (UniqueName: \"kubernetes.io/projected/72c3d39f-1f8e-4928-a86a-b12615530dbb-kube-api-access-vc8vm\") pod \"72c3d39f-1f8e-4928-a86a-b12615530dbb\" (UID: \"72c3d39f-1f8e-4928-a86a-b12615530dbb\") "
Jan 30 14:04:26.468867 kubelet[3438]: I0130 14:04:26.468526 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72c3d39f-1f8e-4928-a86a-b12615530dbb-cilium-config-path\") pod \"72c3d39f-1f8e-4928-a86a-b12615530dbb\" (UID: \"72c3d39f-1f8e-4928-a86a-b12615530dbb\") "
Jan 30 14:04:26.481484 kubelet[3438]: I0130 14:04:26.481428 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c3d39f-1f8e-4928-a86a-b12615530dbb-kube-api-access-vc8vm" (OuterVolumeSpecName: "kube-api-access-vc8vm") pod "72c3d39f-1f8e-4928-a86a-b12615530dbb" (UID: "72c3d39f-1f8e-4928-a86a-b12615530dbb"). InnerVolumeSpecName "kube-api-access-vc8vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:04:26.482060 kubelet[3438]: I0130 14:04:26.482012 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c3d39f-1f8e-4928-a86a-b12615530dbb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72c3d39f-1f8e-4928-a86a-b12615530dbb" (UID: "72c3d39f-1f8e-4928-a86a-b12615530dbb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:04:26.483393 containerd[2132]: time="2025-01-30T14:04:26.483342545Z" level=info msg="TearDown network for sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" successfully"
Jan 30 14:04:26.483600 containerd[2132]: time="2025-01-30T14:04:26.483554273Z" level=info msg="StopPodSandbox for \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" returns successfully"
Jan 30 14:04:26.569392 kubelet[3438]: I0130 14:04:26.569344 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-bpf-maps\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.569877 kubelet[3438]: I0130 14:04:26.569653 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cni-path\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.569877 kubelet[3438]: I0130 14:04:26.569477 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.569877 kubelet[3438]: I0130 14:04:26.569731 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce0f887d-505e-4c99-9535-a24058f83355-clustermesh-secrets\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.569877 kubelet[3438]: I0130 14:04:26.569781 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cni-path" (OuterVolumeSpecName: "cni-path") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.569877 kubelet[3438]: I0130 14:04:26.569798 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-hostproc\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.569877 kubelet[3438]: I0130 14:04:26.569836 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-host-proc-sys-kernel\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571445 kubelet[3438]: I0130 14:04:26.570288 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cilium-run\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571445 kubelet[3438]: I0130 14:04:26.570336 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-xtables-lock\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571445 kubelet[3438]: I0130 14:04:26.570376 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce0f887d-505e-4c99-9535-a24058f83355-cilium-config-path\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571445 kubelet[3438]: I0130 14:04:26.570409 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-etc-cni-netd\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571445 kubelet[3438]: I0130 14:04:26.570440 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-lib-modules\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571445 kubelet[3438]: I0130 14:04:26.570472 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-host-proc-sys-net\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571766 kubelet[3438]: I0130 14:04:26.570504 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cilium-cgroup\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571766 kubelet[3438]: I0130 14:04:26.570542 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce0f887d-505e-4c99-9535-a24058f83355-hubble-tls\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571766 kubelet[3438]: I0130 14:04:26.570578 3438 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hchfk\" (UniqueName: \"kubernetes.io/projected/ce0f887d-505e-4c99-9535-a24058f83355-kube-api-access-hchfk\") pod \"ce0f887d-505e-4c99-9535-a24058f83355\" (UID: \"ce0f887d-505e-4c99-9535-a24058f83355\") "
Jan 30 14:04:26.571766 kubelet[3438]: I0130 14:04:26.570644 3438 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cni-path\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.571766 kubelet[3438]: I0130 14:04:26.570667 3438 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-bpf-maps\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.571766 kubelet[3438]: I0130 14:04:26.570690 3438 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vc8vm\" (UniqueName: \"kubernetes.io/projected/72c3d39f-1f8e-4928-a86a-b12615530dbb-kube-api-access-vc8vm\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.571766 kubelet[3438]: I0130 14:04:26.570714 3438 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72c3d39f-1f8e-4928-a86a-b12615530dbb-cilium-config-path\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.574391 kubelet[3438]: I0130 14:04:26.574323 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce0f887d-505e-4c99-9535-a24058f83355-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 14:04:26.576957 kubelet[3438]: I0130 14:04:26.576360 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce0f887d-505e-4c99-9535-a24058f83355-kube-api-access-hchfk" (OuterVolumeSpecName: "kube-api-access-hchfk") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "kube-api-access-hchfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:04:26.576957 kubelet[3438]: I0130 14:04:26.576438 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.576957 kubelet[3438]: I0130 14:04:26.576477 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.576957 kubelet[3438]: I0130 14:04:26.576517 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.576957 kubelet[3438]: I0130 14:04:26.576554 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.581247 kubelet[3438]: I0130 14:04:26.581169 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce0f887d-505e-4c99-9535-a24058f83355-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 14:04:26.581890 kubelet[3438]: I0130 14:04:26.581009 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce0f887d-505e-4c99-9535-a24058f83355-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 14:04:26.582448 kubelet[3438]: I0130 14:04:26.582282 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-hostproc" (OuterVolumeSpecName: "hostproc") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.582448 kubelet[3438]: I0130 14:04:26.582336 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.582448 kubelet[3438]: I0130 14:04:26.582376 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.582448 kubelet[3438]: I0130 14:04:26.582412 3438 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ce0f887d-505e-4c99-9535-a24058f83355" (UID: "ce0f887d-505e-4c99-9535-a24058f83355"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 14:04:26.671844 kubelet[3438]: I0130 14:04:26.671571 3438 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cilium-cgroup\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.671844 kubelet[3438]: I0130 14:04:26.671612 3438 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce0f887d-505e-4c99-9535-a24058f83355-hubble-tls\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.671844 kubelet[3438]: I0130 14:04:26.671635 3438 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hchfk\" (UniqueName: \"kubernetes.io/projected/ce0f887d-505e-4c99-9535-a24058f83355-kube-api-access-hchfk\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.671844 kubelet[3438]: I0130 14:04:26.671656 3438 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce0f887d-505e-4c99-9535-a24058f83355-clustermesh-secrets\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.671844 kubelet[3438]: I0130 14:04:26.671676 3438 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-hostproc\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.671844 kubelet[3438]: I0130 14:04:26.671695 3438 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-host-proc-sys-kernel\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.671844 kubelet[3438]: I0130 14:04:26.671714 3438 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-cilium-run\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.671844 kubelet[3438]: I0130 14:04:26.671733 3438 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-xtables-lock\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.672389 kubelet[3438]: I0130 14:04:26.671751 3438 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce0f887d-505e-4c99-9535-a24058f83355-cilium-config-path\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.672389 kubelet[3438]: I0130 14:04:26.671771 3438 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-etc-cni-netd\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.672389 kubelet[3438]: I0130 14:04:26.671789 3438 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-lib-modules\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.672389 kubelet[3438]: I0130 14:04:26.671807 3438 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce0f887d-505e-4c99-9535-a24058f83355-host-proc-sys-net\") on node \"ip-172-31-23-215\" DevicePath \"\""
Jan 30 14:04:26.978683 kubelet[3438]: I0130 14:04:26.977175 3438 scope.go:117] "RemoveContainer" containerID="b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2"
Jan 30 14:04:26.984505 containerd[2132]: time="2025-01-30T14:04:26.984257120Z" level=info msg="RemoveContainer for \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\""
Jan 30 14:04:27.001292 containerd[2132]: time="2025-01-30T14:04:27.001168144Z" level=info msg="RemoveContainer for \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\" returns successfully"
Jan 30 14:04:27.002103 kubelet[3438]: I0130 14:04:27.001965 3438 scope.go:117] "RemoveContainer" containerID="b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2"
Jan 30 14:04:27.004908 containerd[2132]: time="2025-01-30T14:04:27.004018636Z" level=error msg="ContainerStatus for \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\": not found"
Jan 30 14:04:27.006210 kubelet[3438]: E0130 14:04:27.006119 3438 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\": not found" containerID="b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2"
Jan 30 14:04:27.007048 kubelet[3438]: I0130 14:04:27.006724 3438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2"} err="failed to get container status \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\": rpc error: code = NotFound desc = an error occurred when try to find container \"b81f8eb42fd9466bc000e5b0789f2876ff498f03d07f05b8d53944ce670dbab2\": not found"
Jan 30 14:04:27.007173 kubelet[3438]: I0130 14:04:27.007050 3438 scope.go:117] "RemoveContainer" containerID="2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b"
Jan 30 14:04:27.016558 containerd[2132]: time="2025-01-30T14:04:27.016384144Z" level=info msg="RemoveContainer for \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\""
Jan 30 14:04:27.023344 containerd[2132]: time="2025-01-30T14:04:27.023229208Z" level=info msg="RemoveContainer for \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\" returns successfully"
Jan 30 14:04:27.024158 kubelet[3438]: I0130 14:04:27.023723 3438 scope.go:117] "RemoveContainer" containerID="d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31"
Jan 30 14:04:27.028671 containerd[2132]: time="2025-01-30T14:04:27.028086544Z" level=info msg="RemoveContainer for \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\""
Jan 30 14:04:27.037840 containerd[2132]: time="2025-01-30T14:04:27.037725484Z" level=info msg="RemoveContainer for \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\" returns successfully"
Jan 30 14:04:27.038606 kubelet[3438]: I0130 14:04:27.038575 3438 scope.go:117] "RemoveContainer" containerID="f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3"
Jan 30 14:04:27.040903 containerd[2132]: time="2025-01-30T14:04:27.040818292Z" level=info msg="RemoveContainer for \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\""
Jan 30 14:04:27.047533 containerd[2132]: time="2025-01-30T14:04:27.047460760Z" level=info msg="RemoveContainer for \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\" returns successfully"
Jan 30 14:04:27.047855 kubelet[3438]: I0130 14:04:27.047799 3438 scope.go:117] "RemoveContainer" containerID="764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017"
Jan 30 14:04:27.050044 containerd[2132]: time="2025-01-30T14:04:27.049990012Z" level=info msg="RemoveContainer for \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\""
Jan 30 14:04:27.056274 containerd[2132]: time="2025-01-30T14:04:27.056160412Z" level=info msg="RemoveContainer for \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\" returns successfully"
Jan 30 14:04:27.056713 kubelet[3438]: I0130 14:04:27.056573 3438 scope.go:117] "RemoveContainer" containerID="52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244"
Jan 30 14:04:27.058646 containerd[2132]: time="2025-01-30T14:04:27.058595200Z" level=info msg="RemoveContainer for \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\""
Jan 30 14:04:27.064519 containerd[2132]: time="2025-01-30T14:04:27.064453000Z" level=info msg="RemoveContainer for \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\" returns successfully"
Jan 30 14:04:27.064827 kubelet[3438]: I0130 14:04:27.064779 3438 scope.go:117] "RemoveContainer" containerID="2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b"
Jan 30 14:04:27.065369 containerd[2132]: time="2025-01-30T14:04:27.065135440Z" level=error msg="ContainerStatus for \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\": not found"
Jan 30 14:04:27.065660 kubelet[3438]: E0130 14:04:27.065603 3438 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\": not found" containerID="2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b"
Jan 30 14:04:27.065751 kubelet[3438]: I0130 14:04:27.065655 3438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b"} err="failed to get container status \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e91864a5bd46a8b7a8833aa3889bd6acb139b62187bf3029858864c913eab0b\": not found"
Jan 30 14:04:27.065751 kubelet[3438]: I0130 14:04:27.065696 3438 scope.go:117] "RemoveContainer" containerID="d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31"
Jan 30 14:04:27.066452 containerd[2132]: time="2025-01-30T14:04:27.066273556Z" level=error msg="ContainerStatus for \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\": not found"
Jan 30 14:04:27.066748 kubelet[3438]: E0130 14:04:27.066689 3438 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\": not found" containerID="d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31"
Jan 30 14:04:27.066855 kubelet[3438]: I0130 14:04:27.066739 3438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31"} err="failed to get container status \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7b9a6742db03d8d3cde9412b148d050a3d9d64762da387002b8171aa91c5e31\": not found"
Jan 30 14:04:27.066855 kubelet[3438]: I0130 14:04:27.066777 3438 scope.go:117] "RemoveContainer" containerID="f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3"
Jan 30 14:04:27.067418 containerd[2132]: time="2025-01-30T14:04:27.067353484Z" level=error msg="ContainerStatus for \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\": not found"
Jan 30 14:04:27.067685 kubelet[3438]: E0130 14:04:27.067643 3438 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\": not found" containerID="f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3"
Jan 30 14:04:27.067824 kubelet[3438]: I0130 14:04:27.067697 3438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3"} err="failed to get container status \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f96d8b8d6077e9ce21407e2a9d7cf8f19f2cda27f531535e0bc1287dfa064be3\": not found"
Jan 30 14:04:27.067824 kubelet[3438]: I0130 14:04:27.067734 3438 scope.go:117] "RemoveContainer" containerID="764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017"
Jan 30 14:04:27.068099 containerd[2132]: time="2025-01-30T14:04:27.068043256Z" level=error msg="ContainerStatus for \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\": not found"
Jan 30 14:04:27.068321 kubelet[3438]: E0130 14:04:27.068277 3438 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\": not found" containerID="764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017"
Jan 30 14:04:27.068457 kubelet[3438]: I0130 14:04:27.068325 3438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017"} err="failed to get container status \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\": rpc error: code = NotFound desc = an error occurred when try to find container \"764f032fcde9082311866613fe29269031e6fd6f093c7700c88fca403cc79017\": not found"
Jan 30 14:04:27.068457 kubelet[3438]: I0130 14:04:27.068358 3438 scope.go:117] "RemoveContainer" containerID="52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244"
Jan 30 14:04:27.068938 containerd[2132]: time="2025-01-30T14:04:27.068689912Z" level=error msg="ContainerStatus for \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\": not found"
Jan 30 14:04:27.069025 kubelet[3438]: E0130 14:04:27.068953 3438 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\": not found" containerID="52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244"
Jan 30 14:04:27.069085 kubelet[3438]: I0130 14:04:27.069025 3438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244"} err="failed to get container status \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\": rpc error: code = NotFound desc = an error occurred when try to find container \"52215b8d70ccf81a9d17c6a957c58aed22a84b73bf4eab6f106f01d8a4726244\": not found"
Jan 30 14:04:27.133541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1-rootfs.mount: Deactivated successfully.
Jan 30 14:04:27.133815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d-rootfs.mount: Deactivated successfully.
Jan 30 14:04:27.134029 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d-shm.mount: Deactivated successfully.
Jan 30 14:04:27.134321 systemd[1]: var-lib-kubelet-pods-72c3d39f\x2d1f8e\x2d4928\x2da86a\x2db12615530dbb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvc8vm.mount: Deactivated successfully.
Jan 30 14:04:27.134552 systemd[1]: var-lib-kubelet-pods-ce0f887d\x2d505e\x2d4c99\x2d9535\x2da24058f83355-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhchfk.mount: Deactivated successfully.
Jan 30 14:04:27.134773 systemd[1]: var-lib-kubelet-pods-ce0f887d\x2d505e\x2d4c99\x2d9535\x2da24058f83355-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 14:04:27.135006 systemd[1]: var-lib-kubelet-pods-ce0f887d\x2d505e\x2d4c99\x2d9535\x2da24058f83355-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 14:04:27.666766 kubelet[3438]: E0130 14:04:27.666710 3438 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 14:04:28.057015 sshd[5239]: pam_unix(sshd:session): session closed for user core
Jan 30 14:04:28.063816 systemd[1]: sshd@25-172.31.23.215:22-139.178.89.65:49668.service: Deactivated successfully.
Jan 30 14:04:28.072653 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 14:04:28.074372 systemd-logind[2104]: Session 26 logged out. Waiting for processes to exit.
Jan 30 14:04:28.076546 systemd-logind[2104]: Removed session 26.
Jan 30 14:04:28.090061 systemd[1]: Started sshd@26-172.31.23.215:22-139.178.89.65:49676.service - OpenSSH per-connection server daemon (139.178.89.65:49676).
Jan 30 14:04:28.275526 sshd[5407]: Accepted publickey for core from 139.178.89.65 port 49676 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:04:28.278220 sshd[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:04:28.287488 systemd-logind[2104]: New session 27 of user core.
Jan 30 14:04:28.293853 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 14:04:28.402297 kubelet[3438]: I0130 14:04:28.400950 3438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c3d39f-1f8e-4928-a86a-b12615530dbb" path="/var/lib/kubelet/pods/72c3d39f-1f8e-4928-a86a-b12615530dbb/volumes"
Jan 30 14:04:28.402297 kubelet[3438]: I0130 14:04:28.402061 3438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce0f887d-505e-4c99-9535-a24058f83355" path="/var/lib/kubelet/pods/ce0f887d-505e-4c99-9535-a24058f83355/volumes"
Jan 30 14:04:28.709167 ntpd[2081]: Deleting interface #10 lxc_health, fe80::812:5aff:fef6:b2ee%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs
Jan 30 14:04:28.710153 ntpd[2081]: 30 Jan 14:04:28 ntpd[2081]: Deleting interface #10 lxc_health, fe80::812:5aff:fef6:b2ee%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs
Jan 30 14:04:29.897304 sshd[5407]: pam_unix(sshd:session): session closed for user core
Jan 30 14:04:29.910426 systemd[1]: sshd@26-172.31.23.215:22-139.178.89.65:49676.service: Deactivated successfully.
Jan 30 14:04:29.926982 systemd-logind[2104]: Session 27 logged out. Waiting for processes to exit.
Jan 30 14:04:29.935168 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 14:04:29.950682 systemd[1]: Started sshd@27-172.31.23.215:22-139.178.89.65:49690.service - OpenSSH per-connection server daemon (139.178.89.65:49690).
Jan 30 14:04:29.952363 systemd-logind[2104]: Removed session 27.
Jan 30 14:04:30.006327 kubelet[3438]: I0130 14:04:30.002053 3438 topology_manager.go:215] "Topology Admit Handler" podUID="ce5bc221-fa9a-4cfc-98ec-017de058adfb" podNamespace="kube-system" podName="cilium-28m49"
Jan 30 14:04:30.006327 kubelet[3438]: E0130 14:04:30.002134 3438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce0f887d-505e-4c99-9535-a24058f83355" containerName="apply-sysctl-overwrites"
Jan 30 14:04:30.006327 kubelet[3438]: E0130 14:04:30.002153 3438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce0f887d-505e-4c99-9535-a24058f83355" containerName="mount-bpf-fs"
Jan 30 14:04:30.006327 kubelet[3438]: E0130 14:04:30.002169 3438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72c3d39f-1f8e-4928-a86a-b12615530dbb" containerName="cilium-operator"
Jan 30 14:04:30.006327 kubelet[3438]: E0130 14:04:30.002213 3438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce0f887d-505e-4c99-9535-a24058f83355" containerName="cilium-agent"
Jan 30 14:04:30.006327 kubelet[3438]: E0130 14:04:30.002234 3438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce0f887d-505e-4c99-9535-a24058f83355" containerName="mount-cgroup"
Jan 30 14:04:30.006327 kubelet[3438]: E0130 14:04:30.002252 3438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce0f887d-505e-4c99-9535-a24058f83355" containerName="clean-cilium-state"
Jan 30 14:04:30.006327 kubelet[3438]: I0130 14:04:30.002294 3438 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce0f887d-505e-4c99-9535-a24058f83355" containerName="cilium-agent"
Jan 30 14:04:30.006327 kubelet[3438]: I0130 14:04:30.002310 3438 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c3d39f-1f8e-4928-a86a-b12615530dbb" containerName="cilium-operator"
Jan 30 14:04:30.092103 kubelet[3438]: I0130 14:04:30.091941 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-lib-modules\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.092457 kubelet[3438]: I0130 14:04:30.092065 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-xtables-lock\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.092457 kubelet[3438]: I0130 14:04:30.092408 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce5bc221-fa9a-4cfc-98ec-017de058adfb-cilium-config-path\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.092844 kubelet[3438]: I0130 14:04:30.092659 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4m4h\" (UniqueName: \"kubernetes.io/projected/ce5bc221-fa9a-4cfc-98ec-017de058adfb-kube-api-access-v4m4h\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.092844 kubelet[3438]: I0130 14:04:30.092762 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-bpf-maps\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.093083 kubelet[3438]: I0130 14:04:30.092939 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce5bc221-fa9a-4cfc-98ec-017de058adfb-hubble-tls\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.093568 kubelet[3438]: I0130 14:04:30.093230 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-host-proc-sys-net\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.094020 kubelet[3438]: I0130 14:04:30.093399 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce5bc221-fa9a-4cfc-98ec-017de058adfb-clustermesh-secrets\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.094020 kubelet[3438]: I0130 14:04:30.093792 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-hostproc\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.094668 kubelet[3438]: I0130 14:04:30.094348 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-etc-cni-netd\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.094668 kubelet[3438]: I0130 14:04:30.094626 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce5bc221-fa9a-4cfc-98ec-017de058adfb-cilium-ipsec-secrets\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.094968 kubelet[3438]: I0130 14:04:30.094825 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-host-proc-sys-kernel\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.094968 kubelet[3438]: I0130 14:04:30.094931 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-cilium-run\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.095367 kubelet[3438]: I0130 14:04:30.095159 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-cilium-cgroup\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.095367 kubelet[3438]: I0130 14:04:30.095265 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce5bc221-fa9a-4cfc-98ec-017de058adfb-cni-path\") pod \"cilium-28m49\" (UID: \"ce5bc221-fa9a-4cfc-98ec-017de058adfb\") " pod="kube-system/cilium-28m49"
Jan 30 14:04:30.187806 sshd[5420]: Accepted publickey for core from 139.178.89.65 port 49690 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:04:30.189354 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:04:30.211293 systemd-logind[2104]: New session 28 of user core.
Jan 30 14:04:30.216654 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 14:04:30.348566 containerd[2132]: time="2025-01-30T14:04:30.348518661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-28m49,Uid:ce5bc221-fa9a-4cfc-98ec-017de058adfb,Namespace:kube-system,Attempt:0,}"
Jan 30 14:04:30.372446 sshd[5420]: pam_unix(sshd:session): session closed for user core
Jan 30 14:04:30.386690 systemd[1]: sshd@27-172.31.23.215:22-139.178.89.65:49690.service: Deactivated successfully.
Jan 30 14:04:30.398579 systemd-logind[2104]: Session 28 logged out. Waiting for processes to exit.
Jan 30 14:04:30.398953 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 14:04:30.413709 systemd[1]: Started sshd@28-172.31.23.215:22-139.178.89.65:49700.service - OpenSSH per-connection server daemon (139.178.89.65:49700).
Jan 30 14:04:30.416739 systemd-logind[2104]: Removed session 28.
Jan 30 14:04:30.419684 containerd[2132]: time="2025-01-30T14:04:30.419241453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:04:30.421566 containerd[2132]: time="2025-01-30T14:04:30.419984697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:04:30.421566 containerd[2132]: time="2025-01-30T14:04:30.420101901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:04:30.421566 containerd[2132]: time="2025-01-30T14:04:30.421439421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:04:30.496278 containerd[2132]: time="2025-01-30T14:04:30.495734781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-28m49,Uid:ce5bc221-fa9a-4cfc-98ec-017de058adfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\""
Jan 30 14:04:30.510887 containerd[2132]: time="2025-01-30T14:04:30.510594213Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 14:04:30.533400 containerd[2132]: time="2025-01-30T14:04:30.533320437Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7bb274b2ae84b5bb458b8ddd5cc4c4459e4bc8dd90881d7825d76c11fbea8a64\""
Jan 30 14:04:30.534590 containerd[2132]: time="2025-01-30T14:04:30.534529749Z" level=info msg="StartContainer for \"7bb274b2ae84b5bb458b8ddd5cc4c4459e4bc8dd90881d7825d76c11fbea8a64\""
Jan 30 14:04:30.615246 sshd[5445]: Accepted publickey for core from 139.178.89.65 port 49700 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:04:30.620449 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:04:30.645722 systemd-logind[2104]: New session 29 of user core.
Jan 30 14:04:30.649776 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 30 14:04:30.664646 containerd[2132]: time="2025-01-30T14:04:30.664570750Z" level=info msg="StartContainer for \"7bb274b2ae84b5bb458b8ddd5cc4c4459e4bc8dd90881d7825d76c11fbea8a64\" returns successfully"
Jan 30 14:04:30.716651 containerd[2132]: time="2025-01-30T14:04:30.716430802Z" level=info msg="shim disconnected" id=7bb274b2ae84b5bb458b8ddd5cc4c4459e4bc8dd90881d7825d76c11fbea8a64 namespace=k8s.io
Jan 30 14:04:30.716901 containerd[2132]: time="2025-01-30T14:04:30.716662030Z" level=warning msg="cleaning up after shim disconnected" id=7bb274b2ae84b5bb458b8ddd5cc4c4459e4bc8dd90881d7825d76c11fbea8a64 namespace=k8s.io
Jan 30 14:04:30.716901 containerd[2132]: time="2025-01-30T14:04:30.716684914Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:04:31.023310 containerd[2132]: time="2025-01-30T14:04:31.023231936Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 14:04:31.048586 containerd[2132]: time="2025-01-30T14:04:31.048477200Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51ecfe2beea219501a2ad697c30a4aea03117d32384374f34b7bd0f9595cc356\""
Jan 30 14:04:31.049512 containerd[2132]: time="2025-01-30T14:04:31.049419452Z" level=info msg="StartContainer for \"51ecfe2beea219501a2ad697c30a4aea03117d32384374f34b7bd0f9595cc356\""
Jan 30 14:04:31.143601 containerd[2132]: time="2025-01-30T14:04:31.143548209Z" level=info msg="StartContainer for \"51ecfe2beea219501a2ad697c30a4aea03117d32384374f34b7bd0f9595cc356\" returns successfully"
Jan 30 14:04:31.196347 containerd[2132]: time="2025-01-30T14:04:31.196210425Z" level=info msg="shim disconnected" id=51ecfe2beea219501a2ad697c30a4aea03117d32384374f34b7bd0f9595cc356 namespace=k8s.io
Jan 30 14:04:31.196347 containerd[2132]: time="2025-01-30T14:04:31.196281909Z" level=warning msg="cleaning up after shim disconnected" id=51ecfe2beea219501a2ad697c30a4aea03117d32384374f34b7bd0f9595cc356 namespace=k8s.io
Jan 30 14:04:31.196347 containerd[2132]: time="2025-01-30T14:04:31.196303941Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:04:32.026205 containerd[2132]: time="2025-01-30T14:04:32.026114229Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 14:04:32.071213 containerd[2132]: time="2025-01-30T14:04:32.070546473Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"67ddf65fe1c1d40c3170ad556194792e4d7664ad66d0557d49dd99eb562cefda\""
Jan 30 14:04:32.096752 containerd[2132]: time="2025-01-30T14:04:32.089453673Z" level=info msg="StartContainer for \"67ddf65fe1c1d40c3170ad556194792e4d7664ad66d0557d49dd99eb562cefda\""
Jan 30 14:04:32.269691 containerd[2132]: time="2025-01-30T14:04:32.269636554Z" level=info msg="StartContainer for \"67ddf65fe1c1d40c3170ad556194792e4d7664ad66d0557d49dd99eb562cefda\" returns successfully"
Jan 30 14:04:32.318796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67ddf65fe1c1d40c3170ad556194792e4d7664ad66d0557d49dd99eb562cefda-rootfs.mount: Deactivated successfully.
Jan 30 14:04:32.322096 containerd[2132]: time="2025-01-30T14:04:32.321771838Z" level=info msg="shim disconnected" id=67ddf65fe1c1d40c3170ad556194792e4d7664ad66d0557d49dd99eb562cefda namespace=k8s.io
Jan 30 14:04:32.322096 containerd[2132]: time="2025-01-30T14:04:32.321843718Z" level=warning msg="cleaning up after shim disconnected" id=67ddf65fe1c1d40c3170ad556194792e4d7664ad66d0557d49dd99eb562cefda namespace=k8s.io
Jan 30 14:04:32.322096 containerd[2132]: time="2025-01-30T14:04:32.321863590Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:04:32.427018 containerd[2132]: time="2025-01-30T14:04:32.426943499Z" level=info msg="StopPodSandbox for \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\""
Jan 30 14:04:32.427439 containerd[2132]: time="2025-01-30T14:04:32.427385171Z" level=info msg="TearDown network for sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" successfully"
Jan 30 14:04:32.427531 containerd[2132]: time="2025-01-30T14:04:32.427428371Z" level=info msg="StopPodSandbox for \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" returns successfully"
Jan 30 14:04:32.428932 containerd[2132]: time="2025-01-30T14:04:32.428852759Z" level=info msg="RemovePodSandbox for \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\""
Jan 30 14:04:32.429072 containerd[2132]: time="2025-01-30T14:04:32.428930879Z" level=info msg="Forcibly stopping sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\""
Jan 30 14:04:32.429230 containerd[2132]: time="2025-01-30T14:04:32.429155627Z" level=info msg="TearDown network for sandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" successfully"
Jan 30 14:04:32.435792 containerd[2132]: time="2025-01-30T14:04:32.435723767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 14:04:32.435929 containerd[2132]: time="2025-01-30T14:04:32.435824387Z" level=info msg="RemovePodSandbox \"92410f455e7cc820d030d57ffaf1b2c54e34bbd60fc3796a2fb757258e1cb29d\" returns successfully"
Jan 30 14:04:32.436698 containerd[2132]: time="2025-01-30T14:04:32.436537223Z" level=info msg="StopPodSandbox for \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\""
Jan 30 14:04:32.436698 containerd[2132]: time="2025-01-30T14:04:32.436673339Z" level=info msg="TearDown network for sandbox \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\" successfully"
Jan 30 14:04:32.436698 containerd[2132]: time="2025-01-30T14:04:32.436697339Z" level=info msg="StopPodSandbox for \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\" returns successfully"
Jan 30 14:04:32.437731 containerd[2132]: time="2025-01-30T14:04:32.437479919Z" level=info msg="RemovePodSandbox for \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\""
Jan 30 14:04:32.437731 containerd[2132]: time="2025-01-30T14:04:32.437526971Z" level=info msg="Forcibly stopping sandbox \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\""
Jan 30 14:04:32.437731 containerd[2132]: time="2025-01-30T14:04:32.437618483Z" level=info msg="TearDown network for sandbox \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\" successfully"
Jan 30 14:04:32.443551 containerd[2132]: time="2025-01-30T14:04:32.443485211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 14:04:32.443692 containerd[2132]: time="2025-01-30T14:04:32.443568779Z" level=info msg="RemovePodSandbox \"88f8dd3edcba2539d3352aed93a3854c1bf2f026000cb17d316924caaa08bdd1\" returns successfully"
Jan 30 14:04:32.668805 kubelet[3438]: E0130 14:04:32.668762 3438 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 14:04:33.035238 containerd[2132]: time="2025-01-30T14:04:33.034873606Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 14:04:33.067486 containerd[2132]: time="2025-01-30T14:04:33.067270894Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f829b132e513b45935cd06ca6dbdf36c497ab309ddfad14120fd3d21b62cbf78\""
Jan 30 14:04:33.069293 containerd[2132]: time="2025-01-30T14:04:33.068508586Z" level=info msg="StartContainer for \"f829b132e513b45935cd06ca6dbdf36c497ab309ddfad14120fd3d21b62cbf78\""
Jan 30 14:04:33.165596 containerd[2132]: time="2025-01-30T14:04:33.165461951Z" level=info msg="StartContainer for \"f829b132e513b45935cd06ca6dbdf36c497ab309ddfad14120fd3d21b62cbf78\" returns successfully"
Jan 30 14:04:33.208672 containerd[2132]: time="2025-01-30T14:04:33.208525211Z" level=info msg="shim disconnected" id=f829b132e513b45935cd06ca6dbdf36c497ab309ddfad14120fd3d21b62cbf78 namespace=k8s.io
Jan 30 14:04:33.208931 containerd[2132]: time="2025-01-30T14:04:33.208676699Z" level=warning msg="cleaning up after shim disconnected" id=f829b132e513b45935cd06ca6dbdf36c497ab309ddfad14120fd3d21b62cbf78 namespace=k8s.io
Jan 30 14:04:33.208931 containerd[2132]: time="2025-01-30T14:04:33.208699271Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:04:33.315766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f829b132e513b45935cd06ca6dbdf36c497ab309ddfad14120fd3d21b62cbf78-rootfs.mount: Deactivated successfully.
Jan 30 14:04:34.050866 containerd[2132]: time="2025-01-30T14:04:34.050435135Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 14:04:34.083843 containerd[2132]: time="2025-01-30T14:04:34.083696687Z" level=info msg="CreateContainer within sandbox \"ba1d50352e3957cf635f1d59235c9d751acca5df47c154b762d83eb59b2c2abe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e680c9183ef02971b456b51b7c8423e07a732ebad523cac946f01461d710670b\""
Jan 30 14:04:34.084587 containerd[2132]: time="2025-01-30T14:04:34.084501431Z" level=info msg="StartContainer for \"e680c9183ef02971b456b51b7c8423e07a732ebad523cac946f01461d710670b\""
Jan 30 14:04:34.198771 containerd[2132]: time="2025-01-30T14:04:34.196578096Z" level=info msg="StartContainer for \"e680c9183ef02971b456b51b7c8423e07a732ebad523cac946f01461d710670b\" returns successfully"
Jan 30 14:04:34.923263 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 14:04:35.163104 kubelet[3438]: I0130 14:04:35.159174 3438 setters.go:580] "Node became not ready" node="ip-172-31-23-215" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T14:04:35Z","lastTransitionTime":"2025-01-30T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 14:04:37.393565 kubelet[3438]: E0130 14:04:37.392627 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-k92z5" podUID="02bfb0e8-8efe-4075-97d3-37003636ef02"
Jan 30 14:04:37.468731 kubelet[3438]: E0130 14:04:37.468145 3438 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:40206->127.0.0.1:46055: write tcp 172.31.23.215:10250->172.31.23.215:48146: write: connection reset by peer
Jan 30 14:04:39.093792 systemd-networkd[1692]: lxc_health: Link UP
Jan 30 14:04:39.096925 systemd-networkd[1692]: lxc_health: Gained carrier
Jan 30 14:04:39.115011 (udev-worker)[6258]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 14:04:39.688011 systemd[1]: run-containerd-runc-k8s.io-e680c9183ef02971b456b51b7c8423e07a732ebad523cac946f01461d710670b-runc.LNksES.mount: Deactivated successfully.
Jan 30 14:04:39.866847 kubelet[3438]: E0130 14:04:39.866791 3438 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40210->127.0.0.1:46055: write tcp 127.0.0.1:40210->127.0.0.1:46055: write: broken pipe
Jan 30 14:04:40.389237 kubelet[3438]: I0130 14:04:40.387416 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-28m49" podStartSLOduration=11.387395118 podStartE2EDuration="11.387395118s" podCreationTimestamp="2025-01-30 14:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:04:35.082079004 +0000 UTC m=+122.927916456" watchObservedRunningTime="2025-01-30 14:04:40.387395118 +0000 UTC m=+128.233232570"
Jan 30 14:04:41.111429 systemd-networkd[1692]: lxc_health: Gained IPv6LL
Jan 30 14:04:43.709694 ntpd[2081]: Listen normally on 13 lxc_health [fe80::301a:96ff:fe67:76fb%14]:123
Jan 30 14:04:43.710487 ntpd[2081]: 30 Jan 14:04:43 ntpd[2081]: Listen normally on 13 lxc_health [fe80::301a:96ff:fe67:76fb%14]:123
Jan 30 14:04:44.569136 sshd[5445]: pam_unix(sshd:session): session closed for user core
Jan 30 14:04:44.575454 systemd-logind[2104]: Session 29 logged out. Waiting for processes to exit.
Jan 30 14:04:44.584444 systemd[1]: sshd@28-172.31.23.215:22-139.178.89.65:49700.service: Deactivated successfully.
Jan 30 14:04:44.596291 systemd[1]: session-29.scope: Deactivated successfully.
Jan 30 14:04:44.609540 systemd-logind[2104]: Removed session 29.
Jan 30 14:04:59.393667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2c4f87ab947c826887f002f2946d42e816116f3c71ffe3a95b5507f4cc37127-rootfs.mount: Deactivated successfully.
Jan 30 14:04:59.427104 containerd[2132]: time="2025-01-30T14:04:59.426930061Z" level=info msg="shim disconnected" id=e2c4f87ab947c826887f002f2946d42e816116f3c71ffe3a95b5507f4cc37127 namespace=k8s.io
Jan 30 14:04:59.427104 containerd[2132]: time="2025-01-30T14:04:59.427041973Z" level=warning msg="cleaning up after shim disconnected" id=e2c4f87ab947c826887f002f2946d42e816116f3c71ffe3a95b5507f4cc37127 namespace=k8s.io
Jan 30 14:04:59.427104 containerd[2132]: time="2025-01-30T14:04:59.427064197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:05:00.125659 kubelet[3438]: I0130 14:05:00.124273 3438 scope.go:117] "RemoveContainer" containerID="e2c4f87ab947c826887f002f2946d42e816116f3c71ffe3a95b5507f4cc37127"
Jan 30 14:05:00.130061 containerd[2132]: time="2025-01-30T14:05:00.129798072Z" level=info msg="CreateContainer within sandbox \"34bd861f22c00ddc2c87f4cc92916accb316cef357d2969b50551dac73549be5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 14:05:00.154727 containerd[2132]: time="2025-01-30T14:05:00.154653781Z" level=info msg="CreateContainer within sandbox \"34bd861f22c00ddc2c87f4cc92916accb316cef357d2969b50551dac73549be5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bc5c6162b823c7a4025130e95f426003b5e2446a64f5236b026cfd883dd24ce7\""
Jan 30 14:05:00.155487 containerd[2132]: time="2025-01-30T14:05:00.155385337Z" level=info msg="StartContainer for \"bc5c6162b823c7a4025130e95f426003b5e2446a64f5236b026cfd883dd24ce7\""
Jan 30 14:05:00.269393 containerd[2132]: time="2025-01-30T14:05:00.269243413Z" level=info msg="StartContainer for \"bc5c6162b823c7a4025130e95f426003b5e2446a64f5236b026cfd883dd24ce7\" returns successfully"
Jan 30 14:05:04.446506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c32bbb46f1c42fae627b81fd0627cc16f2b13d526453629048591d105510e46a-rootfs.mount: Deactivated successfully.
Jan 30 14:05:04.457489 containerd[2132]: time="2025-01-30T14:05:04.457396818Z" level=info msg="shim disconnected" id=c32bbb46f1c42fae627b81fd0627cc16f2b13d526453629048591d105510e46a namespace=k8s.io
Jan 30 14:05:04.457489 containerd[2132]: time="2025-01-30T14:05:04.457477674Z" level=warning msg="cleaning up after shim disconnected" id=c32bbb46f1c42fae627b81fd0627cc16f2b13d526453629048591d105510e46a namespace=k8s.io
Jan 30 14:05:04.458263 containerd[2132]: time="2025-01-30T14:05:04.457499718Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:05:04.477434 containerd[2132]: time="2025-01-30T14:05:04.477323430Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:05:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 14:05:05.146262 kubelet[3438]: I0130 14:05:05.146165 3438 scope.go:117] "RemoveContainer" containerID="c32bbb46f1c42fae627b81fd0627cc16f2b13d526453629048591d105510e46a"
Jan 30 14:05:05.149646 containerd[2132]: time="2025-01-30T14:05:05.149578625Z" level=info msg="CreateContainer within sandbox \"b9e423a4edf0ea55f8b62f701592b73f17a95f36fba486675dd0bc178f4f32ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 30 14:05:05.175423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967403759.mount: Deactivated successfully.
Jan 30 14:05:05.179545 containerd[2132]: time="2025-01-30T14:05:05.179377542Z" level=info msg="CreateContainer within sandbox \"b9e423a4edf0ea55f8b62f701592b73f17a95f36fba486675dd0bc178f4f32ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"74741a6d2160b405aaf1042783eeeb7858160bace60cb33ca42a9f0bba28a414\""
Jan 30 14:05:05.180601 containerd[2132]: time="2025-01-30T14:05:05.180356658Z" level=info msg="StartContainer for \"74741a6d2160b405aaf1042783eeeb7858160bace60cb33ca42a9f0bba28a414\""
Jan 30 14:05:05.293162 containerd[2132]: time="2025-01-30T14:05:05.293047470Z" level=info msg="StartContainer for \"74741a6d2160b405aaf1042783eeeb7858160bace60cb33ca42a9f0bba28a414\" returns successfully"
Jan 30 14:05:06.060934 kubelet[3438]: E0130 14:05:06.060589 3438 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-215?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"