Jan 30 14:00:29.187884 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 30 14:00:29.187930 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:00:29.187955 kernel: KASLR disabled due to lack of seed
Jan 30 14:00:29.187971 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:00:29.187987 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 30 14:00:29.188002 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:00:29.188020 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 30 14:00:29.188036 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 30 14:00:29.188096 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 14:00:29.188114 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 30 14:00:29.188137 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 14:00:29.188153 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 30 14:00:29.188169 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 30 14:00:29.188185 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 30 14:00:29.188203 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 14:00:29.188224 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 30 14:00:29.188242 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 30 14:00:29.188258 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 30 14:00:29.188274 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 30 14:00:29.188291 kernel: printk: bootconsole [uart0] enabled
Jan 30 14:00:29.188308 kernel: NUMA: Failed to initialise from firmware
Jan 30 14:00:29.188325 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 14:00:29.188342 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 30 14:00:29.188396 kernel: Zone ranges:
Jan 30 14:00:29.188419 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 14:00:29.188437 kernel:   DMA32    empty
Jan 30 14:00:29.188459 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 30 14:00:29.188476 kernel: Movable zone start for each node
Jan 30 14:00:29.188492 kernel: Early memory node ranges
Jan 30 14:00:29.188509 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 30 14:00:29.188525 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 30 14:00:29.188541 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jan 30 14:00:29.188557 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 30 14:00:29.188574 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 30 14:00:29.188590 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 30 14:00:29.188606 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 30 14:00:29.188622 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 30 14:00:29.188638 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 14:00:29.188659 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 30 14:00:29.188676 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:00:29.188699 kernel: psci: PSCIv1.0 detected in firmware.
Jan 30 14:00:29.188716 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:00:29.188734 kernel: psci: Trusted OS migration not required
Jan 30 14:00:29.188755 kernel: psci: SMC Calling Convention v1.1
Jan 30 14:00:29.188772 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:00:29.188789 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:00:29.188807 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:00:29.188824 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:00:29.188841 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:00:29.188858 kernel: CPU features: detected: Spectre-v2
Jan 30 14:00:29.188875 kernel: CPU features: detected: Spectre-v3a
Jan 30 14:00:29.188893 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:00:29.188910 kernel: CPU features: detected: ARM erratum 1742098
Jan 30 14:00:29.188927 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 30 14:00:29.188949 kernel: alternatives: applying boot alternatives
Jan 30 14:00:29.188969 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:00:29.188987 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:00:29.189005 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:00:29.189022 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:00:29.189039 kernel: Fallback order for Node 0: 0
Jan 30 14:00:29.192185 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Jan 30 14:00:29.192212 kernel: Policy zone: Normal
Jan 30 14:00:29.192230 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:00:29.192248 kernel: software IO TLB: area num 2.
Jan 30 14:00:29.192266 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 30 14:00:29.192292 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 30 14:00:29.192310 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:00:29.192327 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:00:29.192345 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:00:29.192363 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:00:29.192381 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:00:29.192399 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:00:29.192416 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:00:29.192433 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:00:29.192451 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:00:29.192468 kernel: GICv3: 96 SPIs implemented
Jan 30 14:00:29.192489 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:00:29.192507 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:00:29.192524 kernel: GICv3: GICv3 features: 16 PPIs
Jan 30 14:00:29.192541 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 30 14:00:29.192558 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 30 14:00:29.192576 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 14:00:29.192594 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 14:00:29.192611 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 30 14:00:29.192628 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 30 14:00:29.192645 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 30 14:00:29.192663 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:00:29.192680 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 30 14:00:29.192702 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 30 14:00:29.192720 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 30 14:00:29.192737 kernel: Console: colour dummy device 80x25
Jan 30 14:00:29.192755 kernel: printk: console [tty1] enabled
Jan 30 14:00:29.192773 kernel: ACPI: Core revision 20230628
Jan 30 14:00:29.192791 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 30 14:00:29.192809 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:00:29.192827 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:00:29.192844 kernel: landlock: Up and running.
Jan 30 14:00:29.192867 kernel: SELinux: Initializing.
Jan 30 14:00:29.192886 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:00:29.192904 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:00:29.192921 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:00:29.192939 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:00:29.192957 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:00:29.192975 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:00:29.192992 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 30 14:00:29.193010 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 30 14:00:29.193032 kernel: Remapping and enabling EFI services.
Jan 30 14:00:29.193069 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:00:29.193091 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:00:29.193109 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 30 14:00:29.193127 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 30 14:00:29.193145 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 30 14:00:29.193163 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:00:29.193180 kernel: SMP: Total of 2 processors activated.
Jan 30 14:00:29.193198 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:00:29.193221 kernel: CPU features: detected: 32-bit EL1 Support
Jan 30 14:00:29.193239 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:00:29.193257 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:00:29.193287 kernel: alternatives: applying system-wide alternatives
Jan 30 14:00:29.193310 kernel: devtmpfs: initialized
Jan 30 14:00:29.193328 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:00:29.193347 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:00:29.193365 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:00:29.193383 kernel: SMBIOS 3.0.0 present.
Jan 30 14:00:29.193402 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 30 14:00:29.193425 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:00:29.193443 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:00:29.193462 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:00:29.193481 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:00:29.193499 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:00:29.193518 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Jan 30 14:00:29.193536 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:00:29.193559 kernel: cpuidle: using governor menu
Jan 30 14:00:29.193578 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:00:29.193596 kernel: ASID allocator initialised with 65536 entries
Jan 30 14:00:29.193615 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:00:29.193633 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:00:29.193651 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 30 14:00:29.193670 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:00:29.193688 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:00:29.193706 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:00:29.193729 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:00:29.193747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:00:29.193766 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:00:29.193784 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:00:29.193803 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:00:29.193821 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:00:29.193839 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:00:29.193857 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:00:29.193876 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:00:29.193898 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:00:29.193917 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:00:29.193935 kernel: ACPI: Interpreter enabled
Jan 30 14:00:29.193957 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:00:29.193984 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 14:00:29.194002 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 30 14:00:29.195360 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:00:29.195603 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 14:00:29.195811 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 14:00:29.196026 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 30 14:00:29.196265 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 30 14:00:29.196292 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 30 14:00:29.196311 kernel: acpiphp: Slot [1] registered
Jan 30 14:00:29.196330 kernel: acpiphp: Slot [2] registered
Jan 30 14:00:29.196348 kernel: acpiphp: Slot [3] registered
Jan 30 14:00:29.196367 kernel: acpiphp: Slot [4] registered
Jan 30 14:00:29.196391 kernel: acpiphp: Slot [5] registered
Jan 30 14:00:29.196410 kernel: acpiphp: Slot [6] registered
Jan 30 14:00:29.196428 kernel: acpiphp: Slot [7] registered
Jan 30 14:00:29.196446 kernel: acpiphp: Slot [8] registered
Jan 30 14:00:29.196465 kernel: acpiphp: Slot [9] registered
Jan 30 14:00:29.196483 kernel: acpiphp: Slot [10] registered
Jan 30 14:00:29.196502 kernel: acpiphp: Slot [11] registered
Jan 30 14:00:29.196520 kernel: acpiphp: Slot [12] registered
Jan 30 14:00:29.196538 kernel: acpiphp: Slot [13] registered
Jan 30 14:00:29.196556 kernel: acpiphp: Slot [14] registered
Jan 30 14:00:29.196579 kernel: acpiphp: Slot [15] registered
Jan 30 14:00:29.196597 kernel: acpiphp: Slot [16] registered
Jan 30 14:00:29.196615 kernel: acpiphp: Slot [17] registered
Jan 30 14:00:29.196633 kernel: acpiphp: Slot [18] registered
Jan 30 14:00:29.196652 kernel: acpiphp: Slot [19] registered
Jan 30 14:00:29.196670 kernel: acpiphp: Slot [20] registered
Jan 30 14:00:29.196688 kernel: acpiphp: Slot [21] registered
Jan 30 14:00:29.196706 kernel: acpiphp: Slot [22] registered
Jan 30 14:00:29.196724 kernel: acpiphp: Slot [23] registered
Jan 30 14:00:29.196747 kernel: acpiphp: Slot [24] registered
Jan 30 14:00:29.196766 kernel: acpiphp: Slot [25] registered
Jan 30 14:00:29.196784 kernel: acpiphp: Slot [26] registered
Jan 30 14:00:29.196802 kernel: acpiphp: Slot [27] registered
Jan 30 14:00:29.196820 kernel: acpiphp: Slot [28] registered
Jan 30 14:00:29.196839 kernel: acpiphp: Slot [29] registered
Jan 30 14:00:29.196857 kernel: acpiphp: Slot [30] registered
Jan 30 14:00:29.196875 kernel: acpiphp: Slot [31] registered
Jan 30 14:00:29.196893 kernel: PCI host bridge to bus 0000:00
Jan 30 14:00:29.197587 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 30 14:00:29.197812 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 14:00:29.198009 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 30 14:00:29.198248 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 30 14:00:29.198497 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 30 14:00:29.198724 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 30 14:00:29.198937 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 30 14:00:29.199208 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 14:00:29.199466 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 30 14:00:29.199688 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:00:29.199919 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 14:00:29.200207 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 30 14:00:29.200427 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 30 14:00:29.200648 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 30 14:00:29.200866 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:00:29.201117 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 30 14:00:29.201330 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 30 14:00:29.201542 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 30 14:00:29.201745 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 30 14:00:29.201959 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 30 14:00:29.202209 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 30 14:00:29.202401 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 14:00:29.202595 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 30 14:00:29.202624 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 14:00:29.202644 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 14:00:29.202664 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 14:00:29.202683 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 14:00:29.202702 kernel: iommu: Default domain type: Translated
Jan 30 14:00:29.202721 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:00:29.202749 kernel: efivars: Registered efivars operations
Jan 30 14:00:29.202768 kernel: vgaarb: loaded
Jan 30 14:00:29.202787 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:00:29.202806 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:00:29.202825 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:00:29.202844 kernel: pnp: PnP ACPI init
Jan 30 14:00:29.203622 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 30 14:00:29.203659 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 14:00:29.203688 kernel: NET: Registered PF_INET protocol family
Jan 30 14:00:29.203708 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:00:29.203727 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:00:29.203746 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:00:29.203765 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:00:29.203783 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:00:29.203802 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:00:29.203820 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:00:29.203839 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:00:29.203862 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:00:29.203881 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:00:29.203899 kernel: kvm [1]: HYP mode not available
Jan 30 14:00:29.203917 kernel: Initialise system trusted keyrings
Jan 30 14:00:29.203936 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:00:29.203954 kernel: Key type asymmetric registered
Jan 30 14:00:29.203972 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:00:29.203991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:00:29.204009 kernel: io scheduler mq-deadline registered
Jan 30 14:00:29.204033 kernel: io scheduler kyber registered
Jan 30 14:00:29.205632 kernel: io scheduler bfq registered
Jan 30 14:00:29.205920 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 30 14:00:29.205952 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 14:00:29.205972 kernel: ACPI: button: Power Button [PWRB]
Jan 30 14:00:29.205991 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 30 14:00:29.206011 kernel: ACPI: button: Sleep Button [SLPB]
Jan 30 14:00:29.206030 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:00:29.206141 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 14:00:29.206407 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 30 14:00:29.206438 kernel: printk: console [ttyS0] disabled
Jan 30 14:00:29.206459 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 30 14:00:29.206484 kernel: printk: console [ttyS0] enabled
Jan 30 14:00:29.206504 kernel: printk: bootconsole [uart0] disabled
Jan 30 14:00:29.206523 kernel: thunder_xcv, ver 1.0
Jan 30 14:00:29.206543 kernel: thunder_bgx, ver 1.0
Jan 30 14:00:29.206562 kernel: nicpf, ver 1.0
Jan 30 14:00:29.206589 kernel: nicvf, ver 1.0
Jan 30 14:00:29.206821 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 14:00:29.207029 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:00:28 UTC (1738245628)
Jan 30 14:00:29.210540 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:00:29.210580 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 30 14:00:29.210601 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 14:00:29.210619 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 14:00:29.210639 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:00:29.210667 kernel: Segment Routing with IPv6
Jan 30 14:00:29.210687 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:00:29.210706 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:00:29.210724 kernel: Key type dns_resolver registered
Jan 30 14:00:29.210743 kernel: registered taskstats version 1
Jan 30 14:00:29.210761 kernel: Loading compiled-in X.509 certificates
Jan 30 14:00:29.210781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 14:00:29.210799 kernel: Key type .fscrypt registered
Jan 30 14:00:29.210817 kernel: Key type fscrypt-provisioning registered
Jan 30 14:00:29.210840 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:00:29.210862 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:00:29.210880 kernel: ima: No architecture policies found
Jan 30 14:00:29.210898 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 14:00:29.210917 kernel: clk: Disabling unused clocks
Jan 30 14:00:29.210935 kernel: Freeing unused kernel memory: 39360K
Jan 30 14:00:29.210954 kernel: Run /init as init process
Jan 30 14:00:29.210972 kernel:   with arguments:
Jan 30 14:00:29.210990 kernel:     /init
Jan 30 14:00:29.211008 kernel:   with environment:
Jan 30 14:00:29.211031 kernel:     HOME=/
Jan 30 14:00:29.211074 kernel:     TERM=linux
Jan 30 14:00:29.211168 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:00:29.211199 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:00:29.211228 systemd[1]: Detected virtualization amazon.
Jan 30 14:00:29.211254 systemd[1]: Detected architecture arm64.
Jan 30 14:00:29.211276 systemd[1]: Running in initrd.
Jan 30 14:00:29.211304 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:00:29.211324 systemd[1]: Hostname set to .
Jan 30 14:00:29.211345 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:00:29.211365 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:00:29.211386 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:00:29.211406 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:00:29.211445 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:00:29.211470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:00:29.211497 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:00:29.211518 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:00:29.211542 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:00:29.211563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:00:29.211584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:00:29.211605 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:00:29.211625 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:00:29.211651 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:00:29.211671 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:00:29.211691 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:00:29.211711 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:00:29.211732 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:00:29.211752 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:00:29.211772 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:00:29.211793 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:00:29.211813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:00:29.211838 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:00:29.211859 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:00:29.211879 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:00:29.211900 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:00:29.211920 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:00:29.211940 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:00:29.211960 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:00:29.211981 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:00:29.212006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:00:29.212026 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:00:29.213075 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:00:29.213108 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:00:29.213131 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:00:29.213205 systemd-journald[251]: Collecting audit messages is disabled.
Jan 30 14:00:29.213251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:00:29.213272 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:00:29.213297 systemd-journald[251]: Journal started
Jan 30 14:00:29.213335 systemd-journald[251]: Runtime Journal (/run/log/journal/ec26d31a6f7a78cc0da0a3c28ce9769c) is 8.0M, max 75.3M, 67.3M free.
Jan 30 14:00:29.183293 systemd-modules-load[252]: Inserted module 'overlay'
Jan 30 14:00:29.221956 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:00:29.222038 kernel: Bridge firewalling registered
Jan 30 14:00:29.220932 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 30 14:00:29.232100 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:00:29.232866 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:00:29.239128 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:00:29.249772 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:00:29.254334 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:00:29.266434 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:00:29.293612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:00:29.302756 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:00:29.321134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:00:29.326308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:00:29.341479 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:00:29.350322 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:00:29.370444 dracut-cmdline[286]: dracut-dracut-053
Jan 30 14:00:29.377064 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:00:29.432589 systemd-resolved[287]: Positive Trust Anchors:
Jan 30 14:00:29.432625 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:00:29.432687 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:00:29.538071 kernel: SCSI subsystem initialized
Jan 30 14:00:29.543078 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:00:29.556087 kernel: iscsi: registered transport (tcp)
Jan 30 14:00:29.578328 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:00:29.578415 kernel: QLogic iSCSI HBA Driver
Jan 30 14:00:29.665076 kernel: random: crng init done
Jan 30 14:00:29.665321 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jan 30 14:00:29.668850 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:00:29.672959 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:00:29.694952 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:00:29.714503 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:00:29.748101 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:00:29.748187 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:00:29.748215 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:00:29.814096 kernel: raid6: neonx8 gen() 6735 MB/s
Jan 30 14:00:29.831084 kernel: raid6: neonx4 gen() 6584 MB/s
Jan 30 14:00:29.848086 kernel: raid6: neonx2 gen() 5466 MB/s
Jan 30 14:00:29.865082 kernel: raid6: neonx1 gen() 3960 MB/s
Jan 30 14:00:29.882086 kernel: raid6: int64x8 gen() 3827 MB/s
Jan 30 14:00:29.899085 kernel: raid6: int64x4 gen() 3729 MB/s
Jan 30 14:00:29.916083 kernel: raid6: int64x2 gen() 3615 MB/s
Jan 30 14:00:29.933850 kernel: raid6: int64x1 gen() 2764 MB/s
Jan 30 14:00:29.933885 kernel: raid6: using algorithm neonx8 gen() 6735 MB/s
Jan 30 14:00:29.951856 kernel: raid6: .... xor() 4814 MB/s, rmw enabled
Jan 30 14:00:29.951895 kernel: raid6: using neon recovery algorithm
Jan 30 14:00:29.960306 kernel: xor: measuring software checksum speed
Jan 30 14:00:29.960357 kernel: 8regs : 10586 MB/sec
Jan 30 14:00:29.961445 kernel: 32regs : 11942 MB/sec
Jan 30 14:00:29.962739 kernel: arm64_neon : 9564 MB/sec
Jan 30 14:00:29.962772 kernel: xor: using function: 32regs (11942 MB/sec)
Jan 30 14:00:30.047097 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:00:30.065012 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:00:30.084423 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:00:30.122879 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Jan 30 14:00:30.132360 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:00:30.147924 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:00:30.179211 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Jan 30 14:00:30.236122 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:00:30.256882 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:00:30.368738 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:00:30.382504 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:00:30.437897 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:00:30.442228 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:00:30.446613 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:00:30.448840 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:00:30.473465 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:00:30.502136 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:00:30.567030 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 14:00:30.567116 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 30 14:00:30.585663 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 30 14:00:30.585943 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 30 14:00:30.586245 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:f6:80:25:84:5b
Jan 30 14:00:30.592682 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:00:30.592945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:00:30.611093 (udev-worker)[517]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 14:00:30.623812 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:00:30.635228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:00:30.635556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:00:30.640301 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:00:30.661104 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 30 14:00:30.662628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:00:30.678957 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 30 14:00:30.679326 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 30 14:00:30.679565 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:00:30.681320 kernel: GPT:9289727 != 16777215
Jan 30 14:00:30.681384 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:00:30.681410 kernel: GPT:9289727 != 16777215
Jan 30 14:00:30.681435 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:00:30.681459 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:30.717574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:00:30.729661 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:00:30.777715 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (531)
Jan 30 14:00:30.787097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:00:30.827153 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (532)
Jan 30 14:00:30.905406 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 30 14:00:30.949676 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 30 14:00:30.967070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 14:00:30.982386 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 30 14:00:30.987017 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 30 14:00:31.004373 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:00:31.017897 disk-uuid[662]: Primary Header is updated.
Jan 30 14:00:31.017897 disk-uuid[662]: Secondary Entries is updated.
Jan 30 14:00:31.017897 disk-uuid[662]: Secondary Header is updated.
Jan 30 14:00:31.030085 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:31.042104 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:31.049098 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:32.051183 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:00:32.053448 disk-uuid[663]: The operation has completed successfully.
Jan 30 14:00:32.234308 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:00:32.236108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:00:32.285382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:00:32.293887 sh[1004]: Success
Jan 30 14:00:32.319083 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 14:00:32.433025 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:00:32.440283 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:00:32.466367 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:00:32.496223 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 14:00:32.496287 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:00:32.496314 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:00:32.496340 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:00:32.498455 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:00:32.633097 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 14:00:32.665302 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:00:32.668002 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:00:32.684427 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:00:32.691358 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:00:32.722313 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:32.722395 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:00:32.722427 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:00:32.732117 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:00:32.751154 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 14:00:32.753470 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:32.763312 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:00:32.774405 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:00:32.873634 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:00:32.884393 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:00:32.939008 systemd-networkd[1196]: lo: Link UP
Jan 30 14:00:32.939023 systemd-networkd[1196]: lo: Gained carrier
Jan 30 14:00:32.944724 systemd-networkd[1196]: Enumeration completed
Jan 30 14:00:32.946483 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:00:32.949404 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:00:32.949411 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:00:32.952766 systemd[1]: Reached target network.target - Network.
Jan 30 14:00:32.958934 systemd-networkd[1196]: eth0: Link UP
Jan 30 14:00:32.958942 systemd-networkd[1196]: eth0: Gained carrier
Jan 30 14:00:32.958960 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:00:32.977187 systemd-networkd[1196]: eth0: DHCPv4 address 172.31.25.125/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 14:00:33.163958 ignition[1109]: Ignition 2.19.0
Jan 30 14:00:33.163984 ignition[1109]: Stage: fetch-offline
Jan 30 14:00:33.165568 ignition[1109]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:33.165600 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:33.171980 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:00:33.167235 ignition[1109]: Ignition finished successfully
Jan 30 14:00:33.183378 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 14:00:33.209320 ignition[1206]: Ignition 2.19.0
Jan 30 14:00:33.209351 ignition[1206]: Stage: fetch
Jan 30 14:00:33.210963 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:33.210991 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:33.211741 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:33.243390 ignition[1206]: PUT result: OK
Jan 30 14:00:33.246274 ignition[1206]: parsed url from cmdline: ""
Jan 30 14:00:33.246296 ignition[1206]: no config URL provided
Jan 30 14:00:33.246311 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:00:33.246338 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:00:33.246369 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:33.250114 ignition[1206]: PUT result: OK
Jan 30 14:00:33.251782 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 30 14:00:33.257400 ignition[1206]: GET result: OK
Jan 30 14:00:33.258574 ignition[1206]: parsing config with SHA512: b4ecae0993500513531cb8fc156ec99e48c6cb411b1bee22616d128d8efd3f10a703f851565e205ecc46dfb23b823b5c840350235d6b5cd6606a79c98fb8d600
Jan 30 14:00:33.265697 unknown[1206]: fetched base config from "system"
Jan 30 14:00:33.265928 unknown[1206]: fetched base config from "system"
Jan 30 14:00:33.265943 unknown[1206]: fetched user config from "aws"
Jan 30 14:00:33.268344 ignition[1206]: fetch: fetch complete
Jan 30 14:00:33.268360 ignition[1206]: fetch: fetch passed
Jan 30 14:00:33.268449 ignition[1206]: Ignition finished successfully
Jan 30 14:00:33.278132 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:00:33.290351 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:00:33.318809 ignition[1213]: Ignition 2.19.0
Jan 30 14:00:33.318837 ignition[1213]: Stage: kargs
Jan 30 14:00:33.320495 ignition[1213]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:33.320521 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:33.320683 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:33.323791 ignition[1213]: PUT result: OK
Jan 30 14:00:33.331719 ignition[1213]: kargs: kargs passed
Jan 30 14:00:33.331825 ignition[1213]: Ignition finished successfully
Jan 30 14:00:33.337008 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:00:33.347350 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:00:33.375526 ignition[1219]: Ignition 2.19.0
Jan 30 14:00:33.375552 ignition[1219]: Stage: disks
Jan 30 14:00:33.376625 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:33.376651 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:33.376797 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:33.378553 ignition[1219]: PUT result: OK
Jan 30 14:00:33.388290 ignition[1219]: disks: disks passed
Jan 30 14:00:33.389626 ignition[1219]: Ignition finished successfully
Jan 30 14:00:33.392662 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:00:33.397736 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:00:33.402176 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:00:33.404720 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:00:33.406636 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:00:33.408604 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:00:33.424545 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:00:33.465076 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 14:00:33.473162 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:00:33.481210 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:00:33.587117 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 14:00:33.586978 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:00:33.591244 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:00:33.605244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:00:33.611274 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:00:33.615007 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 14:00:33.615167 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:00:33.615223 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:00:33.640092 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247)
Jan 30 14:00:33.646743 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:33.646802 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:00:33.648425 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:00:33.653228 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:00:33.660406 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:00:33.665367 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:00:33.667741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:00:33.970731 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:00:33.979965 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:00:33.988380 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:00:33.997030 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:00:34.023714 systemd-networkd[1196]: eth0: Gained IPv6LL
Jan 30 14:00:34.212461 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:00:34.224280 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:00:34.242454 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:00:34.256344 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:00:34.263095 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:34.304139 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:00:34.310363 ignition[1360]: INFO : Ignition 2.19.0
Jan 30 14:00:34.313376 ignition[1360]: INFO : Stage: mount
Jan 30 14:00:34.315858 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:34.315858 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:34.315858 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:34.323169 ignition[1360]: INFO : PUT result: OK
Jan 30 14:00:34.326668 ignition[1360]: INFO : mount: mount passed
Jan 30 14:00:34.329282 ignition[1360]: INFO : Ignition finished successfully
Jan 30 14:00:34.333032 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:00:34.341291 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:00:34.593433 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:00:34.628889 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1371)
Jan 30 14:00:34.628952 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:00:34.628979 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:00:34.631596 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:00:34.637285 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:00:34.639897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:00:34.681701 ignition[1388]: INFO : Ignition 2.19.0
Jan 30 14:00:34.681701 ignition[1388]: INFO : Stage: files
Jan 30 14:00:34.685925 ignition[1388]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:34.685925 ignition[1388]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:34.685925 ignition[1388]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:34.685925 ignition[1388]: INFO : PUT result: OK
Jan 30 14:00:34.696809 ignition[1388]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:00:34.700010 ignition[1388]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:00:34.700010 ignition[1388]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:00:34.732855 ignition[1388]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:00:34.735691 ignition[1388]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:00:34.738381 ignition[1388]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:00:34.738242 unknown[1388]: wrote ssh authorized keys file for user: core
Jan 30 14:00:34.746308 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:00:34.746308 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 14:00:34.845849 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 14:00:35.046461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:00:35.050065 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 14:00:35.050065 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 30 14:00:35.521218 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 14:00:35.631149 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 14:00:35.636075 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 30 14:00:36.096762 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 14:00:36.434168 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 14:00:36.438130 ignition[1388]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 14:00:36.441118 ignition[1388]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:00:36.441118 ignition[1388]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:00:36.441118 ignition[1388]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 14:00:36.441118 ignition[1388]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:00:36.441118 ignition[1388]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:00:36.441118 ignition[1388]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:00:36.441118 ignition[1388]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:00:36.441118 ignition[1388]: INFO : files: files passed
Jan 30 14:00:36.441118 ignition[1388]: INFO : Ignition finished successfully
Jan 30 14:00:36.466626 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:00:36.478462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:00:36.489384 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:00:36.500909 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:00:36.503290 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:00:36.520376 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:00:36.520376 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:00:36.526473 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:00:36.532290 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:00:36.536830 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:00:36.553964 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:00:36.600612 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 14:00:36.601029 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 14:00:36.607464 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 14:00:36.609423 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 14:00:36.611368 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 14:00:36.628693 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 14:00:36.655124 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:00:36.666511 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 14:00:36.693487 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:00:36.696406 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:00:36.700211 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:00:36.705145 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:00:36.707110 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:00:36.711667 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:00:36.713796 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:00:36.718762 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:00:36.720946 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:00:36.723910 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:00:36.729894 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:00:36.732436 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:00:36.737837 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:00:36.743913 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:00:36.760776 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:00:36.764007 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:00:36.764254 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:00:36.766848 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:00:36.769147 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:00:36.772121 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:00:36.776096 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:00:36.778507 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:00:36.778721 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:00:36.783367 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:00:36.784190 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:00:36.789013 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:00:36.789935 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:00:36.814196 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:00:36.819121 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:00:36.822611 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:00:36.827410 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:00:36.832678 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:00:36.833560 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:00:36.853680 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:00:36.859316 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:00:36.875135 ignition[1441]: INFO : Ignition 2.19.0
Jan 30 14:00:36.875135 ignition[1441]: INFO : Stage: umount
Jan 30 14:00:36.882191 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:36.882191 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:00:36.882191 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:00:36.875840 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:00:36.895527 ignition[1441]: INFO : PUT result: OK
Jan 30 14:00:36.886820 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:00:36.887031 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:00:36.902891 ignition[1441]: INFO : umount: umount passed
Jan 30 14:00:36.904575 ignition[1441]: INFO : Ignition finished successfully
Jan 30 14:00:36.908551 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:00:36.910882 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:00:36.914694 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:00:36.914787 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:00:36.916746 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:00:36.916827 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:00:36.918688 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 14:00:36.918763 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 14:00:36.920648 systemd[1]: Stopped target network.target - Network.
Jan 30 14:00:36.922312 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:00:36.922390 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:00:36.924592 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:00:36.926948 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:00:36.930343 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:00:36.946006 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:00:36.961133 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:00:36.965976 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:00:36.970784 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:00:36.972643 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:00:36.972712 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:00:36.974557 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:00:36.974636 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:00:36.976461 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:00:36.976536 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:00:36.978433 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:00:36.978506 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:00:36.980648 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:00:36.982620 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:00:36.996106 systemd-networkd[1196]: eth0: DHCPv6 lease lost
Jan 30 14:00:37.005509 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:00:37.009129 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:00:37.017004 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:00:37.019269 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:00:37.025480 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:00:37.025573 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:00:37.038230 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:00:37.043272 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:00:37.043396 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:00:37.045846 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:00:37.045929 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:00:37.048325 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:00:37.048401 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:00:37.050629 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:00:37.050704 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:00:37.053348 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:00:37.090710 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:00:37.091202 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:00:37.098032 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:00:37.098209 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:00:37.101726 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:00:37.101805 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:00:37.105548 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:00:37.105639 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:00:37.108255 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:00:37.108340 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:00:37.126437 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:00:37.126538 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:00:37.138382 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:00:37.145436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:00:37.145570 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:00:37.149781 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 14:00:37.149875 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:00:37.152681 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:00:37.152757 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:00:37.159235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:00:37.159310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:00:37.176740 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:00:37.177311 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:00:37.220689 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:00:37.221267 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:00:37.225592 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:00:37.244470 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:00:37.261558 systemd[1]: Switching root.
Jan 30 14:00:37.297812 systemd-journald[251]: Journal stopped
Jan 30 14:00:39.308831 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:00:39.308975 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 14:00:39.309029 kernel: SELinux: policy capability open_perms=1
Jan 30 14:00:39.309102 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 14:00:39.309138 kernel: SELinux: policy capability always_check_network=0
Jan 30 14:00:39.309168 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 14:00:39.309204 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 14:00:39.309235 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 14:00:39.309264 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 14:00:39.309294 kernel: audit: type=1403 audit(1738245637.761:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 14:00:39.309335 systemd[1]: Successfully loaded SELinux policy in 48.391ms.
Jan 30 14:00:39.309380 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.370ms.
Jan 30 14:00:39.309415 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:00:39.309448 systemd[1]: Detected virtualization amazon.
Jan 30 14:00:39.309481 systemd[1]: Detected architecture arm64.
Jan 30 14:00:39.309515 systemd[1]: Detected first boot.
Jan 30 14:00:39.309549 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:00:39.309581 zram_generator::config[1483]: No configuration found.
Jan 30 14:00:39.309616 systemd[1]: Populated /etc with preset unit settings.
Jan 30 14:00:39.309649 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 14:00:39.309679 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 14:00:39.309710 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:00:39.309743 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 14:00:39.309781 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 14:00:39.309813 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 14:00:39.309846 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 14:00:39.309876 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 14:00:39.309908 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 14:00:39.309940 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 14:00:39.309972 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 14:00:39.310013 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:00:39.310043 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:00:39.310101 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 14:00:39.310135 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 14:00:39.310168 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 14:00:39.310203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:00:39.310232 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 14:00:39.310261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:00:39.310293 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 14:00:39.310334 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 14:00:39.310364 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:00:39.310398 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 14:00:39.310431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:00:39.310462 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:00:39.310491 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:00:39.310525 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:00:39.310554 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:00:39.310583 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:00:39.310616 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:00:39.310680 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:00:39.310714 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:00:39.310745 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:00:39.310776 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:00:39.310806 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:00:39.310835 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:00:39.310867 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:00:39.310897 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:00:39.310926 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:00:39.310962 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:00:39.310995 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:00:39.311025 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:00:39.311090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:00:39.311125 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:00:39.311156 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:00:39.311188 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:00:39.311218 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:00:39.311252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:00:39.311285 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:00:39.311314 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:00:39.311344 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:00:39.311376 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 14:00:39.311421 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 14:00:39.311458 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 14:00:39.311491 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 14:00:39.311521 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:00:39.311554 kernel: loop: module loaded
Jan 30 14:00:39.311585 kernel: fuse: init (API version 7.39)
Jan 30 14:00:39.311615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:00:39.311645 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 14:00:39.311674 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 14:00:39.311704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:00:39.311735 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 14:00:39.311765 systemd[1]: Stopped verity-setup.service.
Jan 30 14:00:39.311796 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 14:00:39.311830 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 14:00:39.311860 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 14:00:39.311892 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 14:00:39.311922 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 14:00:39.311957 kernel: ACPI: bus type drm_connector registered
Jan 30 14:00:39.311986 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 14:00:39.312019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:00:39.314068 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 14:00:39.314123 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 14:00:39.314154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:00:39.314183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:00:39.314215 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:00:39.314245 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:00:39.314283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:00:39.314313 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:00:39.314342 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 14:00:39.314375 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 14:00:39.314412 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:00:39.314443 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:00:39.314479 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:00:39.314516 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 14:00:39.314605 systemd-journald[1569]: Collecting audit messages is disabled.
Jan 30 14:00:39.314664 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 14:00:39.314702 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 14:00:39.314730 systemd-journald[1569]: Journal started
Jan 30 14:00:39.314778 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec26d31a6f7a78cc0da0a3c28ce9769c) is 8.0M, max 75.3M, 67.3M free.
Jan 30 14:00:38.732189 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 14:00:38.759768 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 30 14:00:38.760584 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 14:00:39.331098 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 14:00:39.347776 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 14:00:39.352023 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:00:39.352121 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:00:39.361116 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 14:00:39.377100 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 14:00:39.393089 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 14:00:39.399115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:00:39.412906 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 14:00:39.419888 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:00:39.426624 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 14:00:39.432257 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:00:39.444115 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:00:39.462086 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 14:00:39.482308 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:00:39.490419 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:00:39.493148 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 14:00:39.495734 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 14:00:39.511892 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 14:00:39.515278 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 14:00:39.561154 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 14:00:39.566523 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 14:00:39.575079 kernel: loop0: detected capacity change from 0 to 52536
Jan 30 14:00:39.587396 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 14:00:39.601860 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 14:00:39.640109 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 14:00:39.644756 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec26d31a6f7a78cc0da0a3c28ce9769c is 120.109ms for 917 entries.
Jan 30 14:00:39.644756 systemd-journald[1569]: System Journal (/var/log/journal/ec26d31a6f7a78cc0da0a3c28ce9769c) is 8.0M, max 195.6M, 187.6M free.
Jan 30 14:00:39.780572 systemd-journald[1569]: Received client request to flush runtime journal.
Jan 30 14:00:39.780675 kernel: loop1: detected capacity change from 0 to 189592
Jan 30 14:00:39.672277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:00:39.684806 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 14:00:39.686671 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 14:00:39.695122 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Jan 30 14:00:39.695161 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Jan 30 14:00:39.695285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:00:39.718652 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 14:00:39.738736 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:00:39.755377 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 14:00:39.778258 udevadm[1628]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 14:00:39.787323 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 14:00:39.837839 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 14:00:39.848092 kernel: loop2: detected capacity change from 0 to 114328
Jan 30 14:00:39.866232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:00:39.909107 kernel: loop3: detected capacity change from 0 to 114432
Jan 30 14:00:39.916250 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Jan 30 14:00:39.916903 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Jan 30 14:00:39.931265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:00:39.957091 kernel: loop4: detected capacity change from 0 to 52536
Jan 30 14:00:39.989746 kernel: loop5: detected capacity change from 0 to 189592
Jan 30 14:00:40.023252 kernel: loop6: detected capacity change from 0 to 114328
Jan 30 14:00:40.048089 kernel: loop7: detected capacity change from 0 to 114432
Jan 30 14:00:40.075203 (sd-merge)[1644]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 30 14:00:40.076306 (sd-merge)[1644]: Merged extensions into '/usr'.
Jan 30 14:00:40.089456 systemd[1]: Reloading requested from client PID 1595 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 14:00:40.089489 systemd[1]: Reloading...
Jan 30 14:00:40.330598 zram_generator::config[1670]: No configuration found.
Jan 30 14:00:40.397954 ldconfig[1591]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 14:00:40.618263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:00:40.736422 systemd[1]: Reloading finished in 645 ms.
Jan 30 14:00:40.776913 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 14:00:40.786572 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 14:00:40.802512 systemd[1]: Starting ensure-sysext.service...
Jan 30 14:00:40.813428 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:00:40.821559 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 14:00:40.836407 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:00:40.842961 systemd[1]: Reloading requested from client PID 1722 ('systemctl') (unit ensure-sysext.service)...
Jan 30 14:00:40.842990 systemd[1]: Reloading...
Jan 30 14:00:40.870772 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 14:00:40.871504 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 14:00:40.875451 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 14:00:40.875996 systemd-tmpfiles[1723]: ACLs are not supported, ignoring.
Jan 30 14:00:40.878262 systemd-tmpfiles[1723]: ACLs are not supported, ignoring.
Jan 30 14:00:40.889470 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:00:40.889495 systemd-tmpfiles[1723]: Skipping /boot
Jan 30 14:00:40.917786 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:00:40.917817 systemd-tmpfiles[1723]: Skipping /boot
Jan 30 14:00:40.943620 systemd-udevd[1726]: Using default interface naming scheme 'v255'.
Jan 30 14:00:41.126100 zram_generator::config[1775]: No configuration found.
Jan 30 14:00:41.183422 (udev-worker)[1777]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 14:00:41.457815 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1777)
Jan 30 14:00:41.478894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:00:41.630103 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 14:00:41.630379 systemd[1]: Reloading finished in 786 ms.
Jan 30 14:00:41.658416 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:00:41.663149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:00:41.717226 systemd[1]: Finished ensure-sysext.service.
Jan 30 14:00:41.730147 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 14:00:41.758002 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 14:00:41.767443 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 14:00:41.782524 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 14:00:41.785652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:00:41.788373 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 14:00:41.802454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:00:41.815457 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:00:41.822385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:00:41.827905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:00:41.831102 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:00:41.841983 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 14:00:41.848411 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 14:00:41.866618 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:00:41.874391 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:00:41.877695 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 14:00:41.884267 lvm[1922]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:00:41.888381 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 14:00:41.894425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:00:41.898607 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:00:41.898970 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:00:41.903915 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:00:41.904969 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:00:41.945269 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 14:00:41.952583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:00:41.955160 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:00:41.958652 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:00:41.961362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:00:41.968000 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:00:41.968212 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:00:42.002148 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 14:00:42.014438 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 14:00:42.030836 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 14:00:42.056244 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 14:00:42.071022 augenrules[1958]: No rules
Jan 30 14:00:42.071379 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 14:00:42.079673 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 14:00:42.082558 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 14:00:42.090849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:00:42.102545 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 14:00:42.103311 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 14:00:42.122226 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 14:00:42.131096 lvm[1967]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:00:42.160735 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 14:00:42.167266 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 14:00:42.236734 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:00:42.278887 systemd-networkd[1935]: lo: Link UP
Jan 30 14:00:42.278908 systemd-networkd[1935]: lo: Gained carrier
Jan 30 14:00:42.281651 systemd-networkd[1935]: Enumeration completed
Jan 30 14:00:42.281835 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:00:42.284010 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:00:42.284018 systemd-networkd[1935]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:00:42.286978 systemd-networkd[1935]: eth0: Link UP Jan 30 14:00:42.287320 systemd-networkd[1935]: eth0: Gained carrier Jan 30 14:00:42.287354 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:00:42.293741 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:00:42.295583 systemd-networkd[1935]: eth0: DHCPv4 address 172.31.25.125/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 14:00:42.317582 systemd-resolved[1936]: Positive Trust Anchors: Jan 30 14:00:42.317621 systemd-resolved[1936]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:00:42.317683 systemd-resolved[1936]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:00:42.325857 systemd-resolved[1936]: Defaulting to hostname 'linux'. Jan 30 14:00:42.328923 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:00:42.331107 systemd[1]: Reached target network.target - Network. Jan 30 14:00:42.332770 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:00:42.335030 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:00:42.337211 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
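The DHCPv4 lease and the resolved trust-anchor list above are consistent with each other: the leased address sits in RFC 1918 private space, whose reverse zones (`16.172.in-addr.arpa` through `31.172.in-addr.arpa`) systemd-resolved lists as negative trust anchors. A small stdlib check of the numbers from the log:

```python
import ipaddress

# Lease from the log: 172.31.25.125/20, gateway 172.31.16.1
iface = ipaddress.ip_interface("172.31.25.125/20")
gw = ipaddress.ip_address("172.31.16.1")

print(iface.network)        # 172.31.16.0/20 -- the /20 containing both
print(gw in iface.network)  # True: the gateway is on-link
print(iface.ip.is_private)  # True: 172.16.0.0/12 (RFC 1918) space
```

This is why the gateway `172.31.16.1` is reachable without a further hop, and why no DNSSEC validation is attempted for the instance's own reverse zone.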
Jan 30 14:00:42.339560 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:00:42.342183 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:00:42.344467 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:00:42.346820 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:00:42.349121 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:00:42.349182 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:00:42.350934 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:00:42.354142 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:00:42.358915 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:00:42.369516 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:00:42.372603 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:00:42.374962 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:00:42.376993 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:00:42.378744 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:00:42.378800 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:00:42.381129 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:00:42.388417 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:00:42.394320 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:00:42.402374 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
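Note that `docker.socket` and `sshd.socket` start listening here, well before the corresponding services run: this is systemd socket activation, where systemd holds the listening socket and starts the service on the first connection. An illustrative unit fragment (a sketch of the general shape, not the actual Flatcar `docker.socket`):

```ini
[Unit]
Description=Docker Socket for the API

[Socket]
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
```

This lets `sockets.target` be reached early in boot while the heavier daemons are deferred.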
Jan 30 14:00:42.409636 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:00:42.410599 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:00:42.422497 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:00:42.430408 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 14:00:42.436175 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:00:42.442473 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 14:00:42.450464 jq[1987]: false Jan 30 14:00:42.448425 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:00:42.462409 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:00:42.488461 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:00:42.492241 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:00:42.495202 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:00:42.500407 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:00:42.504165 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:00:42.509862 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:00:42.510224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:00:42.544258 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 30 14:00:42.550247 extend-filesystems[1988]: Found loop4 Jan 30 14:00:42.550247 extend-filesystems[1988]: Found loop5 Jan 30 14:00:42.550247 extend-filesystems[1988]: Found loop6 Jan 30 14:00:42.550247 extend-filesystems[1988]: Found loop7 Jan 30 14:00:42.550247 extend-filesystems[1988]: Found nvme0n1 Jan 30 14:00:42.550247 extend-filesystems[1988]: Found nvme0n1p1 Jan 30 14:00:42.550247 extend-filesystems[1988]: Found nvme0n1p2 Jan 30 14:00:42.550247 extend-filesystems[1988]: Found nvme0n1p3 Jan 30 14:00:42.550247 extend-filesystems[1988]: Found usr Jan 30 14:00:42.550247 extend-filesystems[1988]: Found nvme0n1p4 Jan 30 14:00:42.550247 extend-filesystems[1988]: Found nvme0n1p6 Jan 30 14:00:42.617330 extend-filesystems[1988]: Found nvme0n1p7 Jan 30 14:00:42.617330 extend-filesystems[1988]: Found nvme0n1p9 Jan 30 14:00:42.617330 extend-filesystems[1988]: Checking size of /dev/nvme0n1p9 Jan 30 14:00:42.590993 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:00:42.591434 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:00:42.646168 jq[2000]: true Jan 30 14:00:42.676879 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:00:42.679207 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: ---------------------------------------------------- Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: corporation. Support and training for ntp-4 are Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: available at https://www.nwtime.org/support Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: ---------------------------------------------------- Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: proto: precision = 0.096 usec (-23) Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: basedate set to 2025-01-17 Jan 30 14:00:42.690802 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: gps base set to 2025-01-19 (week 2350) Jan 30 14:00:42.679270 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:00:42.679292 ntpd[1990]: ---------------------------------------------------- Jan 30 14:00:42.679312 ntpd[1990]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:00:42.679331 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:00:42.679351 ntpd[1990]: corporation. Support and training for ntp-4 are Jan 30 14:00:42.679371 ntpd[1990]: available at https://www.nwtime.org/support Jan 30 14:00:42.679390 ntpd[1990]: ---------------------------------------------------- Jan 30 14:00:42.715540 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:00:42.715540 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:00:42.697364 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:00:42.716123 tar[2007]: linux-arm64/helm Jan 30 14:00:42.685777 ntpd[1990]: proto: precision = 0.096 usec (-23) Jan 30 14:00:42.705202 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 30 14:00:42.686228 ntpd[1990]: basedate set to 2025-01-17 Jan 30 14:00:42.705252 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:00:42.686255 ntpd[1990]: gps base set to 2025-01-19 (week 2350) Jan 30 14:00:42.707690 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:00:42.697012 dbus-daemon[1986]: [system] SELinux support is enabled Jan 30 14:00:42.735836 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:00:42.735836 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: Listen normally on 3 eth0 172.31.25.125:123 Jan 30 14:00:42.735836 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: Listen normally on 4 lo [::1]:123 Jan 30 14:00:42.735836 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: bind(21) AF_INET6 fe80::4f6:80ff:fe25:845b%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 14:00:42.735836 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: unable to create socket on eth0 (5) for fe80::4f6:80ff:fe25:845b%2#123 Jan 30 14:00:42.735836 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: failed to init interface for address fe80::4f6:80ff:fe25:845b%2 Jan 30 14:00:42.735836 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: Listening on routing socket on fd #21 for interface updates Jan 30 14:00:42.707729 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
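The repeated ntpd failures to bind `fe80::4f6:80ff:fe25:845b%2` are explained by the address class: it is an IPv6 link-local address (only meaningful with the `%2` zone index), and at this point in boot it was most likely still tentative (undergoing duplicate address detection), so `bind()` returns "Cannot assign requested address". ntpd retries via its routing-socket listener. A quick stdlib check of the address class:

```python
import ipaddress

# The address ntpd cannot bind, without its %2 zone index
addr = ipaddress.ip_address("fe80::4f6:80ff:fe25:845b")
print(addr.is_link_local)  # True: fe80::/10
print(addr.is_global)      # False
```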
Jan 30 14:00:42.736306 extend-filesystems[1988]: Resized partition /dev/nvme0n1p9 Jan 30 14:00:42.711600 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:00:42.765323 extend-filesystems[2033]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:00:42.832211 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 14:00:42.711688 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:00:42.802368 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 14:00:42.832519 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:00:42.832519 ntpd[1990]: 30 Jan 14:00:42 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:00:42.733781 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:00:42.805905 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:00:42.733853 ntpd[1990]: Listen normally on 3 eth0 172.31.25.125:123 Jan 30 14:00:42.808140 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 14:00:42.733922 ntpd[1990]: Listen normally on 4 lo [::1]:123 Jan 30 14:00:42.734004 ntpd[1990]: bind(21) AF_INET6 fe80::4f6:80ff:fe25:845b%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 14:00:42.734044 ntpd[1990]: unable to create socket on eth0 (5) for fe80::4f6:80ff:fe25:845b%2#123 Jan 30 14:00:42.734100 ntpd[1990]: failed to init interface for address fe80::4f6:80ff:fe25:845b%2 Jan 30 14:00:42.734159 ntpd[1990]: Listening on routing socket on fd #21 for interface updates Jan 30 14:00:42.734705 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1935 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 14:00:42.787824 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:00:42.787872 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:00:42.872031 update_engine[1999]: I20250130 14:00:42.871857 1999 main.cc:92] Flatcar Update Engine starting Jan 30 14:00:42.878966 jq[2025]: true Jan 30 14:00:42.903092 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 30 14:00:42.898840 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:00:42.903268 update_engine[1999]: I20250130 14:00:42.897648 1999 update_check_scheduler.cc:74] Next update check in 11m11s Jan 30 14:00:42.904446 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:00:42.919279 extend-filesystems[2033]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 30 14:00:42.919279 extend-filesystems[2033]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 14:00:42.919279 extend-filesystems[2033]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
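The online resize reported by resize2fs grows the root filesystem from 553472 to 1489915 blocks of 4 KiB ("on-line resizing required" because `/dev/nvme0n1p9` is mounted at `/`). The arithmetic behind those block counts:

```python
# ext4 resize on /dev/nvme0n1p9: 553472 -> 1489915 blocks, 4 KiB each.
# (On a live system this is what `resize2fs /dev/nvme0n1p9` performs.)
BLOCK = 4096
old_bytes = 553472 * BLOCK
new_bytes = 1489915 * BLOCK

print(round(old_bytes / 2**30, 2))  # ~2.11 GiB before
print(round(new_bytes / 2**30, 2))  # ~5.68 GiB after
```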
Jan 30 14:00:42.933364 extend-filesystems[1988]: Resized filesystem in /dev/nvme0n1p9 Jan 30 14:00:42.923667 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:00:42.926250 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:00:42.936165 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 14:00:42.965578 coreos-metadata[1985]: Jan 30 14:00:42.965 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 14:00:42.968403 coreos-metadata[1985]: Jan 30 14:00:42.968 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 14:00:42.974127 coreos-metadata[1985]: Jan 30 14:00:42.972 INFO Fetch successful Jan 30 14:00:42.974127 coreos-metadata[1985]: Jan 30 14:00:42.972 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 14:00:42.974395 coreos-metadata[1985]: Jan 30 14:00:42.974 INFO Fetch successful Jan 30 14:00:42.974395 coreos-metadata[1985]: Jan 30 14:00:42.974 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 14:00:42.974800 coreos-metadata[1985]: Jan 30 14:00:42.974 INFO Fetch successful Jan 30 14:00:42.974800 coreos-metadata[1985]: Jan 30 14:00:42.974 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 14:00:42.978326 coreos-metadata[1985]: Jan 30 14:00:42.978 INFO Fetch successful Jan 30 14:00:42.978326 coreos-metadata[1985]: Jan 30 14:00:42.978 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 14:00:42.981990 systemd-logind[1996]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 14:00:42.982828 coreos-metadata[1985]: Jan 30 14:00:42.982 INFO Fetch failed with 404: resource not found Jan 30 14:00:42.982828 coreos-metadata[1985]: Jan 30 14:00:42.982 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 14:00:42.982974 coreos-metadata[1985]: Jan 30 
14:00:42.982 INFO Fetch successful Jan 30 14:00:42.982974 coreos-metadata[1985]: Jan 30 14:00:42.982 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 14:00:42.984153 systemd-logind[1996]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 30 14:00:42.984663 coreos-metadata[1985]: Jan 30 14:00:42.984 INFO Fetch successful Jan 30 14:00:42.984663 coreos-metadata[1985]: Jan 30 14:00:42.984 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 30 14:00:42.986078 systemd-logind[1996]: New seat seat0. Jan 30 14:00:42.989039 coreos-metadata[1985]: Jan 30 14:00:42.987 INFO Fetch successful Jan 30 14:00:42.989039 coreos-metadata[1985]: Jan 30 14:00:42.987 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 30 14:00:42.989490 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:00:42.992972 coreos-metadata[1985]: Jan 30 14:00:42.992 INFO Fetch successful Jan 30 14:00:42.993091 coreos-metadata[1985]: Jan 30 14:00:42.993 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 30 14:00:42.997752 coreos-metadata[1985]: Jan 30 14:00:42.997 INFO Fetch successful Jan 30 14:00:43.117425 bash[2072]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:00:43.126416 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:00:43.140520 systemd[1]: Starting sshkeys.service... Jan 30 14:00:43.144072 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:00:43.147990 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
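The coreos-metadata fetches above follow the EC2 IMDSv2 pattern: a PUT to `/latest/api/token` to obtain a session token, then GETs against versioned metadata paths with the token attached as a header. A minimal stdlib sketch of that flow (the requests only resolve on an EC2 instance, so the network calls are left commented out):

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: request a session token (the "Putting .../latest/api/token" line)
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
# token = urllib.request.urlopen(token_req, timeout=2).read().decode()

# Step 2: fetch a metadata path with the token (one of the "Fetching" lines)
# meta_req = urllib.request.Request(
#     f"{IMDS}/2021-01-03/meta-data/instance-id",
#     headers={"X-aws-ec2-metadata-token": token},
# )
# instance_id = urllib.request.urlopen(meta_req, timeout=2).read().decode()
```

The single 404 on `meta-data/ipv6` is expected behavior for an instance without an IPv6 address assigned; the agent logs it and moves on.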
Jan 30 14:00:43.228109 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1762) Jan 30 14:00:43.229853 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:00:43.236025 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:00:43.388707 containerd[2015]: time="2025-01-30T14:00:43.388506861Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:00:43.589708 locksmithd[2042]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:00:43.621097 containerd[2015]: time="2025-01-30T14:00:43.620111531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.621236 coreos-metadata[2080]: Jan 30 14:00:43.621 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 14:00:43.624895 coreos-metadata[2080]: Jan 30 14:00:43.623 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 30 14:00:43.625253 coreos-metadata[2080]: Jan 30 14:00:43.625 INFO Fetch successful Jan 30 14:00:43.625253 coreos-metadata[2080]: Jan 30 14:00:43.625 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 14:00:43.630488 coreos-metadata[2080]: Jan 30 14:00:43.630 INFO Fetch successful Jan 30 14:00:43.632196 unknown[2080]: wrote ssh authorized keys file for user: core Jan 30 14:00:43.638705 containerd[2015]: time="2025-01-30T14:00:43.633153911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.638705 containerd[2015]: time="2025-01-30T14:00:43.633218831Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:00:43.638705 containerd[2015]: time="2025-01-30T14:00:43.633256391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:00:43.640765 containerd[2015]: time="2025-01-30T14:00:43.640292195Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:00:43.640765 containerd[2015]: time="2025-01-30T14:00:43.640386059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.640765 containerd[2015]: time="2025-01-30T14:00:43.640628843Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.640765 containerd[2015]: time="2025-01-30T14:00:43.640661927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.644834 containerd[2015]: time="2025-01-30T14:00:43.641617859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.644834 containerd[2015]: time="2025-01-30T14:00:43.641694911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 14:00:43.644834 containerd[2015]: time="2025-01-30T14:00:43.644153519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.644834 containerd[2015]: time="2025-01-30T14:00:43.644220743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.648473 containerd[2015]: time="2025-01-30T14:00:43.644751983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.648840 containerd[2015]: time="2025-01-30T14:00:43.648767579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:43.650629 containerd[2015]: time="2025-01-30T14:00:43.650578979Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:43.653452 containerd[2015]: time="2025-01-30T14:00:43.653108891Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:00:43.654084 containerd[2015]: time="2025-01-30T14:00:43.653810651Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:00:43.654084 containerd[2015]: time="2025-01-30T14:00:43.653983103Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:00:43.662145 containerd[2015]: time="2025-01-30T14:00:43.661839887Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:00:43.662145 containerd[2015]: time="2025-01-30T14:00:43.661984931Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 30 14:00:43.662381 containerd[2015]: time="2025-01-30T14:00:43.662337467Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:00:43.662506 containerd[2015]: time="2025-01-30T14:00:43.662476655Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:00:43.665297 containerd[2015]: time="2025-01-30T14:00:43.662596703Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:00:43.665297 containerd[2015]: time="2025-01-30T14:00:43.662896367Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:00:43.667030 containerd[2015]: time="2025-01-30T14:00:43.665822267Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669238955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669316103Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669371387Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669412439Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669472643Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669544463Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669580703Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669637835Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669713807Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.669799 containerd[2015]: time="2025-01-30T14:00:43.669753743Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.670383 containerd[2015]: time="2025-01-30T14:00:43.670312799Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:00:43.670518 containerd[2015]: time="2025-01-30T14:00:43.670487435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.670646 containerd[2015]: time="2025-01-30T14:00:43.670619255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.670776 containerd[2015]: time="2025-01-30T14:00:43.670748567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.671163 containerd[2015]: time="2025-01-30T14:00:43.671115575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673097195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673166519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673199207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673258367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673294127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673354247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673386419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673448723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673481507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673544159Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:00:43.673978 containerd[2015]: time="2025-01-30T14:00:43.673616387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.673649423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.674543339Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.674896307Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.676112243Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.676187555Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.676229771Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.676288559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.676360187Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.676389455Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:00:43.676539 containerd[2015]: time="2025-01-30T14:00:43.676416539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1
Jan 30 14:00:43.678100 containerd[2015]: time="2025-01-30T14:00:43.677631755Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 14:00:43.679926 ntpd[1990]: bind(24) AF_INET6 fe80::4f6:80ff:fe25:845b%2#123 flags 0x11 failed: Cannot assign requested address
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.680124815Z" level=info msg="Connect containerd service"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.680276159Z" level=info msg="using legacy CRI server"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.680298647Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.680524451Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.686603231Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.686847167Z" level=info msg="Start subscribing containerd event"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.686919239Z" level=info msg="Start recovering state"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.687110183Z" level=info msg="Start event monitor"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.687137375Z" level=info msg="Start snapshots syncer"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.687158783Z" level=info msg="Start cni network conf syncer for default"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.687179327Z" level=info msg="Start streaming server"
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.690498899Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.690605951Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 14:00:43.696171 containerd[2015]: time="2025-01-30T14:00:43.695183915Z" level=info msg="containerd successfully booted in 0.321366s"
Jan 30 14:00:43.696773 ntpd[1990]: 30 Jan 14:00:43 ntpd[1990]: bind(24) AF_INET6 fe80::4f6:80ff:fe25:845b%2#123 flags 0x11 failed: Cannot assign requested address
Jan 30 14:00:43.696773 ntpd[1990]: 30 Jan 14:00:43 ntpd[1990]: unable to create socket on eth0 (6) for fe80::4f6:80ff:fe25:845b%2#123
Jan 30 14:00:43.696773 ntpd[1990]: 30 Jan 14:00:43 ntpd[1990]: failed to init interface for address fe80::4f6:80ff:fe25:845b%2
Jan 30 14:00:43.679989 ntpd[1990]: unable to create socket on eth0 (6) for fe80::4f6:80ff:fe25:845b%2#123
Jan 30 14:00:43.680019 ntpd[1990]: failed to init interface for address fe80::4f6:80ff:fe25:845b%2
Jan 30 14:00:43.680882 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 30 14:00:43.684355 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2035 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 30 14:00:43.697979 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 14:00:43.701282 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 30 14:00:43.716117 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 30 14:00:43.721141 update-ssh-keys[2171]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 14:00:43.722847 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 14:00:43.730688 systemd[1]: Finished sshkeys.service.
Jan 30 14:00:43.790985 polkitd[2173]: Started polkitd version 121
Jan 30 14:00:43.824268 polkitd[2173]: Loading rules from directory /etc/polkit-1/rules.d
Jan 30 14:00:43.826184 polkitd[2173]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 30 14:00:43.828089 polkitd[2173]: Finished loading, compiling and executing 2 rules
Jan 30 14:00:43.830024 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 30 14:00:43.831877 systemd[1]: Started polkit.service - Authorization Manager.
Jan 30 14:00:43.836165 polkitd[2173]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 30 14:00:43.870670 systemd-resolved[1936]: System hostname changed to 'ip-172-31-25-125'.
Jan 30 14:00:43.870677 systemd-hostnamed[2035]: Hostname set to (transient)
Jan 30 14:00:43.945197 sshd_keygen[2031]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 14:00:44.000150 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 14:00:44.012401 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 14:00:44.018461 systemd[1]: Started sshd@0-172.31.25.125:22-139.178.89.65:39554.service - OpenSSH per-connection server daemon (139.178.89.65:39554).
Jan 30 14:00:44.056694 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 14:00:44.058162 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 14:00:44.072432 systemd-networkd[1935]: eth0: Gained IPv6LL
Jan 30 14:00:44.072667 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 14:00:44.090038 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 14:00:44.094027 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 14:00:44.109727 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 30 14:00:44.116555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:00:44.124555 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 14:00:44.131761 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 14:00:44.147617 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 14:00:44.161040 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 14:00:44.164544 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 14:00:44.243224 amazon-ssm-agent[2206]: Initializing new seelog logger
Jan 30 14:00:44.243688 amazon-ssm-agent[2206]: New Seelog Logger Creation Complete
Jan 30 14:00:44.243688 amazon-ssm-agent[2206]: 2025/01/30 14:00:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 14:00:44.243688 amazon-ssm-agent[2206]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 14:00:44.245357 amazon-ssm-agent[2206]: 2025/01/30 14:00:44 processing appconfig overrides
Jan 30 14:00:44.250095 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO Proxy environment variables:
Jan 30 14:00:44.250095 amazon-ssm-agent[2206]: 2025/01/30 14:00:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 14:00:44.250095 amazon-ssm-agent[2206]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 14:00:44.251482 amazon-ssm-agent[2206]: 2025/01/30 14:00:44 processing appconfig overrides
Jan 30 14:00:44.251482 amazon-ssm-agent[2206]: 2025/01/30 14:00:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 14:00:44.251482 amazon-ssm-agent[2206]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 14:00:44.251482 amazon-ssm-agent[2206]: 2025/01/30 14:00:44 processing appconfig overrides
Jan 30 14:00:44.258318 amazon-ssm-agent[2206]: 2025/01/30 14:00:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 14:00:44.258318 amazon-ssm-agent[2206]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 14:00:44.258499 amazon-ssm-agent[2206]: 2025/01/30 14:00:44 processing appconfig overrides
Jan 30 14:00:44.266343 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 14:00:44.289183 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 39554 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:00:44.294235 sshd[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:44.321592 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 14:00:44.331520 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 14:00:44.341792 systemd-logind[1996]: New session 1 of user core.
Jan 30 14:00:44.352764 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO https_proxy:
Jan 30 14:00:44.383437 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 14:00:44.400722 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 14:00:44.421274 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 14:00:44.455097 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO http_proxy:
Jan 30 14:00:44.554631 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO no_proxy:
Jan 30 14:00:44.657257 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO Checking if agent identity type OnPrem can be assumed
Jan 30 14:00:44.727573 tar[2007]: linux-arm64/LICENSE
Jan 30 14:00:44.727573 tar[2007]: linux-arm64/README.md
Jan 30 14:00:44.757126 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO Checking if agent identity type EC2 can be assumed
Jan 30 14:00:44.756947 systemd[2227]: Queued start job for default target default.target.
Jan 30 14:00:44.764785 systemd[2227]: Created slice app.slice - User Application Slice.
Jan 30 14:00:44.764896 systemd[2227]: Reached target paths.target - Paths.
Jan 30 14:00:44.764930 systemd[2227]: Reached target timers.target - Timers.
Jan 30 14:00:44.772300 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 14:00:44.782179 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 14:00:44.815323 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 14:00:44.816345 systemd[2227]: Reached target sockets.target - Sockets.
Jan 30 14:00:44.816379 systemd[2227]: Reached target basic.target - Basic System.
Jan 30 14:00:44.816475 systemd[2227]: Reached target default.target - Main User Target.
Jan 30 14:00:44.816544 systemd[2227]: Startup finished in 380ms.
Jan 30 14:00:44.816570 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 14:00:44.827362 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 14:00:44.852685 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO Agent will take identity from EC2
Jan 30 14:00:44.952413 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 14:00:44.997602 systemd[1]: Started sshd@1-172.31.25.125:22-139.178.89.65:44966.service - OpenSSH per-connection server daemon (139.178.89.65:44966).
Jan 30 14:00:45.051477 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 14:00:45.151695 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 14:00:45.206121 sshd[2244]: Accepted publickey for core from 139.178.89.65 port 44966 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:00:45.210003 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:45.222522 systemd-logind[1996]: New session 2 of user core.
Jan 30 14:00:45.227017 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO [amazon-ssm-agent] Starting Core Agent
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO [Registrar] Starting registrar module
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:44 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:45 INFO [EC2Identity] EC2 registration was successful.
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:45 INFO [CredentialRefresher] credentialRefresher has started
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:45 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 30 14:00:45.233409 amazon-ssm-agent[2206]: 2025-01-30 14:00:45 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 30 14:00:45.251457 amazon-ssm-agent[2206]: 2025-01-30 14:00:45 INFO [CredentialRefresher] Next credential rotation will be in 32.0749360968 minutes
Jan 30 14:00:45.358101 sshd[2244]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:45.365803 systemd[1]: sshd@1-172.31.25.125:22-139.178.89.65:44966.service: Deactivated successfully.
Jan 30 14:00:45.371422 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 14:00:45.373141 systemd-logind[1996]: Session 2 logged out. Waiting for processes to exit.
Jan 30 14:00:45.375762 systemd-logind[1996]: Removed session 2.
Jan 30 14:00:45.395804 systemd[1]: Started sshd@2-172.31.25.125:22-139.178.89.65:44974.service - OpenSSH per-connection server daemon (139.178.89.65:44974).
Jan 30 14:00:45.576095 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 44974 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:00:45.578505 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:45.588170 systemd-logind[1996]: New session 3 of user core.
Jan 30 14:00:45.594646 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 14:00:45.727184 sshd[2251]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:45.736443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:00:45.740366 systemd[1]: sshd@2-172.31.25.125:22-139.178.89.65:44974.service: Deactivated successfully.
Jan 30 14:00:45.740847 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:00:45.746040 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 14:00:45.748374 systemd-logind[1996]: Session 3 logged out. Waiting for processes to exit.
Jan 30 14:00:45.752478 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 14:00:45.755758 systemd[1]: Startup finished in 1.159s (kernel) + 8.963s (initrd) + 8.041s (userspace) = 18.163s.
Jan 30 14:00:45.767161 systemd-logind[1996]: Removed session 3.
Jan 30 14:00:46.266905 amazon-ssm-agent[2206]: 2025-01-30 14:00:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 30 14:00:46.367923 amazon-ssm-agent[2206]: 2025-01-30 14:00:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2272) started
Jan 30 14:00:46.468270 amazon-ssm-agent[2206]: 2025-01-30 14:00:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 30 14:00:46.607706 kubelet[2260]: E0130 14:00:46.607583 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:00:46.612042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:00:46.612426 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:00:46.613000 systemd[1]: kubelet.service: Consumed 1.238s CPU time.
Jan 30 14:00:46.679929 ntpd[1990]: Listen normally on 7 eth0 [fe80::4f6:80ff:fe25:845b%2]:123
Jan 30 14:00:46.680406 ntpd[1990]: 30 Jan 14:00:46 ntpd[1990]: Listen normally on 7 eth0 [fe80::4f6:80ff:fe25:845b%2]:123
Jan 30 14:00:50.050698 systemd-resolved[1936]: Clock change detected. Flushing caches.
Jan 30 14:00:56.132123 systemd[1]: Started sshd@3-172.31.25.125:22-139.178.89.65:59194.service - OpenSSH per-connection server daemon (139.178.89.65:59194).
Jan 30 14:00:56.313250 sshd[2285]: Accepted publickey for core from 139.178.89.65 port 59194 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:00:56.315824 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:56.324228 systemd-logind[1996]: New session 4 of user core.
Jan 30 14:00:56.331254 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 14:00:56.457900 sshd[2285]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:56.465108 systemd[1]: sshd@3-172.31.25.125:22-139.178.89.65:59194.service: Deactivated successfully.
Jan 30 14:00:56.469329 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 14:00:56.470962 systemd-logind[1996]: Session 4 logged out. Waiting for processes to exit.
Jan 30 14:00:56.472657 systemd-logind[1996]: Removed session 4.
Jan 30 14:00:56.499497 systemd[1]: Started sshd@4-172.31.25.125:22-139.178.89.65:59200.service - OpenSSH per-connection server daemon (139.178.89.65:59200).
Jan 30 14:00:56.665118 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 59200 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:00:56.667680 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:56.674968 systemd-logind[1996]: New session 5 of user core.
Jan 30 14:00:56.685255 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 14:00:56.802138 sshd[2292]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:56.808156 systemd[1]: sshd@4-172.31.25.125:22-139.178.89.65:59200.service: Deactivated successfully.
Jan 30 14:00:56.811467 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 14:00:56.813914 systemd-logind[1996]: Session 5 logged out. Waiting for processes to exit.
Jan 30 14:00:56.815660 systemd-logind[1996]: Removed session 5.
Jan 30 14:00:56.834174 systemd[1]: Started sshd@5-172.31.25.125:22-139.178.89.65:59204.service - OpenSSH per-connection server daemon (139.178.89.65:59204).
Jan 30 14:00:57.012053 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 59204 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:00:57.014668 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:57.015968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:00:57.023756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:00:57.029374 systemd-logind[1996]: New session 6 of user core.
Jan 30 14:00:57.033084 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 14:00:57.168374 sshd[2299]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:57.174762 systemd[1]: sshd@5-172.31.25.125:22-139.178.89.65:59204.service: Deactivated successfully.
Jan 30 14:00:57.175162 systemd-logind[1996]: Session 6 logged out. Waiting for processes to exit.
Jan 30 14:00:57.182923 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 14:00:57.186669 systemd-logind[1996]: Removed session 6.
Jan 30 14:00:57.213565 systemd[1]: Started sshd@6-172.31.25.125:22-139.178.89.65:59208.service - OpenSSH per-connection server daemon (139.178.89.65:59208).
Jan 30 14:00:57.336888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:00:57.353106 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:00:57.385734 sshd[2309]: Accepted publickey for core from 139.178.89.65 port 59208 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:00:57.389247 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:57.399094 systemd-logind[1996]: New session 7 of user core.
Jan 30 14:00:57.404295 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 14:00:57.430159 kubelet[2316]: E0130 14:00:57.429963 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:00:57.437315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:00:57.437676 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:00:57.523036 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 14:00:57.523650 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 14:00:57.543590 sudo[2325]: pam_unix(sudo:session): session closed for user root
Jan 30 14:00:57.568440 sshd[2309]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:57.574190 systemd-logind[1996]: Session 7 logged out. Waiting for processes to exit.
Jan 30 14:00:57.576373 systemd[1]: sshd@6-172.31.25.125:22-139.178.89.65:59208.service: Deactivated successfully.
Jan 30 14:00:57.579343 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 14:00:57.582728 systemd-logind[1996]: Removed session 7.
Jan 30 14:00:57.604512 systemd[1]: Started sshd@7-172.31.25.125:22-139.178.89.65:59210.service - OpenSSH per-connection server daemon (139.178.89.65:59210).
Jan 30 14:00:57.784279 sshd[2330]: Accepted publickey for core from 139.178.89.65 port 59210 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:00:57.786972 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:57.794574 systemd-logind[1996]: New session 8 of user core.
Jan 30 14:00:57.807244 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 14:00:57.912162 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 14:00:57.912821 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 14:00:57.919118 sudo[2334]: pam_unix(sudo:session): session closed for user root
Jan 30 14:00:57.929308 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 30 14:00:57.929935 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 14:00:57.950954 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 30 14:00:57.970076 auditctl[2337]: No rules
Jan 30 14:00:57.972421 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 14:00:57.972825 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 30 14:00:57.979710 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 14:00:58.025269 augenrules[2355]: No rules
Jan 30 14:00:58.027501 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 14:00:58.031300 sudo[2333]: pam_unix(sudo:session): session closed for user root
Jan 30 14:00:58.054778 sshd[2330]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:58.060646 systemd[1]: sshd@7-172.31.25.125:22-139.178.89.65:59210.service: Deactivated successfully.
Jan 30 14:00:58.063776 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 14:00:58.069377 systemd-logind[1996]: Session 8 logged out. Waiting for processes to exit.
Jan 30 14:00:58.071340 systemd-logind[1996]: Removed session 8.
Jan 30 14:00:58.094508 systemd[1]: Started sshd@8-172.31.25.125:22-139.178.89.65:59224.service - OpenSSH per-connection server daemon (139.178.89.65:59224).
Jan 30 14:00:58.261929 sshd[2363]: Accepted publickey for core from 139.178.89.65 port 59224 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE
Jan 30 14:00:58.264559 sshd[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:58.273100 systemd-logind[1996]: New session 9 of user core.
Jan 30 14:00:58.279256 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 14:00:58.383404 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 14:00:58.384074 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 14:00:58.801777 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 14:00:58.818480 (dockerd)[2381]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 14:00:59.161119 dockerd[2381]: time="2025-01-30T14:00:59.160793233Z" level=info msg="Starting up"
Jan 30 14:00:59.313713 dockerd[2381]: time="2025-01-30T14:00:59.313637666Z" level=info msg="Loading containers: start."
Jan 30 14:00:59.462031 kernel: Initializing XFRM netlink socket
Jan 30 14:00:59.494685 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 14:00:59.576099 systemd-networkd[1935]: docker0: Link UP
Jan 30 14:00:59.600435 dockerd[2381]: time="2025-01-30T14:00:59.600273147Z" level=info msg="Loading containers: done."
Jan 30 14:00:59.624173 dockerd[2381]: time="2025-01-30T14:00:59.624048231Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 14:00:59.624422 dockerd[2381]: time="2025-01-30T14:00:59.624253263Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 30 14:00:59.624485 dockerd[2381]: time="2025-01-30T14:00:59.624466131Z" level=info msg="Daemon has completed initialization"
Jan 30 14:00:59.683706 dockerd[2381]: time="2025-01-30T14:00:59.682682391Z" level=info msg="API listen on /run/docker.sock"
Jan 30 14:00:59.684022 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 14:01:00.806833 containerd[2015]: time="2025-01-30T14:01:00.806654705Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 30 14:01:01.448541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961877709.mount: Deactivated successfully.
Jan 30 14:01:03.716497 containerd[2015]: time="2025-01-30T14:01:03.716437963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:03.719451 containerd[2015]: time="2025-01-30T14:01:03.719344399Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618070"
Jan 30 14:01:03.722026 containerd[2015]: time="2025-01-30T14:01:03.721010899Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:03.731185 containerd[2015]: time="2025-01-30T14:01:03.731112043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:03.733625 containerd[2015]: time="2025-01-30T14:01:03.733568959Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.92685177s"
Jan 30 14:01:03.733805 containerd[2015]: time="2025-01-30T14:01:03.733773175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\""
Jan 30 14:01:03.735510 containerd[2015]: time="2025-01-30T14:01:03.735445183Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 30 14:01:05.996017 containerd[2015]: time="2025-01-30T14:01:05.994243931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:05.997289 containerd[2015]: time="2025-01-30T14:01:05.997244867Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469467"
Jan 30 14:01:06.001633 containerd[2015]: time="2025-01-30T14:01:06.001544587Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:06.012514 containerd[2015]: time="2025-01-30T14:01:06.012451819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:06.014668 containerd[2015]: time="2025-01-30T14:01:06.014605159Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 2.278792212s"
Jan 30 14:01:06.014807 containerd[2015]: time="2025-01-30T14:01:06.014663911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\""
Jan 30 14:01:06.015582 containerd[2015]: time="2025-01-30T14:01:06.015372475Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 30 14:01:07.470913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 14:01:07.483294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:01:07.820459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:01:07.828551 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:01:07.949732 kubelet[2591]: E0130 14:01:07.949638 2591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:01:07.954797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:01:07.956215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:01:08.112459 containerd[2015]: time="2025-01-30T14:01:08.112210149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:08.114504 containerd[2015]: time="2025-01-30T14:01:08.114410565Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024217"
Jan 30 14:01:08.115547 containerd[2015]: time="2025-01-30T14:01:08.115472577Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:08.122522 containerd[2015]: time="2025-01-30T14:01:08.122441397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:08.124804 containerd[2015]: time="2025-01-30T14:01:08.124610409Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 2.109178906s"
Jan 30 14:01:08.124804 containerd[2015]: time="2025-01-30T14:01:08.124668285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\""
Jan 30 14:01:08.125882 containerd[2015]: time="2025-01-30T14:01:08.125516349Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 30 14:01:09.394506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929592441.mount: Deactivated successfully.
Jan 30 14:01:09.940183 containerd[2015]: time="2025-01-30T14:01:09.940097822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:09.941777 containerd[2015]: time="2025-01-30T14:01:09.941691266Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117"
Jan 30 14:01:09.945024 containerd[2015]: time="2025-01-30T14:01:09.942705050Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:09.946757 containerd[2015]: time="2025-01-30T14:01:09.946693970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:09.948360 containerd[2015]: time="2025-01-30T14:01:09.948283082Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.822714749s"
Jan 30 14:01:09.948506 containerd[2015]: time="2025-01-30T14:01:09.948361550Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\""
Jan 30 14:01:09.949587 containerd[2015]: time="2025-01-30T14:01:09.949402682Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 14:01:10.501958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460257895.mount: Deactivated successfully.
Jan 30 14:01:11.515025 containerd[2015]: time="2025-01-30T14:01:11.513066134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:11.516391 containerd[2015]: time="2025-01-30T14:01:11.516311090Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 30 14:01:11.524023 containerd[2015]: time="2025-01-30T14:01:11.522801110Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:11.529201 containerd[2015]: time="2025-01-30T14:01:11.529129574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:11.531894 containerd[2015]: time="2025-01-30T14:01:11.531829022Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.582319396s" Jan 30 14:01:11.532156 containerd[2015]: time="2025-01-30T14:01:11.531890726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 14:01:11.532576 containerd[2015]: time="2025-01-30T14:01:11.532508042Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 14:01:12.035959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount847261781.mount: Deactivated successfully. Jan 30 14:01:12.042962 containerd[2015]: time="2025-01-30T14:01:12.042572653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:12.044172 containerd[2015]: time="2025-01-30T14:01:12.044124433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 30 14:01:12.044931 containerd[2015]: time="2025-01-30T14:01:12.044680117Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:12.050173 containerd[2015]: time="2025-01-30T14:01:12.050076157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:12.051850 containerd[2015]: time="2025-01-30T14:01:12.051649585Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 519.085911ms" Jan 30 
14:01:12.051850 containerd[2015]: time="2025-01-30T14:01:12.051706705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 30 14:01:12.052665 containerd[2015]: time="2025-01-30T14:01:12.052602541Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 14:01:12.656015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063435242.mount: Deactivated successfully. Jan 30 14:01:14.269754 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 14:01:17.256300 containerd[2015]: time="2025-01-30T14:01:17.256232047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:17.258661 containerd[2015]: time="2025-01-30T14:01:17.258606619Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Jan 30 14:01:17.259684 containerd[2015]: time="2025-01-30T14:01:17.259106047Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:17.265629 containerd[2015]: time="2025-01-30T14:01:17.265531615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:17.268250 containerd[2015]: time="2025-01-30T14:01:17.268202071Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 5.215538426s" Jan 30 14:01:17.268868 containerd[2015]: 
time="2025-01-30T14:01:17.268380703Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 30 14:01:17.970912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:01:17.980585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:18.282445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:18.286557 (kubelet)[2737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:01:18.366008 kubelet[2737]: E0130 14:01:18.364286 2737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:01:18.368518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:01:18.369595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:01:24.200570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:24.210511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:24.275351 systemd[1]: Reloading requested from client PID 2752 ('systemctl') (unit session-9.scope)... Jan 30 14:01:24.275385 systemd[1]: Reloading... Jan 30 14:01:24.526042 zram_generator::config[2795]: No configuration found. Jan 30 14:01:24.736620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:01:24.904171 systemd[1]: Reloading finished in 628 ms. 
Jan 30 14:01:24.984899 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:01:24.985126 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:01:24.985643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:24.994928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:25.267029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:25.281575 (kubelet)[2854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:01:25.351690 kubelet[2854]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:01:25.352525 kubelet[2854]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:01:25.352525 kubelet[2854]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 14:01:25.354050 kubelet[2854]: I0130 14:01:25.353064 2854 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:01:27.075015 kubelet[2854]: I0130 14:01:27.074043 2854 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 14:01:27.075015 kubelet[2854]: I0130 14:01:27.074090 2854 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:01:27.075015 kubelet[2854]: I0130 14:01:27.074493 2854 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 14:01:27.111097 kubelet[2854]: E0130 14:01:27.111033 2854 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.125:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:01:27.114312 kubelet[2854]: I0130 14:01:27.114252 2854 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:01:27.128485 kubelet[2854]: E0130 14:01:27.128425 2854 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:01:27.128736 kubelet[2854]: I0130 14:01:27.128712 2854 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:01:27.135739 kubelet[2854]: I0130 14:01:27.135681 2854 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:01:27.139073 kubelet[2854]: I0130 14:01:27.137686 2854 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 14:01:27.139073 kubelet[2854]: I0130 14:01:27.138074 2854 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:01:27.139073 kubelet[2854]: I0130 14:01:27.138117 2854 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} Jan 30 14:01:27.139073 kubelet[2854]: I0130 14:01:27.138620 2854 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:01:27.139448 kubelet[2854]: I0130 14:01:27.138642 2854 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 14:01:27.139448 kubelet[2854]: I0130 14:01:27.138856 2854 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:01:27.142400 kubelet[2854]: I0130 14:01:27.142337 2854 kubelet.go:408] "Attempting to sync node with API server" Jan 30 14:01:27.142560 kubelet[2854]: I0130 14:01:27.142445 2854 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:01:27.142560 kubelet[2854]: I0130 14:01:27.142497 2854 kubelet.go:314] "Adding apiserver pod source" Jan 30 14:01:27.142560 kubelet[2854]: I0130 14:01:27.142521 2854 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:01:27.154333 kubelet[2854]: W0130 14:01:27.154244 2854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-125&limit=500&resourceVersion=0": dial tcp 172.31.25.125:6443: connect: connection refused Jan 30 14:01:27.154494 kubelet[2854]: E0130 14:01:27.154348 2854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-125&limit=500&resourceVersion=0\": dial tcp 172.31.25.125:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:01:27.155019 kubelet[2854]: I0130 14:01:27.154955 2854 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:01:27.158068 kubelet[2854]: I0130 14:01:27.158006 2854 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are 
in static kubelet mode" Jan 30 14:01:27.159253 kubelet[2854]: W0130 14:01:27.159212 2854 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:01:27.160180 kubelet[2854]: W0130 14:01:27.160070 2854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.125:6443: connect: connection refused Jan 30 14:01:27.160180 kubelet[2854]: E0130 14:01:27.160175 2854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.125:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:01:27.162209 kubelet[2854]: I0130 14:01:27.162157 2854 server.go:1269] "Started kubelet" Jan 30 14:01:27.167634 kubelet[2854]: I0130 14:01:27.167572 2854 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:01:27.168588 kubelet[2854]: I0130 14:01:27.168490 2854 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:01:27.169493 kubelet[2854]: I0130 14:01:27.169058 2854 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:01:27.169799 kubelet[2854]: I0130 14:01:27.169762 2854 server.go:460] "Adding debug handlers to kubelet server" Jan 30 14:01:27.171770 kubelet[2854]: E0130 14:01:27.169329 2854 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.125:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.125:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-125.181f7d3e2ba6fdbc default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-125,UID:ip-172-31-25-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-125,},FirstTimestamp:2025-01-30 14:01:27.162109372 +0000 UTC m=+1.873414462,LastTimestamp:2025-01-30 14:01:27.162109372 +0000 UTC m=+1.873414462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-125,}" Jan 30 14:01:27.175482 kubelet[2854]: I0130 14:01:27.175237 2854 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:01:27.177136 kubelet[2854]: E0130 14:01:27.175784 2854 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:01:27.177136 kubelet[2854]: I0130 14:01:27.176073 2854 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:01:27.183027 kubelet[2854]: I0130 14:01:27.182975 2854 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 14:01:27.183397 kubelet[2854]: I0130 14:01:27.183371 2854 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 14:01:27.183584 kubelet[2854]: I0130 14:01:27.183564 2854 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:01:27.184300 kubelet[2854]: E0130 14:01:27.184256 2854 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-125\" not found" Jan 30 14:01:27.184962 kubelet[2854]: E0130 14:01:27.184885 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-125?timeout=10s\": dial tcp 172.31.25.125:6443: connect: 
connection refused" interval="200ms" Jan 30 14:01:27.185688 kubelet[2854]: W0130 14:01:27.185037 2854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.125:6443: connect: connection refused Jan 30 14:01:27.185688 kubelet[2854]: E0130 14:01:27.185168 2854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.125:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:01:27.185688 kubelet[2854]: I0130 14:01:27.185424 2854 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:01:27.185688 kubelet[2854]: I0130 14:01:27.185565 2854 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:01:27.190314 kubelet[2854]: I0130 14:01:27.190280 2854 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:01:27.230709 kubelet[2854]: I0130 14:01:27.230454 2854 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:01:27.230709 kubelet[2854]: I0130 14:01:27.230501 2854 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:01:27.230709 kubelet[2854]: I0130 14:01:27.230537 2854 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:01:27.232214 kubelet[2854]: I0130 14:01:27.232042 2854 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:01:27.234626 kubelet[2854]: I0130 14:01:27.234181 2854 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:01:27.234626 kubelet[2854]: I0130 14:01:27.234220 2854 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:01:27.234626 kubelet[2854]: I0130 14:01:27.234258 2854 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 14:01:27.234626 kubelet[2854]: E0130 14:01:27.234426 2854 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:01:27.237094 kubelet[2854]: I0130 14:01:27.235609 2854 policy_none.go:49] "None policy: Start" Jan 30 14:01:27.237649 kubelet[2854]: I0130 14:01:27.237597 2854 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:01:27.237649 kubelet[2854]: I0130 14:01:27.237649 2854 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:01:27.244844 kubelet[2854]: W0130 14:01:27.244757 2854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.125:6443: connect: connection refused Jan 30 14:01:27.245032 kubelet[2854]: E0130 14:01:27.244841 2854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.125:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:01:27.255863 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:01:27.274839 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:01:27.281923 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 14:01:27.285646 kubelet[2854]: E0130 14:01:27.285587 2854 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-125\" not found" Jan 30 14:01:27.291413 kubelet[2854]: I0130 14:01:27.290592 2854 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:01:27.291413 kubelet[2854]: I0130 14:01:27.290897 2854 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:01:27.291413 kubelet[2854]: I0130 14:01:27.290916 2854 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:01:27.291413 kubelet[2854]: I0130 14:01:27.291258 2854 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:01:27.296722 kubelet[2854]: E0130 14:01:27.296280 2854 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-125\" not found" Jan 30 14:01:27.352666 systemd[1]: Created slice kubepods-burstable-pod9dd01c1bc71107a5f9ebb5d6f5180a45.slice - libcontainer container kubepods-burstable-pod9dd01c1bc71107a5f9ebb5d6f5180a45.slice. Jan 30 14:01:27.372918 systemd[1]: Created slice kubepods-burstable-pod7f8e1ad1f506f680fed276a9a618aaf6.slice - libcontainer container kubepods-burstable-pod7f8e1ad1f506f680fed276a9a618aaf6.slice. Jan 30 14:01:27.383136 systemd[1]: Created slice kubepods-burstable-pod90ec48d293fda880ab28cf54f1004555.slice - libcontainer container kubepods-burstable-pod90ec48d293fda880ab28cf54f1004555.slice. 
Jan 30 14:01:27.386064 kubelet[2854]: I0130 14:01:27.385515 2854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9dd01c1bc71107a5f9ebb5d6f5180a45-ca-certs\") pod \"kube-apiserver-ip-172-31-25-125\" (UID: \"9dd01c1bc71107a5f9ebb5d6f5180a45\") " pod="kube-system/kube-apiserver-ip-172-31-25-125" Jan 30 14:01:27.386064 kubelet[2854]: I0130 14:01:27.385610 2854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dd01c1bc71107a5f9ebb5d6f5180a45-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-125\" (UID: \"9dd01c1bc71107a5f9ebb5d6f5180a45\") " pod="kube-system/kube-apiserver-ip-172-31-25-125" Jan 30 14:01:27.386064 kubelet[2854]: I0130 14:01:27.385684 2854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-125\" (UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:27.386064 kubelet[2854]: I0130 14:01:27.385757 2854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-125\" (UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:27.386064 kubelet[2854]: I0130 14:01:27.385822 2854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-125\" 
(UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:27.386389 kubelet[2854]: I0130 14:01:27.385872 2854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90ec48d293fda880ab28cf54f1004555-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-125\" (UID: \"90ec48d293fda880ab28cf54f1004555\") " pod="kube-system/kube-scheduler-ip-172-31-25-125" Jan 30 14:01:27.386389 kubelet[2854]: I0130 14:01:27.385936 2854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dd01c1bc71107a5f9ebb5d6f5180a45-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-125\" (UID: \"9dd01c1bc71107a5f9ebb5d6f5180a45\") " pod="kube-system/kube-apiserver-ip-172-31-25-125" Jan 30 14:01:27.386389 kubelet[2854]: I0130 14:01:27.386029 2854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-125\" (UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:27.386389 kubelet[2854]: I0130 14:01:27.386124 2854 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-125\" (UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:27.387418 kubelet[2854]: E0130 14:01:27.386514 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-125?timeout=10s\": dial tcp 
172.31.25.125:6443: connect: connection refused" interval="400ms" Jan 30 14:01:27.393806 kubelet[2854]: I0130 14:01:27.393765 2854 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-125" Jan 30 14:01:27.394369 kubelet[2854]: E0130 14:01:27.394318 2854 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.125:6443/api/v1/nodes\": dial tcp 172.31.25.125:6443: connect: connection refused" node="ip-172-31-25-125" Jan 30 14:01:27.596567 kubelet[2854]: I0130 14:01:27.596477 2854 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-125" Jan 30 14:01:27.597011 kubelet[2854]: E0130 14:01:27.596944 2854 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.125:6443/api/v1/nodes\": dial tcp 172.31.25.125:6443: connect: connection refused" node="ip-172-31-25-125" Jan 30 14:01:27.669213 containerd[2015]: time="2025-01-30T14:01:27.668915190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-125,Uid:9dd01c1bc71107a5f9ebb5d6f5180a45,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:27.681277 containerd[2015]: time="2025-01-30T14:01:27.680919198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-125,Uid:7f8e1ad1f506f680fed276a9a618aaf6,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:27.690073 containerd[2015]: time="2025-01-30T14:01:27.689666142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-125,Uid:90ec48d293fda880ab28cf54f1004555,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:27.787460 kubelet[2854]: E0130 14:01:27.787356 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-125?timeout=10s\": dial tcp 172.31.25.125:6443: connect: connection refused" interval="800ms" Jan 30 14:01:27.999429 
kubelet[2854]: I0130 14:01:27.999388 2854 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-125" Jan 30 14:01:28.000026 kubelet[2854]: E0130 14:01:27.999848 2854 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.125:6443/api/v1/nodes\": dial tcp 172.31.25.125:6443: connect: connection refused" node="ip-172-31-25-125" Jan 30 14:01:28.086070 kubelet[2854]: W0130 14:01:28.085928 2854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.125:6443: connect: connection refused Jan 30 14:01:28.086070 kubelet[2854]: E0130 14:01:28.086065 2854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.125:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:01:28.210607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3978455241.mount: Deactivated successfully. 
Jan 30 14:01:28.228693 containerd[2015]: time="2025-01-30T14:01:28.228621761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:01:28.230857 containerd[2015]: time="2025-01-30T14:01:28.230789477Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:01:28.232737 containerd[2015]: time="2025-01-30T14:01:28.232640621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 30 14:01:28.235060 containerd[2015]: time="2025-01-30T14:01:28.234976577Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:01:28.236953 containerd[2015]: time="2025-01-30T14:01:28.236902493Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:01:28.240058 containerd[2015]: time="2025-01-30T14:01:28.239863757Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:01:28.241543 containerd[2015]: time="2025-01-30T14:01:28.241440725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:01:28.246273 containerd[2015]: time="2025-01-30T14:01:28.245997929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:01:28.251336 
containerd[2015]: time="2025-01-30T14:01:28.250940525Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.863167ms" Jan 30 14:01:28.255439 containerd[2015]: time="2025-01-30T14:01:28.255356981Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 586.309575ms" Jan 30 14:01:28.270831 containerd[2015]: time="2025-01-30T14:01:28.270773273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.994607ms" Jan 30 14:01:28.271967 kubelet[2854]: W0130 14:01:28.271847 2854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-125&limit=500&resourceVersion=0": dial tcp 172.31.25.125:6443: connect: connection refused Jan 30 14:01:28.271967 kubelet[2854]: E0130 14:01:28.271951 2854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-125&limit=500&resourceVersion=0\": dial tcp 172.31.25.125:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:01:28.277777 kubelet[2854]: W0130 
14:01:28.277715 2854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.125:6443: connect: connection refused Jan 30 14:01:28.279459 kubelet[2854]: E0130 14:01:28.277789 2854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.125:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:01:28.467720 containerd[2015]: time="2025-01-30T14:01:28.466183122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:28.467720 containerd[2015]: time="2025-01-30T14:01:28.466278642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:28.467720 containerd[2015]: time="2025-01-30T14:01:28.466304166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:28.467720 containerd[2015]: time="2025-01-30T14:01:28.466471038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:28.473915 containerd[2015]: time="2025-01-30T14:01:28.473603946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:28.473915 containerd[2015]: time="2025-01-30T14:01:28.473709774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:28.473915 containerd[2015]: time="2025-01-30T14:01:28.473746938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:28.475249 containerd[2015]: time="2025-01-30T14:01:28.474829758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:28.481431 containerd[2015]: time="2025-01-30T14:01:28.478257270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:28.481884 containerd[2015]: time="2025-01-30T14:01:28.481681818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:28.481884 containerd[2015]: time="2025-01-30T14:01:28.481757910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:28.482606 containerd[2015]: time="2025-01-30T14:01:28.482398302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:28.507609 kubelet[2854]: W0130 14:01:28.505874 2854 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.125:6443: connect: connection refused Jan 30 14:01:28.507609 kubelet[2854]: E0130 14:01:28.506011 2854 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.125:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:01:28.515349 systemd[1]: Started cri-containerd-99083c6d8e82c27b607abd3e8483092d34f1153468bdcf50ca493adecbc6222b.scope - libcontainer container 99083c6d8e82c27b607abd3e8483092d34f1153468bdcf50ca493adecbc6222b. Jan 30 14:01:28.533544 systemd[1]: Started cri-containerd-b877e14dfdbe9ba92855967b4346e64e53e1cecd7ba9ebf9ff2091fa665fc94f.scope - libcontainer container b877e14dfdbe9ba92855967b4346e64e53e1cecd7ba9ebf9ff2091fa665fc94f. Jan 30 14:01:28.549676 systemd[1]: Started cri-containerd-62962888546e14b7d5ae35ac58a5066f9db0b0ed3d2b24f4fc112137e83a008c.scope - libcontainer container 62962888546e14b7d5ae35ac58a5066f9db0b0ed3d2b24f4fc112137e83a008c. 
Jan 30 14:01:28.589591 kubelet[2854]: E0130 14:01:28.589318 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-125?timeout=10s\": dial tcp 172.31.25.125:6443: connect: connection refused" interval="1.6s" Jan 30 14:01:28.634548 containerd[2015]: time="2025-01-30T14:01:28.634492123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-125,Uid:7f8e1ad1f506f680fed276a9a618aaf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"99083c6d8e82c27b607abd3e8483092d34f1153468bdcf50ca493adecbc6222b\"" Jan 30 14:01:28.650868 containerd[2015]: time="2025-01-30T14:01:28.650665723Z" level=info msg="CreateContainer within sandbox \"99083c6d8e82c27b607abd3e8483092d34f1153468bdcf50ca493adecbc6222b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:01:28.671915 containerd[2015]: time="2025-01-30T14:01:28.671745607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-125,Uid:90ec48d293fda880ab28cf54f1004555,Namespace:kube-system,Attempt:0,} returns sandbox id \"b877e14dfdbe9ba92855967b4346e64e53e1cecd7ba9ebf9ff2091fa665fc94f\"" Jan 30 14:01:28.677746 containerd[2015]: time="2025-01-30T14:01:28.677473159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-125,Uid:9dd01c1bc71107a5f9ebb5d6f5180a45,Namespace:kube-system,Attempt:0,} returns sandbox id \"62962888546e14b7d5ae35ac58a5066f9db0b0ed3d2b24f4fc112137e83a008c\"" Jan 30 14:01:28.680769 containerd[2015]: time="2025-01-30T14:01:28.680279239Z" level=info msg="CreateContainer within sandbox \"b877e14dfdbe9ba92855967b4346e64e53e1cecd7ba9ebf9ff2091fa665fc94f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:01:28.687710 containerd[2015]: time="2025-01-30T14:01:28.687656131Z" level=info msg="CreateContainer within sandbox 
\"62962888546e14b7d5ae35ac58a5066f9db0b0ed3d2b24f4fc112137e83a008c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:01:28.705302 containerd[2015]: time="2025-01-30T14:01:28.705223496Z" level=info msg="CreateContainer within sandbox \"99083c6d8e82c27b607abd3e8483092d34f1153468bdcf50ca493adecbc6222b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda\"" Jan 30 14:01:28.706370 containerd[2015]: time="2025-01-30T14:01:28.706308764Z" level=info msg="StartContainer for \"8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda\"" Jan 30 14:01:28.720388 containerd[2015]: time="2025-01-30T14:01:28.720253160Z" level=info msg="CreateContainer within sandbox \"b877e14dfdbe9ba92855967b4346e64e53e1cecd7ba9ebf9ff2091fa665fc94f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53\"" Jan 30 14:01:28.721410 containerd[2015]: time="2025-01-30T14:01:28.721265900Z" level=info msg="StartContainer for \"af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53\"" Jan 30 14:01:28.744630 containerd[2015]: time="2025-01-30T14:01:28.744433808Z" level=info msg="CreateContainer within sandbox \"62962888546e14b7d5ae35ac58a5066f9db0b0ed3d2b24f4fc112137e83a008c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23a4ddfa503fe9771330726249c1a659444f3bba0a045bb42367eaa97298e9e4\"" Jan 30 14:01:28.745501 containerd[2015]: time="2025-01-30T14:01:28.745290704Z" level=info msg="StartContainer for \"23a4ddfa503fe9771330726249c1a659444f3bba0a045bb42367eaa97298e9e4\"" Jan 30 14:01:28.769054 systemd[1]: Started cri-containerd-8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda.scope - libcontainer container 8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda. 
Jan 30 14:01:28.804807 kubelet[2854]: I0130 14:01:28.804579 2854 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-125" Jan 30 14:01:28.806426 kubelet[2854]: E0130 14:01:28.806368 2854 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.125:6443/api/v1/nodes\": dial tcp 172.31.25.125:6443: connect: connection refused" node="ip-172-31-25-125" Jan 30 14:01:28.824587 systemd[1]: Started cri-containerd-23a4ddfa503fe9771330726249c1a659444f3bba0a045bb42367eaa97298e9e4.scope - libcontainer container 23a4ddfa503fe9771330726249c1a659444f3bba0a045bb42367eaa97298e9e4. Jan 30 14:01:28.834758 systemd[1]: Started cri-containerd-af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53.scope - libcontainer container af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53. Jan 30 14:01:28.891583 containerd[2015]: time="2025-01-30T14:01:28.891498260Z" level=info msg="StartContainer for \"8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda\" returns successfully" Jan 30 14:01:28.961494 update_engine[1999]: I20250130 14:01:28.961400 1999 update_attempter.cc:509] Updating boot flags... 
Jan 30 14:01:28.962767 containerd[2015]: time="2025-01-30T14:01:28.961903857Z" level=info msg="StartContainer for \"23a4ddfa503fe9771330726249c1a659444f3bba0a045bb42367eaa97298e9e4\" returns successfully" Jan 30 14:01:28.989830 containerd[2015]: time="2025-01-30T14:01:28.988347285Z" level=info msg="StartContainer for \"af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53\" returns successfully" Jan 30 14:01:29.082127 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3132) Jan 30 14:01:29.592182 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3136) Jan 30 14:01:30.412034 kubelet[2854]: I0130 14:01:30.409671 2854 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-125" Jan 30 14:01:33.773341 kubelet[2854]: E0130 14:01:33.773272 2854 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-125\" not found" node="ip-172-31-25-125" Jan 30 14:01:33.842700 kubelet[2854]: I0130 14:01:33.842641 2854 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-25-125" Jan 30 14:01:34.156280 kubelet[2854]: I0130 14:01:34.155748 2854 apiserver.go:52] "Watching apiserver" Jan 30 14:01:34.183689 kubelet[2854]: I0130 14:01:34.183643 2854 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 14:01:35.765202 systemd[1]: Reloading requested from client PID 3309 ('systemctl') (unit session-9.scope)... Jan 30 14:01:35.765693 systemd[1]: Reloading... Jan 30 14:01:35.953032 zram_generator::config[3356]: No configuration found. Jan 30 14:01:36.185476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:01:36.385294 systemd[1]: Reloading finished in 618 ms. 
Jan 30 14:01:36.460518 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:36.471162 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:01:36.471559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:36.471640 systemd[1]: kubelet.service: Consumed 2.573s CPU time, 120.1M memory peak, 0B memory swap peak. Jan 30 14:01:36.481610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:01:36.791302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:01:36.801752 (kubelet)[3409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:01:36.906891 kubelet[3409]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:01:36.908044 kubelet[3409]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:01:36.908044 kubelet[3409]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 14:01:36.908044 kubelet[3409]: I0130 14:01:36.907560 3409 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:01:36.920965 kubelet[3409]: I0130 14:01:36.920639 3409 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 14:01:36.922089 kubelet[3409]: I0130 14:01:36.921221 3409 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:01:36.922089 kubelet[3409]: I0130 14:01:36.921697 3409 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 14:01:36.924756 kubelet[3409]: I0130 14:01:36.924715 3409 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:01:36.929094 kubelet[3409]: I0130 14:01:36.929052 3409 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:01:36.933608 sudo[3422]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 14:01:36.934368 sudo[3422]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 14:01:36.937896 kubelet[3409]: E0130 14:01:36.937408 3409 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:01:36.937896 kubelet[3409]: I0130 14:01:36.937459 3409 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:01:36.947799 kubelet[3409]: I0130 14:01:36.945881 3409 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:01:36.949783 kubelet[3409]: I0130 14:01:36.948435 3409 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 14:01:36.949783 kubelet[3409]: I0130 14:01:36.948665 3409 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:01:36.949783 kubelet[3409]: I0130 14:01:36.948726 3409 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} Jan 30 14:01:36.949783 kubelet[3409]: I0130 14:01:36.949250 3409 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:01:36.950279 kubelet[3409]: I0130 14:01:36.949281 3409 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 14:01:36.950279 kubelet[3409]: I0130 14:01:36.949338 3409 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:01:36.950279 kubelet[3409]: I0130 14:01:36.949524 3409 kubelet.go:408] "Attempting to sync node with API server" Jan 30 14:01:36.950279 kubelet[3409]: I0130 14:01:36.949546 3409 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:01:36.950279 kubelet[3409]: I0130 14:01:36.949585 3409 kubelet.go:314] "Adding apiserver pod source" Jan 30 14:01:36.950279 kubelet[3409]: I0130 14:01:36.949607 3409 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:01:36.959391 kubelet[3409]: I0130 14:01:36.959353 3409 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:01:36.960339 kubelet[3409]: I0130 14:01:36.960300 3409 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:01:36.961673 kubelet[3409]: I0130 14:01:36.961643 3409 server.go:1269] "Started kubelet" Jan 30 14:01:36.969980 kubelet[3409]: I0130 14:01:36.969802 3409 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:01:36.983144 kubelet[3409]: I0130 14:01:36.983082 3409 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:01:36.984870 kubelet[3409]: I0130 14:01:36.984838 3409 server.go:460] "Adding debug handlers to kubelet server" Jan 30 14:01:36.989951 kubelet[3409]: I0130 14:01:36.989788 3409 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:01:36.991409 kubelet[3409]: I0130 14:01:36.991378 3409 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:01:36.991882 kubelet[3409]: I0130 14:01:36.991855 3409 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:01:37.000181 kubelet[3409]: I0130 14:01:37.000145 3409 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 14:01:37.000738 kubelet[3409]: E0130 14:01:37.000708 3409 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-125\" not found" Jan 30 14:01:37.005030 kubelet[3409]: I0130 14:01:37.001898 3409 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 14:01:37.005030 kubelet[3409]: I0130 14:01:37.004341 3409 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:01:37.031542 kubelet[3409]: I0130 14:01:37.031505 3409 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:01:37.031952 kubelet[3409]: I0130 14:01:37.031920 3409 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:01:37.046907 kubelet[3409]: I0130 14:01:37.046782 3409 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:01:37.069219 kubelet[3409]: E0130 14:01:37.069153 3409 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:01:37.073966 kubelet[3409]: I0130 14:01:37.072580 3409 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:01:37.088085 kubelet[3409]: I0130 14:01:37.087177 3409 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:01:37.088085 kubelet[3409]: I0130 14:01:37.087236 3409 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:01:37.088085 kubelet[3409]: I0130 14:01:37.087269 3409 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 14:01:37.088085 kubelet[3409]: E0130 14:01:37.087332 3409 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:01:37.187960 kubelet[3409]: E0130 14:01:37.187505 3409 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 14:01:37.188791 kubelet[3409]: I0130 14:01:37.188662 3409 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:01:37.188791 kubelet[3409]: I0130 14:01:37.188706 3409 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:01:37.188791 kubelet[3409]: I0130 14:01:37.188740 3409 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:01:37.189482 kubelet[3409]: I0130 14:01:37.189304 3409 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:01:37.189482 kubelet[3409]: I0130 14:01:37.189332 3409 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:01:37.189482 kubelet[3409]: I0130 14:01:37.189393 3409 policy_none.go:49] "None policy: Start" Jan 30 14:01:37.192460 kubelet[3409]: I0130 14:01:37.192081 3409 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:01:37.192460 kubelet[3409]: I0130 14:01:37.192168 3409 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:01:37.193014 kubelet[3409]: I0130 14:01:37.192824 3409 state_mem.go:75] "Updated machine memory state" Jan 30 14:01:37.207506 kubelet[3409]: I0130 14:01:37.206087 3409 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:01:37.212112 kubelet[3409]: I0130 14:01:37.210957 
3409 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:01:37.212112 kubelet[3409]: I0130 14:01:37.211063 3409 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:01:37.218323 kubelet[3409]: I0130 14:01:37.214532 3409 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:01:37.339484 kubelet[3409]: I0130 14:01:37.338415 3409 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-125" Jan 30 14:01:37.352010 kubelet[3409]: I0130 14:01:37.350267 3409 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-25-125" Jan 30 14:01:37.352010 kubelet[3409]: I0130 14:01:37.350399 3409 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-25-125" Jan 30 14:01:37.416124 kubelet[3409]: I0130 14:01:37.416066 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dd01c1bc71107a5f9ebb5d6f5180a45-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-125\" (UID: \"9dd01c1bc71107a5f9ebb5d6f5180a45\") " pod="kube-system/kube-apiserver-ip-172-31-25-125" Jan 30 14:01:37.416287 kubelet[3409]: I0130 14:01:37.416135 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dd01c1bc71107a5f9ebb5d6f5180a45-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-125\" (UID: \"9dd01c1bc71107a5f9ebb5d6f5180a45\") " pod="kube-system/kube-apiserver-ip-172-31-25-125" Jan 30 14:01:37.416287 kubelet[3409]: I0130 14:01:37.416185 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-125\" (UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " 
pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:37.416287 kubelet[3409]: I0130 14:01:37.416220 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-125\" (UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:37.416287 kubelet[3409]: I0130 14:01:37.416258 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-125\" (UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:37.416518 kubelet[3409]: I0130 14:01:37.416293 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-125\" (UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:37.416518 kubelet[3409]: I0130 14:01:37.416350 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90ec48d293fda880ab28cf54f1004555-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-125\" (UID: \"90ec48d293fda880ab28cf54f1004555\") " pod="kube-system/kube-scheduler-ip-172-31-25-125" Jan 30 14:01:37.416518 kubelet[3409]: I0130 14:01:37.416384 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9dd01c1bc71107a5f9ebb5d6f5180a45-ca-certs\") pod 
\"kube-apiserver-ip-172-31-25-125\" (UID: \"9dd01c1bc71107a5f9ebb5d6f5180a45\") " pod="kube-system/kube-apiserver-ip-172-31-25-125" Jan 30 14:01:37.416518 kubelet[3409]: I0130 14:01:37.416418 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f8e1ad1f506f680fed276a9a618aaf6-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-125\" (UID: \"7f8e1ad1f506f680fed276a9a618aaf6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-125" Jan 30 14:01:37.899797 sudo[3422]: pam_unix(sudo:session): session closed for user root Jan 30 14:01:37.955170 kubelet[3409]: I0130 14:01:37.955105 3409 apiserver.go:52] "Watching apiserver" Jan 30 14:01:38.004320 kubelet[3409]: I0130 14:01:38.004240 3409 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 14:01:38.193109 kubelet[3409]: I0130 14:01:38.192918 3409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-125" podStartSLOduration=1.192894963 podStartE2EDuration="1.192894963s" podCreationTimestamp="2025-01-30 14:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:38.173849055 +0000 UTC m=+1.363407464" watchObservedRunningTime="2025-01-30 14:01:38.192894963 +0000 UTC m=+1.382453372" Jan 30 14:01:38.214379 kubelet[3409]: I0130 14:01:38.214157 3409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-125" podStartSLOduration=1.214131363 podStartE2EDuration="1.214131363s" podCreationTimestamp="2025-01-30 14:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:38.194169327 +0000 UTC m=+1.383727760" watchObservedRunningTime="2025-01-30 
14:01:38.214131363 +0000 UTC m=+1.403689784" Jan 30 14:01:38.214897 kubelet[3409]: I0130 14:01:38.214633 3409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-125" podStartSLOduration=1.214618875 podStartE2EDuration="1.214618875s" podCreationTimestamp="2025-01-30 14:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:38.212573523 +0000 UTC m=+1.402131944" watchObservedRunningTime="2025-01-30 14:01:38.214618875 +0000 UTC m=+1.404177260" Jan 30 14:01:40.458752 sudo[2366]: pam_unix(sudo:session): session closed for user root Jan 30 14:01:40.481960 sshd[2363]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:40.490608 systemd[1]: sshd@8-172.31.25.125:22-139.178.89.65:59224.service: Deactivated successfully. Jan 30 14:01:40.495394 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:01:40.495725 systemd[1]: session-9.scope: Consumed 10.393s CPU time, 152.4M memory peak, 0B memory swap peak. Jan 30 14:01:40.497376 systemd-logind[1996]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:01:40.500465 systemd-logind[1996]: Removed session 9. Jan 30 14:01:42.197506 kubelet[3409]: I0130 14:01:42.197277 3409 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:01:42.198135 containerd[2015]: time="2025-01-30T14:01:42.197741467Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:01:42.198587 kubelet[3409]: I0130 14:01:42.198521 3409 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:01:43.142245 systemd[1]: Created slice kubepods-besteffort-pod56ced701_45ec_40cb_8ffc_ede32cd50783.slice - libcontainer container kubepods-besteffort-pod56ced701_45ec_40cb_8ffc_ede32cd50783.slice. 
Jan 30 14:01:43.155085 kubelet[3409]: I0130 14:01:43.153778 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56ced701-45ec-40cb-8ffc-ede32cd50783-kube-proxy\") pod \"kube-proxy-9wrtq\" (UID: \"56ced701-45ec-40cb-8ffc-ede32cd50783\") " pod="kube-system/kube-proxy-9wrtq" Jan 30 14:01:43.155085 kubelet[3409]: I0130 14:01:43.153834 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56ced701-45ec-40cb-8ffc-ede32cd50783-xtables-lock\") pod \"kube-proxy-9wrtq\" (UID: \"56ced701-45ec-40cb-8ffc-ede32cd50783\") " pod="kube-system/kube-proxy-9wrtq" Jan 30 14:01:43.155085 kubelet[3409]: I0130 14:01:43.153871 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56ced701-45ec-40cb-8ffc-ede32cd50783-lib-modules\") pod \"kube-proxy-9wrtq\" (UID: \"56ced701-45ec-40cb-8ffc-ede32cd50783\") " pod="kube-system/kube-proxy-9wrtq" Jan 30 14:01:43.155085 kubelet[3409]: I0130 14:01:43.153912 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chnk\" (UniqueName: \"kubernetes.io/projected/56ced701-45ec-40cb-8ffc-ede32cd50783-kube-api-access-8chnk\") pod \"kube-proxy-9wrtq\" (UID: \"56ced701-45ec-40cb-8ffc-ede32cd50783\") " pod="kube-system/kube-proxy-9wrtq" Jan 30 14:01:43.175550 systemd[1]: Created slice kubepods-burstable-pode0cd4ef7_b86b_41cd_887c_cfc732c96893.slice - libcontainer container kubepods-burstable-pode0cd4ef7_b86b_41cd_887c_cfc732c96893.slice. 
Jan 30 14:01:43.254955 kubelet[3409]: I0130 14:01:43.254877 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cswd\" (UniqueName: \"kubernetes.io/projected/e0cd4ef7-b86b-41cd-887c-cfc732c96893-kube-api-access-9cswd\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255528 kubelet[3409]: I0130 14:01:43.254980 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-config-path\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255528 kubelet[3409]: I0130 14:01:43.255040 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-host-proc-sys-kernel\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255528 kubelet[3409]: I0130 14:01:43.255099 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-xtables-lock\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255528 kubelet[3409]: I0130 14:01:43.255136 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0cd4ef7-b86b-41cd-887c-cfc732c96893-hubble-tls\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255528 kubelet[3409]: I0130 14:01:43.255170 3409 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cni-path\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255528 kubelet[3409]: I0130 14:01:43.255205 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-lib-modules\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255835 kubelet[3409]: I0130 14:01:43.255281 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-hostproc\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255835 kubelet[3409]: I0130 14:01:43.255319 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-host-proc-sys-net\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255835 kubelet[3409]: I0130 14:01:43.255369 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-run\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255835 kubelet[3409]: I0130 14:01:43.255403 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-bpf-maps\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255835 kubelet[3409]: I0130 14:01:43.255439 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0cd4ef7-b86b-41cd-887c-cfc732c96893-clustermesh-secrets\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.255835 kubelet[3409]: I0130 14:01:43.255474 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-cgroup\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.258672 kubelet[3409]: I0130 14:01:43.255533 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-etc-cni-netd\") pod \"cilium-m5cvm\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " pod="kube-system/cilium-m5cvm" Jan 30 14:01:43.343109 systemd[1]: Created slice kubepods-besteffort-pod9f8d738c_b401_47e2_a084_b97a72c155b3.slice - libcontainer container kubepods-besteffort-pod9f8d738c_b401_47e2_a084_b97a72c155b3.slice. 
Jan 30 14:01:43.357489 kubelet[3409]: I0130 14:01:43.356260 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f8d738c-b401-47e2-a084-b97a72c155b3-cilium-config-path\") pod \"cilium-operator-5d85765b45-4pll2\" (UID: \"9f8d738c-b401-47e2-a084-b97a72c155b3\") " pod="kube-system/cilium-operator-5d85765b45-4pll2" Jan 30 14:01:43.357489 kubelet[3409]: I0130 14:01:43.356476 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vsp\" (UniqueName: \"kubernetes.io/projected/9f8d738c-b401-47e2-a084-b97a72c155b3-kube-api-access-v5vsp\") pod \"cilium-operator-5d85765b45-4pll2\" (UID: \"9f8d738c-b401-47e2-a084-b97a72c155b3\") " pod="kube-system/cilium-operator-5d85765b45-4pll2" Jan 30 14:01:43.463847 containerd[2015]: time="2025-01-30T14:01:43.463262625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wrtq,Uid:56ced701-45ec-40cb-8ffc-ede32cd50783,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:43.487310 containerd[2015]: time="2025-01-30T14:01:43.486045633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m5cvm,Uid:e0cd4ef7-b86b-41cd-887c-cfc732c96893,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:43.571796 containerd[2015]: time="2025-01-30T14:01:43.571502373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:43.571796 containerd[2015]: time="2025-01-30T14:01:43.571628781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:43.571796 containerd[2015]: time="2025-01-30T14:01:43.571659261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:43.572321 containerd[2015]: time="2025-01-30T14:01:43.571836693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:43.593423 containerd[2015]: time="2025-01-30T14:01:43.591978633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:43.593423 containerd[2015]: time="2025-01-30T14:01:43.592680465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:43.593423 containerd[2015]: time="2025-01-30T14:01:43.592726245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:43.593423 containerd[2015]: time="2025-01-30T14:01:43.592899933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:43.641632 systemd[1]: Started cri-containerd-717aaa3fd52a9160607b95041c9cb4032a4985b5163ee8ed8d641a6ba2b7223c.scope - libcontainer container 717aaa3fd52a9160607b95041c9cb4032a4985b5163ee8ed8d641a6ba2b7223c. Jan 30 14:01:43.651581 containerd[2015]: time="2025-01-30T14:01:43.651417646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4pll2,Uid:9f8d738c-b401-47e2-a084-b97a72c155b3,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:43.658464 systemd[1]: Started cri-containerd-b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982.scope - libcontainer container b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982. Jan 30 14:01:43.738346 containerd[2015]: time="2025-01-30T14:01:43.738153130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:43.739216 containerd[2015]: time="2025-01-30T14:01:43.739098238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:43.739795 containerd[2015]: time="2025-01-30T14:01:43.739189738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:43.739795 containerd[2015]: time="2025-01-30T14:01:43.739532710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:43.743290 containerd[2015]: time="2025-01-30T14:01:43.743236414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wrtq,Uid:56ced701-45ec-40cb-8ffc-ede32cd50783,Namespace:kube-system,Attempt:0,} returns sandbox id \"717aaa3fd52a9160607b95041c9cb4032a4985b5163ee8ed8d641a6ba2b7223c\"" Jan 30 14:01:43.755392 containerd[2015]: time="2025-01-30T14:01:43.755334586Z" level=info msg="CreateContainer within sandbox \"717aaa3fd52a9160607b95041c9cb4032a4985b5163ee8ed8d641a6ba2b7223c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:01:43.757501 containerd[2015]: time="2025-01-30T14:01:43.757396138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m5cvm,Uid:e0cd4ef7-b86b-41cd-887c-cfc732c96893,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\"" Jan 30 14:01:43.765334 containerd[2015]: time="2025-01-30T14:01:43.765122746Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 14:01:43.796348 systemd[1]: Started cri-containerd-abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7.scope - libcontainer container 
abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7. Jan 30 14:01:43.805044 containerd[2015]: time="2025-01-30T14:01:43.804638314Z" level=info msg="CreateContainer within sandbox \"717aaa3fd52a9160607b95041c9cb4032a4985b5163ee8ed8d641a6ba2b7223c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d7399697c18855373091773e2d8ae6bb671bf18242effc59d2520fdce1c2aab0\"" Jan 30 14:01:43.808040 containerd[2015]: time="2025-01-30T14:01:43.805687007Z" level=info msg="StartContainer for \"d7399697c18855373091773e2d8ae6bb671bf18242effc59d2520fdce1c2aab0\"" Jan 30 14:01:43.858348 systemd[1]: Started cri-containerd-d7399697c18855373091773e2d8ae6bb671bf18242effc59d2520fdce1c2aab0.scope - libcontainer container d7399697c18855373091773e2d8ae6bb671bf18242effc59d2520fdce1c2aab0. Jan 30 14:01:43.893414 containerd[2015]: time="2025-01-30T14:01:43.893328827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4pll2,Uid:9f8d738c-b401-47e2-a084-b97a72c155b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\"" Jan 30 14:01:43.931109 containerd[2015]: time="2025-01-30T14:01:43.931011623Z" level=info msg="StartContainer for \"d7399697c18855373091773e2d8ae6bb671bf18242effc59d2520fdce1c2aab0\" returns successfully" Jan 30 14:01:44.179760 kubelet[3409]: I0130 14:01:44.179457 3409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9wrtq" podStartSLOduration=1.17943302 podStartE2EDuration="1.17943302s" podCreationTimestamp="2025-01-30 14:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:44.17779616 +0000 UTC m=+7.367354593" watchObservedRunningTime="2025-01-30 14:01:44.17943302 +0000 UTC m=+7.368991465" Jan 30 14:01:49.173384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount279712630.mount: 
Deactivated successfully. Jan 30 14:01:51.632594 containerd[2015]: time="2025-01-30T14:01:51.632508245Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:51.634473 containerd[2015]: time="2025-01-30T14:01:51.634407617Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 14:01:51.637263 containerd[2015]: time="2025-01-30T14:01:51.637173461Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:51.642869 containerd[2015]: time="2025-01-30T14:01:51.642798425Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.877603211s" Jan 30 14:01:51.643218 containerd[2015]: time="2025-01-30T14:01:51.643068113Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 14:01:51.646420 containerd[2015]: time="2025-01-30T14:01:51.646365401Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 14:01:51.648334 containerd[2015]: time="2025-01-30T14:01:51.647853317Z" level=info msg="CreateContainer within sandbox 
\"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:01:51.685513 containerd[2015]: time="2025-01-30T14:01:51.685429854Z" level=info msg="CreateContainer within sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\"" Jan 30 14:01:51.686384 containerd[2015]: time="2025-01-30T14:01:51.686333946Z" level=info msg="StartContainer for \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\"" Jan 30 14:01:51.745350 systemd[1]: Started cri-containerd-13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a.scope - libcontainer container 13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a. Jan 30 14:01:51.795207 containerd[2015]: time="2025-01-30T14:01:51.795024318Z" level=info msg="StartContainer for \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\" returns successfully" Jan 30 14:01:51.818196 systemd[1]: cri-containerd-13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a.scope: Deactivated successfully. Jan 30 14:01:52.670286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a-rootfs.mount: Deactivated successfully. 
Jan 30 14:01:53.169976 containerd[2015]: time="2025-01-30T14:01:53.169464041Z" level=info msg="shim disconnected" id=13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a namespace=k8s.io Jan 30 14:01:53.169976 containerd[2015]: time="2025-01-30T14:01:53.169912073Z" level=warning msg="cleaning up after shim disconnected" id=13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a namespace=k8s.io Jan 30 14:01:53.170665 containerd[2015]: time="2025-01-30T14:01:53.170033609Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:53.195518 containerd[2015]: time="2025-01-30T14:01:53.195411677Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:01:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:01:53.216077 containerd[2015]: time="2025-01-30T14:01:53.214082117Z" level=info msg="CreateContainer within sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:01:53.245014 containerd[2015]: time="2025-01-30T14:01:53.244269377Z" level=info msg="CreateContainer within sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\"" Jan 30 14:01:53.245885 containerd[2015]: time="2025-01-30T14:01:53.245586641Z" level=info msg="StartContainer for \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\"" Jan 30 14:01:53.302323 systemd[1]: Started cri-containerd-d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a.scope - libcontainer container d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a. 
Jan 30 14:01:53.365493 containerd[2015]: time="2025-01-30T14:01:53.365408250Z" level=info msg="StartContainer for \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\" returns successfully" Jan 30 14:01:53.391063 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:01:53.391633 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:01:53.391752 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:01:53.400777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:01:53.401236 systemd[1]: cri-containerd-d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a.scope: Deactivated successfully. Jan 30 14:01:53.451601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:01:53.455004 containerd[2015]: time="2025-01-30T14:01:53.454909998Z" level=info msg="shim disconnected" id=d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a namespace=k8s.io Jan 30 14:01:53.455552 containerd[2015]: time="2025-01-30T14:01:53.455070942Z" level=warning msg="cleaning up after shim disconnected" id=d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a namespace=k8s.io Jan 30 14:01:53.455552 containerd[2015]: time="2025-01-30T14:01:53.455093922Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:53.670536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a-rootfs.mount: Deactivated successfully. 
Jan 30 14:01:54.229738 containerd[2015]: time="2025-01-30T14:01:54.229659582Z" level=info msg="CreateContainer within sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:01:54.280835 containerd[2015]: time="2025-01-30T14:01:54.280773727Z" level=info msg="CreateContainer within sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\"" Jan 30 14:01:54.283090 containerd[2015]: time="2025-01-30T14:01:54.282400171Z" level=info msg="StartContainer for \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\"" Jan 30 14:01:54.371505 systemd[1]: Started cri-containerd-fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867.scope - libcontainer container fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867. Jan 30 14:01:54.451879 containerd[2015]: time="2025-01-30T14:01:54.450056299Z" level=info msg="StartContainer for \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\" returns successfully" Jan 30 14:01:54.462469 systemd[1]: cri-containerd-fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867.scope: Deactivated successfully. 
Jan 30 14:01:54.613036 containerd[2015]: time="2025-01-30T14:01:54.612806384Z" level=info msg="shim disconnected" id=fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867 namespace=k8s.io Jan 30 14:01:54.613036 containerd[2015]: time="2025-01-30T14:01:54.612906764Z" level=warning msg="cleaning up after shim disconnected" id=fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867 namespace=k8s.io Jan 30 14:01:54.613036 containerd[2015]: time="2025-01-30T14:01:54.612929504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:54.674600 systemd[1]: run-containerd-runc-k8s.io-fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867-runc.MPUhna.mount: Deactivated successfully. Jan 30 14:01:54.675166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867-rootfs.mount: Deactivated successfully. Jan 30 14:01:54.728497 containerd[2015]: time="2025-01-30T14:01:54.727674093Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:54.730558 containerd[2015]: time="2025-01-30T14:01:54.730445685Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 14:01:54.732338 containerd[2015]: time="2025-01-30T14:01:54.732279057Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:01:54.737729 containerd[2015]: time="2025-01-30T14:01:54.737670693Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.090958624s" Jan 30 14:01:54.737976 containerd[2015]: time="2025-01-30T14:01:54.737929593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 14:01:54.742979 containerd[2015]: time="2025-01-30T14:01:54.742925661Z" level=info msg="CreateContainer within sandbox \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 14:01:54.768918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3540464423.mount: Deactivated successfully. Jan 30 14:01:54.776788 containerd[2015]: time="2025-01-30T14:01:54.776716425Z" level=info msg="CreateContainer within sandbox \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\"" Jan 30 14:01:54.779077 containerd[2015]: time="2025-01-30T14:01:54.777906393Z" level=info msg="StartContainer for \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\"" Jan 30 14:01:54.837371 systemd[1]: Started cri-containerd-c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99.scope - libcontainer container c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99. 
Jan 30 14:01:54.886234 containerd[2015]: time="2025-01-30T14:01:54.884237074Z" level=info msg="StartContainer for \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\" returns successfully" Jan 30 14:01:55.236447 containerd[2015]: time="2025-01-30T14:01:55.236361163Z" level=info msg="CreateContainer within sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 14:01:55.283816 containerd[2015]: time="2025-01-30T14:01:55.283364720Z" level=info msg="CreateContainer within sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\"" Jan 30 14:01:55.286393 containerd[2015]: time="2025-01-30T14:01:55.286347128Z" level=info msg="StartContainer for \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\"" Jan 30 14:01:55.295377 kubelet[3409]: I0130 14:01:55.294870 3409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4pll2" podStartSLOduration=1.451829986 podStartE2EDuration="12.294811208s" podCreationTimestamp="2025-01-30 14:01:43 +0000 UTC" firstStartedPulling="2025-01-30 14:01:43.896135471 +0000 UTC m=+7.085693856" lastFinishedPulling="2025-01-30 14:01:54.739116681 +0000 UTC m=+17.928675078" observedRunningTime="2025-01-30 14:01:55.290172764 +0000 UTC m=+18.479731173" watchObservedRunningTime="2025-01-30 14:01:55.294811208 +0000 UTC m=+18.484369893" Jan 30 14:01:55.350321 systemd[1]: Started cri-containerd-c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa.scope - libcontainer container c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa. Jan 30 14:01:55.430737 systemd[1]: cri-containerd-c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa.scope: Deactivated successfully. 
Jan 30 14:01:55.441636 containerd[2015]: time="2025-01-30T14:01:55.441522764Z" level=info msg="StartContainer for \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\" returns successfully" Jan 30 14:01:55.520440 containerd[2015]: time="2025-01-30T14:01:55.519880821Z" level=info msg="shim disconnected" id=c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa namespace=k8s.io Jan 30 14:01:55.520973 containerd[2015]: time="2025-01-30T14:01:55.520708581Z" level=warning msg="cleaning up after shim disconnected" id=c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa namespace=k8s.io Jan 30 14:01:55.520973 containerd[2015]: time="2025-01-30T14:01:55.520740045Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:56.247917 containerd[2015]: time="2025-01-30T14:01:56.247684064Z" level=info msg="CreateContainer within sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 14:01:56.292065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640090372.mount: Deactivated successfully. Jan 30 14:01:56.298527 containerd[2015]: time="2025-01-30T14:01:56.293706741Z" level=info msg="CreateContainer within sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\"" Jan 30 14:01:56.304556 containerd[2015]: time="2025-01-30T14:01:56.301963833Z" level=info msg="StartContainer for \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\"" Jan 30 14:01:56.401307 systemd[1]: Started cri-containerd-8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05.scope - libcontainer container 8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05. 
Jan 30 14:01:56.477844 containerd[2015]: time="2025-01-30T14:01:56.477773361Z" level=info msg="StartContainer for \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\" returns successfully" Jan 30 14:01:56.741089 kubelet[3409]: I0130 14:01:56.739770 3409 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 14:01:56.814595 systemd[1]: Created slice kubepods-burstable-pod77d914ce_2439_4f32_9c44_149794f78647.slice - libcontainer container kubepods-burstable-pod77d914ce_2439_4f32_9c44_149794f78647.slice. Jan 30 14:01:56.828557 systemd[1]: Created slice kubepods-burstable-pod52b10fbf_e5e9_43af_bbe6_1d2c428546f0.slice - libcontainer container kubepods-burstable-pod52b10fbf_e5e9_43af_bbe6_1d2c428546f0.slice. Jan 30 14:01:56.857465 kubelet[3409]: I0130 14:01:56.857412 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77d914ce-2439-4f32-9c44-149794f78647-config-volume\") pod \"coredns-6f6b679f8f-lv925\" (UID: \"77d914ce-2439-4f32-9c44-149794f78647\") " pod="kube-system/coredns-6f6b679f8f-lv925" Jan 30 14:01:56.857713 kubelet[3409]: I0130 14:01:56.857685 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9ph\" (UniqueName: \"kubernetes.io/projected/77d914ce-2439-4f32-9c44-149794f78647-kube-api-access-gh9ph\") pod \"coredns-6f6b679f8f-lv925\" (UID: \"77d914ce-2439-4f32-9c44-149794f78647\") " pod="kube-system/coredns-6f6b679f8f-lv925" Jan 30 14:01:56.857873 kubelet[3409]: I0130 14:01:56.857848 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kxmh\" (UniqueName: \"kubernetes.io/projected/52b10fbf-e5e9-43af-bbe6-1d2c428546f0-kube-api-access-4kxmh\") pod \"coredns-6f6b679f8f-k8dmt\" (UID: \"52b10fbf-e5e9-43af-bbe6-1d2c428546f0\") " pod="kube-system/coredns-6f6b679f8f-k8dmt" Jan 30 
14:01:56.858231 kubelet[3409]: I0130 14:01:56.858111 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52b10fbf-e5e9-43af-bbe6-1d2c428546f0-config-volume\") pod \"coredns-6f6b679f8f-k8dmt\" (UID: \"52b10fbf-e5e9-43af-bbe6-1d2c428546f0\") " pod="kube-system/coredns-6f6b679f8f-k8dmt" Jan 30 14:01:57.127975 containerd[2015]: time="2025-01-30T14:01:57.127339917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lv925,Uid:77d914ce-2439-4f32-9c44-149794f78647,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:57.134893 containerd[2015]: time="2025-01-30T14:01:57.134833713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k8dmt,Uid:52b10fbf-e5e9-43af-bbe6-1d2c428546f0,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:59.648354 systemd-networkd[1935]: cilium_host: Link UP Jan 30 14:01:59.649020 systemd-networkd[1935]: cilium_net: Link UP Jan 30 14:01:59.649792 systemd-networkd[1935]: cilium_net: Gained carrier Jan 30 14:01:59.650260 systemd-networkd[1935]: cilium_host: Gained carrier Jan 30 14:01:59.654726 (udev-worker)[4246]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:01:59.655896 (udev-worker)[4206]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:01:59.832628 (udev-worker)[4258]: Network interface NamePolicy= disabled on kernel command line. 
Jan 30 14:01:59.844719 systemd-networkd[1935]: cilium_vxlan: Link UP Jan 30 14:01:59.844744 systemd-networkd[1935]: cilium_vxlan: Gained carrier Jan 30 14:02:00.280436 systemd-networkd[1935]: cilium_host: Gained IPv6LL Jan 30 14:02:00.344894 systemd-networkd[1935]: cilium_net: Gained IPv6LL Jan 30 14:02:00.348335 kernel: NET: Registered PF_ALG protocol family Jan 30 14:02:00.921344 systemd-networkd[1935]: cilium_vxlan: Gained IPv6LL Jan 30 14:02:01.760943 systemd-networkd[1935]: lxc_health: Link UP Jan 30 14:02:01.768572 systemd-networkd[1935]: lxc_health: Gained carrier Jan 30 14:02:02.236665 (udev-worker)[4259]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:02:02.254037 kernel: eth0: renamed from tmpf3ca1 Jan 30 14:02:02.259239 systemd-networkd[1935]: lxccc4ce737678a: Link UP Jan 30 14:02:02.271173 systemd-networkd[1935]: lxccc4ce737678a: Gained carrier Jan 30 14:02:02.296592 systemd-networkd[1935]: lxc00c7b5b5e73b: Link UP Jan 30 14:02:02.318964 kernel: eth0: renamed from tmpfe74f Jan 30 14:02:02.324642 (udev-worker)[4592]: Network interface NamePolicy= disabled on kernel command line. 
Jan 30 14:02:02.329501 systemd-networkd[1935]: lxc00c7b5b5e73b: Gained carrier Jan 30 14:02:03.224342 systemd-networkd[1935]: lxc_health: Gained IPv6LL Jan 30 14:02:03.532543 kubelet[3409]: I0130 14:02:03.532339 3409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m5cvm" podStartSLOduration=12.648573501 podStartE2EDuration="20.532316056s" podCreationTimestamp="2025-01-30 14:01:43 +0000 UTC" firstStartedPulling="2025-01-30 14:01:43.76069123 +0000 UTC m=+6.950249615" lastFinishedPulling="2025-01-30 14:01:51.644433773 +0000 UTC m=+14.833992170" observedRunningTime="2025-01-30 14:01:57.301955554 +0000 UTC m=+20.491513963" watchObservedRunningTime="2025-01-30 14:02:03.532316056 +0000 UTC m=+26.721874477" Jan 30 14:02:03.546171 systemd-networkd[1935]: lxc00c7b5b5e73b: Gained IPv6LL Jan 30 14:02:03.864268 systemd-networkd[1935]: lxccc4ce737678a: Gained IPv6LL Jan 30 14:02:06.048966 ntpd[1990]: Listen normally on 8 cilium_host 192.168.0.6:123 Jan 30 14:02:06.049141 ntpd[1990]: Listen normally on 9 cilium_net [fe80::c11:42ff:fe72:fd4e%4]:123 Jan 30 14:02:06.049553 ntpd[1990]: 30 Jan 14:02:06 ntpd[1990]: Listen normally on 8 cilium_host 192.168.0.6:123 Jan 30 14:02:06.049553 ntpd[1990]: 30 Jan 14:02:06 ntpd[1990]: Listen normally on 9 cilium_net [fe80::c11:42ff:fe72:fd4e%4]:123 Jan 30 14:02:06.049553 ntpd[1990]: 30 Jan 14:02:06 ntpd[1990]: Listen normally on 10 cilium_host [fe80::bc7b:98ff:fefb:9417%5]:123 Jan 30 14:02:06.049553 ntpd[1990]: 30 Jan 14:02:06 ntpd[1990]: Listen normally on 11 cilium_vxlan [fe80::c889:5aff:fefc:5723%6]:123 Jan 30 14:02:06.049553 ntpd[1990]: 30 Jan 14:02:06 ntpd[1990]: Listen normally on 12 lxc_health [fe80::accd:30ff:feb1:747b%8]:123 Jan 30 14:02:06.049553 ntpd[1990]: 30 Jan 14:02:06 ntpd[1990]: Listen normally on 13 lxccc4ce737678a [fe80::487e:c1ff:fe7b:d572%10]:123 Jan 30 14:02:06.049553 ntpd[1990]: 30 Jan 14:02:06 ntpd[1990]: Listen normally on 14 lxc00c7b5b5e73b [fe80::48e2:3bff:feba:654c%12]:123 
Jan 30 14:02:06.049225 ntpd[1990]: Listen normally on 10 cilium_host [fe80::bc7b:98ff:fefb:9417%5]:123 Jan 30 14:02:06.049294 ntpd[1990]: Listen normally on 11 cilium_vxlan [fe80::c889:5aff:fefc:5723%6]:123 Jan 30 14:02:06.049363 ntpd[1990]: Listen normally on 12 lxc_health [fe80::accd:30ff:feb1:747b%8]:123 Jan 30 14:02:06.049434 ntpd[1990]: Listen normally on 13 lxccc4ce737678a [fe80::487e:c1ff:fe7b:d572%10]:123 Jan 30 14:02:06.049504 ntpd[1990]: Listen normally on 14 lxc00c7b5b5e73b [fe80::48e2:3bff:feba:654c%12]:123 Jan 30 14:02:10.749536 containerd[2015]: time="2025-01-30T14:02:10.748594716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:10.749536 containerd[2015]: time="2025-01-30T14:02:10.748695360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:10.749536 containerd[2015]: time="2025-01-30T14:02:10.748755000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:10.749536 containerd[2015]: time="2025-01-30T14:02:10.748928736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:10.836446 systemd[1]: Started cri-containerd-f3ca1faa940c779573ee4212d5c2e6a087ad22f71eaebd6894f278784bd8970a.scope - libcontainer container f3ca1faa940c779573ee4212d5c2e6a087ad22f71eaebd6894f278784bd8970a. Jan 30 14:02:10.882094 containerd[2015]: time="2025-01-30T14:02:10.881720617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:02:10.882094 containerd[2015]: time="2025-01-30T14:02:10.881932069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:02:10.882350 containerd[2015]: time="2025-01-30T14:02:10.881962429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:10.883275 containerd[2015]: time="2025-01-30T14:02:10.883151737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:02:10.938263 systemd[1]: Started cri-containerd-fe74ffe6a702395da2ccfa5540fc9edc975039ea602399bcb8529b483e926387.scope - libcontainer container fe74ffe6a702395da2ccfa5540fc9edc975039ea602399bcb8529b483e926387. Jan 30 14:02:10.985223 containerd[2015]: time="2025-01-30T14:02:10.985118738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lv925,Uid:77d914ce-2439-4f32-9c44-149794f78647,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3ca1faa940c779573ee4212d5c2e6a087ad22f71eaebd6894f278784bd8970a\"" Jan 30 14:02:10.997446 containerd[2015]: time="2025-01-30T14:02:10.997335242Z" level=info msg="CreateContainer within sandbox \"f3ca1faa940c779573ee4212d5c2e6a087ad22f71eaebd6894f278784bd8970a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:02:11.059367 containerd[2015]: time="2025-01-30T14:02:11.058160410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k8dmt,Uid:52b10fbf-e5e9-43af-bbe6-1d2c428546f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe74ffe6a702395da2ccfa5540fc9edc975039ea602399bcb8529b483e926387\"" Jan 30 14:02:11.062803 containerd[2015]: time="2025-01-30T14:02:11.062698834Z" level=info msg="CreateContainer within sandbox \"f3ca1faa940c779573ee4212d5c2e6a087ad22f71eaebd6894f278784bd8970a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4388f396bbc74674174b89a102c921d1a3ec0eeaf73836d3d3f8b22bab996174\"" Jan 30 14:02:11.067380 containerd[2015]: 
time="2025-01-30T14:02:11.067142602Z" level=info msg="StartContainer for \"4388f396bbc74674174b89a102c921d1a3ec0eeaf73836d3d3f8b22bab996174\"" Jan 30 14:02:11.069917 containerd[2015]: time="2025-01-30T14:02:11.069831826Z" level=info msg="CreateContainer within sandbox \"fe74ffe6a702395da2ccfa5540fc9edc975039ea602399bcb8529b483e926387\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:02:11.121024 containerd[2015]: time="2025-01-30T14:02:11.120816010Z" level=info msg="CreateContainer within sandbox \"fe74ffe6a702395da2ccfa5540fc9edc975039ea602399bcb8529b483e926387\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e2fdcf65f9b0a3dba0315aa400c0fcc9ade9a7f1f4eb97b19e13d9a9bfe93a1\"" Jan 30 14:02:11.122267 containerd[2015]: time="2025-01-30T14:02:11.122164690Z" level=info msg="StartContainer for \"1e2fdcf65f9b0a3dba0315aa400c0fcc9ade9a7f1f4eb97b19e13d9a9bfe93a1\"" Jan 30 14:02:11.182558 systemd[1]: Started cri-containerd-4388f396bbc74674174b89a102c921d1a3ec0eeaf73836d3d3f8b22bab996174.scope - libcontainer container 4388f396bbc74674174b89a102c921d1a3ec0eeaf73836d3d3f8b22bab996174. Jan 30 14:02:11.199768 systemd[1]: Started cri-containerd-1e2fdcf65f9b0a3dba0315aa400c0fcc9ade9a7f1f4eb97b19e13d9a9bfe93a1.scope - libcontainer container 1e2fdcf65f9b0a3dba0315aa400c0fcc9ade9a7f1f4eb97b19e13d9a9bfe93a1. 
Jan 30 14:02:11.297782 containerd[2015]: time="2025-01-30T14:02:11.297706199Z" level=info msg="StartContainer for \"1e2fdcf65f9b0a3dba0315aa400c0fcc9ade9a7f1f4eb97b19e13d9a9bfe93a1\" returns successfully" Jan 30 14:02:11.313135 containerd[2015]: time="2025-01-30T14:02:11.312236807Z" level=info msg="StartContainer for \"4388f396bbc74674174b89a102c921d1a3ec0eeaf73836d3d3f8b22bab996174\" returns successfully" Jan 30 14:02:11.360209 kubelet[3409]: I0130 14:02:11.360106 3409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-k8dmt" podStartSLOduration=28.360083231 podStartE2EDuration="28.360083231s" podCreationTimestamp="2025-01-30 14:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:11.357382031 +0000 UTC m=+34.546940452" watchObservedRunningTime="2025-01-30 14:02:11.360083231 +0000 UTC m=+34.549641640" Jan 30 14:02:11.396382 kubelet[3409]: I0130 14:02:11.396279 3409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lv925" podStartSLOduration=28.396242052 podStartE2EDuration="28.396242052s" podCreationTimestamp="2025-01-30 14:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:02:11.392727888 +0000 UTC m=+34.582286333" watchObservedRunningTime="2025-01-30 14:02:11.396242052 +0000 UTC m=+34.585800449" Jan 30 14:02:24.577503 systemd[1]: Started sshd@9-172.31.25.125:22-139.178.89.65:53572.service - OpenSSH per-connection server daemon (139.178.89.65:53572). 
Jan 30 14:02:24.751772 sshd[4789]: Accepted publickey for core from 139.178.89.65 port 53572 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:24.754639 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:24.764539 systemd-logind[1996]: New session 10 of user core. Jan 30 14:02:24.772275 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:02:25.039161 sshd[4789]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:25.044598 systemd-logind[1996]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:02:25.045167 systemd[1]: sshd@9-172.31.25.125:22-139.178.89.65:53572.service: Deactivated successfully. Jan 30 14:02:25.049924 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:02:25.056949 systemd-logind[1996]: Removed session 10. Jan 30 14:02:30.085521 systemd[1]: Started sshd@10-172.31.25.125:22-139.178.89.65:53576.service - OpenSSH per-connection server daemon (139.178.89.65:53576). Jan 30 14:02:30.256183 sshd[4803]: Accepted publickey for core from 139.178.89.65 port 53576 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:30.258865 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:30.267536 systemd-logind[1996]: New session 11 of user core. Jan 30 14:02:30.275272 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:02:30.520348 sshd[4803]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:30.525958 systemd-logind[1996]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:02:30.526416 systemd[1]: sshd@10-172.31.25.125:22-139.178.89.65:53576.service: Deactivated successfully. Jan 30 14:02:30.530347 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:02:30.534693 systemd-logind[1996]: Removed session 11. 
Jan 30 14:02:35.564082 systemd[1]: Started sshd@11-172.31.25.125:22-139.178.89.65:59786.service - OpenSSH per-connection server daemon (139.178.89.65:59786). Jan 30 14:02:35.745552 sshd[4817]: Accepted publickey for core from 139.178.89.65 port 59786 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:35.748388 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:35.756607 systemd-logind[1996]: New session 12 of user core. Jan 30 14:02:35.765337 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:02:35.997346 sshd[4817]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:36.002850 systemd[1]: sshd@11-172.31.25.125:22-139.178.89.65:59786.service: Deactivated successfully. Jan 30 14:02:36.006598 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:02:36.010472 systemd-logind[1996]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:02:36.012817 systemd-logind[1996]: Removed session 12. Jan 30 14:02:41.039529 systemd[1]: Started sshd@12-172.31.25.125:22-139.178.89.65:42370.service - OpenSSH per-connection server daemon (139.178.89.65:42370). Jan 30 14:02:41.215917 sshd[4833]: Accepted publickey for core from 139.178.89.65 port 42370 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:41.218641 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:41.231370 systemd-logind[1996]: New session 13 of user core. Jan 30 14:02:41.241711 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:02:41.488413 sshd[4833]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:41.495376 systemd[1]: sshd@12-172.31.25.125:22-139.178.89.65:42370.service: Deactivated successfully. Jan 30 14:02:41.498732 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:02:41.500312 systemd-logind[1996]: Session 13 logged out. Waiting for processes to exit. 
Jan 30 14:02:41.502113 systemd-logind[1996]: Removed session 13. Jan 30 14:02:41.527516 systemd[1]: Started sshd@13-172.31.25.125:22-139.178.89.65:42382.service - OpenSSH per-connection server daemon (139.178.89.65:42382). Jan 30 14:02:41.701606 sshd[4846]: Accepted publickey for core from 139.178.89.65 port 42382 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:41.704738 sshd[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:41.713325 systemd-logind[1996]: New session 14 of user core. Jan 30 14:02:41.723287 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:02:42.042328 sshd[4846]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:42.056021 systemd[1]: sshd@13-172.31.25.125:22-139.178.89.65:42382.service: Deactivated successfully. Jan 30 14:02:42.065945 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:02:42.072147 systemd-logind[1996]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:02:42.096184 systemd[1]: Started sshd@14-172.31.25.125:22-139.178.89.65:42398.service - OpenSSH per-connection server daemon (139.178.89.65:42398). Jan 30 14:02:42.099179 systemd-logind[1996]: Removed session 14. Jan 30 14:02:42.275642 sshd[4857]: Accepted publickey for core from 139.178.89.65 port 42398 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:42.277219 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:42.287229 systemd-logind[1996]: New session 15 of user core. Jan 30 14:02:42.292302 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:02:42.539125 sshd[4857]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:42.545629 systemd[1]: sshd@14-172.31.25.125:22-139.178.89.65:42398.service: Deactivated successfully. Jan 30 14:02:42.550098 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 30 14:02:42.553101 systemd-logind[1996]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:02:42.555923 systemd-logind[1996]: Removed session 15. Jan 30 14:02:47.585376 systemd[1]: Started sshd@15-172.31.25.125:22-139.178.89.65:42400.service - OpenSSH per-connection server daemon (139.178.89.65:42400). Jan 30 14:02:47.761079 sshd[4873]: Accepted publickey for core from 139.178.89.65 port 42400 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:47.763882 sshd[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:47.773255 systemd-logind[1996]: New session 16 of user core. Jan 30 14:02:47.784304 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:02:48.023310 sshd[4873]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:48.029814 systemd[1]: sshd@15-172.31.25.125:22-139.178.89.65:42400.service: Deactivated successfully. Jan 30 14:02:48.033499 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:02:48.034935 systemd-logind[1996]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:02:48.037611 systemd-logind[1996]: Removed session 16. Jan 30 14:02:53.068516 systemd[1]: Started sshd@16-172.31.25.125:22-139.178.89.65:37132.service - OpenSSH per-connection server daemon (139.178.89.65:37132). Jan 30 14:02:53.253495 sshd[4888]: Accepted publickey for core from 139.178.89.65 port 37132 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:53.256370 sshd[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:53.264252 systemd-logind[1996]: New session 17 of user core. Jan 30 14:02:53.272251 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:02:53.509353 sshd[4888]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:53.516228 systemd[1]: sshd@16-172.31.25.125:22-139.178.89.65:37132.service: Deactivated successfully. 
Jan 30 14:02:53.519751 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:02:53.521885 systemd-logind[1996]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:02:53.523907 systemd-logind[1996]: Removed session 17. Jan 30 14:02:58.555790 systemd[1]: Started sshd@17-172.31.25.125:22-139.178.89.65:37136.service - OpenSSH per-connection server daemon (139.178.89.65:37136). Jan 30 14:02:58.729798 sshd[4900]: Accepted publickey for core from 139.178.89.65 port 37136 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:58.732478 sshd[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:58.740064 systemd-logind[1996]: New session 18 of user core. Jan 30 14:02:58.748268 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:02:58.989118 sshd[4900]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:58.995771 systemd[1]: sshd@17-172.31.25.125:22-139.178.89.65:37136.service: Deactivated successfully. Jan 30 14:02:59.000654 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:02:59.004476 systemd-logind[1996]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:02:59.007511 systemd-logind[1996]: Removed session 18. Jan 30 14:03:04.032597 systemd[1]: Started sshd@18-172.31.25.125:22-139.178.89.65:60576.service - OpenSSH per-connection server daemon (139.178.89.65:60576). Jan 30 14:03:04.206872 sshd[4912]: Accepted publickey for core from 139.178.89.65 port 60576 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:04.210817 sshd[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:04.222258 systemd-logind[1996]: New session 19 of user core. Jan 30 14:03:04.228309 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 30 14:03:04.466363 sshd[4912]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:04.472877 systemd[1]: sshd@18-172.31.25.125:22-139.178.89.65:60576.service: Deactivated successfully. Jan 30 14:03:04.473317 systemd-logind[1996]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:03:04.478322 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:03:04.483039 systemd-logind[1996]: Removed session 19. Jan 30 14:03:04.507543 systemd[1]: Started sshd@19-172.31.25.125:22-139.178.89.65:60592.service - OpenSSH per-connection server daemon (139.178.89.65:60592). Jan 30 14:03:04.675936 sshd[4925]: Accepted publickey for core from 139.178.89.65 port 60592 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:04.678570 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:04.687170 systemd-logind[1996]: New session 20 of user core. Jan 30 14:03:04.694268 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:03:04.987310 sshd[4925]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:04.992566 systemd[1]: sshd@19-172.31.25.125:22-139.178.89.65:60592.service: Deactivated successfully. Jan 30 14:03:04.997178 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:03:05.001650 systemd-logind[1996]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:03:05.003672 systemd-logind[1996]: Removed session 20. Jan 30 14:03:05.024513 systemd[1]: Started sshd@20-172.31.25.125:22-139.178.89.65:60594.service - OpenSSH per-connection server daemon (139.178.89.65:60594). Jan 30 14:03:05.202449 sshd[4936]: Accepted publickey for core from 139.178.89.65 port 60594 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:05.205274 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:05.213221 systemd-logind[1996]: New session 21 of user core. 
Jan 30 14:03:05.225258 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 14:03:07.951369 sshd[4936]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:07.961803 systemd[1]: sshd@20-172.31.25.125:22-139.178.89.65:60594.service: Deactivated successfully. Jan 30 14:03:07.969093 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:03:07.976729 systemd-logind[1996]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:03:07.997868 systemd[1]: Started sshd@21-172.31.25.125:22-139.178.89.65:60596.service - OpenSSH per-connection server daemon (139.178.89.65:60596). Jan 30 14:03:08.004144 systemd-logind[1996]: Removed session 21. Jan 30 14:03:08.184009 sshd[4955]: Accepted publickey for core from 139.178.89.65 port 60596 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:08.186878 sshd[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:08.195505 systemd-logind[1996]: New session 22 of user core. Jan 30 14:03:08.201256 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 14:03:08.683750 sshd[4955]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:08.690676 systemd[1]: sshd@21-172.31.25.125:22-139.178.89.65:60596.service: Deactivated successfully. Jan 30 14:03:08.695297 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 14:03:08.697669 systemd-logind[1996]: Session 22 logged out. Waiting for processes to exit. Jan 30 14:03:08.700166 systemd-logind[1996]: Removed session 22. Jan 30 14:03:08.722645 systemd[1]: Started sshd@22-172.31.25.125:22-139.178.89.65:60612.service - OpenSSH per-connection server daemon (139.178.89.65:60612). 
Jan 30 14:03:08.901123 sshd[4966]: Accepted publickey for core from 139.178.89.65 port 60612 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:08.903733 sshd[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:08.912329 systemd-logind[1996]: New session 23 of user core. Jan 30 14:03:08.919284 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 14:03:09.161079 sshd[4966]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:09.166885 systemd[1]: sshd@22-172.31.25.125:22-139.178.89.65:60612.service: Deactivated successfully. Jan 30 14:03:09.170801 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 14:03:09.175341 systemd-logind[1996]: Session 23 logged out. Waiting for processes to exit. Jan 30 14:03:09.177722 systemd-logind[1996]: Removed session 23. Jan 30 14:03:14.200558 systemd[1]: Started sshd@23-172.31.25.125:22-139.178.89.65:35978.service - OpenSSH per-connection server daemon (139.178.89.65:35978). Jan 30 14:03:14.383674 sshd[4981]: Accepted publickey for core from 139.178.89.65 port 35978 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:14.386427 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:14.395039 systemd-logind[1996]: New session 24 of user core. Jan 30 14:03:14.405280 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 14:03:14.647741 sshd[4981]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:14.654181 systemd[1]: sshd@23-172.31.25.125:22-139.178.89.65:35978.service: Deactivated successfully. Jan 30 14:03:14.658151 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 14:03:14.659789 systemd-logind[1996]: Session 24 logged out. Waiting for processes to exit. Jan 30 14:03:14.662484 systemd-logind[1996]: Removed session 24. 
Jan 30 14:03:19.687522 systemd[1]: Started sshd@24-172.31.25.125:22-139.178.89.65:35982.service - OpenSSH per-connection server daemon (139.178.89.65:35982). Jan 30 14:03:19.862329 sshd[4997]: Accepted publickey for core from 139.178.89.65 port 35982 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:19.866246 sshd[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:19.882915 systemd-logind[1996]: New session 25 of user core. Jan 30 14:03:19.889336 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 14:03:20.122405 sshd[4997]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:20.130100 systemd[1]: sshd@24-172.31.25.125:22-139.178.89.65:35982.service: Deactivated successfully. Jan 30 14:03:20.133751 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 14:03:20.136175 systemd-logind[1996]: Session 25 logged out. Waiting for processes to exit. Jan 30 14:03:20.138575 systemd-logind[1996]: Removed session 25. Jan 30 14:03:25.161534 systemd[1]: Started sshd@25-172.31.25.125:22-139.178.89.65:59216.service - OpenSSH per-connection server daemon (139.178.89.65:59216). Jan 30 14:03:25.340413 sshd[5010]: Accepted publickey for core from 139.178.89.65 port 59216 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:25.343337 sshd[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:25.352420 systemd-logind[1996]: New session 26 of user core. Jan 30 14:03:25.359275 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 14:03:25.600358 sshd[5010]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:25.606355 systemd[1]: sshd@25-172.31.25.125:22-139.178.89.65:59216.service: Deactivated successfully. Jan 30 14:03:25.610648 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 14:03:25.612138 systemd-logind[1996]: Session 26 logged out. Waiting for processes to exit. 
Jan 30 14:03:25.615254 systemd-logind[1996]: Removed session 26. Jan 30 14:03:30.640542 systemd[1]: Started sshd@26-172.31.25.125:22-139.178.89.65:59228.service - OpenSSH per-connection server daemon (139.178.89.65:59228). Jan 30 14:03:30.822952 sshd[5023]: Accepted publickey for core from 139.178.89.65 port 59228 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:30.825628 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:30.834328 systemd-logind[1996]: New session 27 of user core. Jan 30 14:03:30.840289 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 14:03:31.078199 sshd[5023]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:31.084473 systemd[1]: sshd@26-172.31.25.125:22-139.178.89.65:59228.service: Deactivated successfully. Jan 30 14:03:31.088337 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 14:03:31.090377 systemd-logind[1996]: Session 27 logged out. Waiting for processes to exit. Jan 30 14:03:31.092846 systemd-logind[1996]: Removed session 27. Jan 30 14:03:31.118525 systemd[1]: Started sshd@27-172.31.25.125:22-139.178.89.65:36030.service - OpenSSH per-connection server daemon (139.178.89.65:36030). Jan 30 14:03:31.294737 sshd[5035]: Accepted publickey for core from 139.178.89.65 port 36030 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:31.297380 sshd[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:31.306767 systemd-logind[1996]: New session 28 of user core. Jan 30 14:03:31.322271 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 30 14:03:34.028692 containerd[2015]: time="2025-01-30T14:03:34.028622718Z" level=info msg="StopContainer for \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\" with timeout 30 (s)" Jan 30 14:03:34.030595 containerd[2015]: time="2025-01-30T14:03:34.030154878Z" level=info msg="Stop container \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\" with signal terminated" Jan 30 14:03:34.044723 containerd[2015]: time="2025-01-30T14:03:34.044642082Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:03:34.058878 containerd[2015]: time="2025-01-30T14:03:34.056949450Z" level=info msg="StopContainer for \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\" with timeout 2 (s)" Jan 30 14:03:34.058878 containerd[2015]: time="2025-01-30T14:03:34.057643158Z" level=info msg="Stop container \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\" with signal terminated" Jan 30 14:03:34.068402 systemd[1]: cri-containerd-c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99.scope: Deactivated successfully. Jan 30 14:03:34.083729 systemd-networkd[1935]: lxc_health: Link DOWN Jan 30 14:03:34.083748 systemd-networkd[1935]: lxc_health: Lost carrier Jan 30 14:03:34.122255 systemd[1]: cri-containerd-8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05.scope: Deactivated successfully. Jan 30 14:03:34.122756 systemd[1]: cri-containerd-8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05.scope: Consumed 14.490s CPU time. Jan 30 14:03:34.147127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99-rootfs.mount: Deactivated successfully. 
Jan 30 14:03:34.168160 containerd[2015]: time="2025-01-30T14:03:34.168070855Z" level=info msg="shim disconnected" id=c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99 namespace=k8s.io Jan 30 14:03:34.168554 containerd[2015]: time="2025-01-30T14:03:34.168152059Z" level=warning msg="cleaning up after shim disconnected" id=c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99 namespace=k8s.io Jan 30 14:03:34.168554 containerd[2015]: time="2025-01-30T14:03:34.168196999Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:03:34.184804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05-rootfs.mount: Deactivated successfully. Jan 30 14:03:34.193605 containerd[2015]: time="2025-01-30T14:03:34.193494295Z" level=info msg="shim disconnected" id=8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05 namespace=k8s.io Jan 30 14:03:34.194165 containerd[2015]: time="2025-01-30T14:03:34.194034067Z" level=warning msg="cleaning up after shim disconnected" id=8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05 namespace=k8s.io Jan 30 14:03:34.194165 containerd[2015]: time="2025-01-30T14:03:34.194090827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:03:34.203531 containerd[2015]: time="2025-01-30T14:03:34.203458747Z" level=info msg="StopContainer for \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\" returns successfully" Jan 30 14:03:34.204355 containerd[2015]: time="2025-01-30T14:03:34.204310279Z" level=info msg="StopPodSandbox for \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\"" Jan 30 14:03:34.205168 containerd[2015]: time="2025-01-30T14:03:34.204961735Z" level=info msg="Container to stop \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:03:34.212468 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7-shm.mount: Deactivated successfully. Jan 30 14:03:34.222752 systemd[1]: cri-containerd-abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7.scope: Deactivated successfully. Jan 30 14:03:34.237828 containerd[2015]: time="2025-01-30T14:03:34.237735127Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:03:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:03:34.245438 containerd[2015]: time="2025-01-30T14:03:34.244873495Z" level=info msg="StopContainer for \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\" returns successfully" Jan 30 14:03:34.245606 containerd[2015]: time="2025-01-30T14:03:34.245565031Z" level=info msg="StopPodSandbox for \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\"" Jan 30 14:03:34.245667 containerd[2015]: time="2025-01-30T14:03:34.245620915Z" level=info msg="Container to stop \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:03:34.245667 containerd[2015]: time="2025-01-30T14:03:34.245650243Z" level=info msg="Container to stop \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:03:34.245793 containerd[2015]: time="2025-01-30T14:03:34.245673679Z" level=info msg="Container to stop \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:03:34.245793 containerd[2015]: time="2025-01-30T14:03:34.245697727Z" level=info msg="Container to stop \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Jan 30 14:03:34.245793 containerd[2015]: time="2025-01-30T14:03:34.245720131Z" level=info msg="Container to stop \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:03:34.253750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982-shm.mount: Deactivated successfully. Jan 30 14:03:34.270524 systemd[1]: cri-containerd-b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982.scope: Deactivated successfully. Jan 30 14:03:34.288316 containerd[2015]: time="2025-01-30T14:03:34.287851459Z" level=info msg="shim disconnected" id=abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7 namespace=k8s.io Jan 30 14:03:34.288316 containerd[2015]: time="2025-01-30T14:03:34.287925523Z" level=warning msg="cleaning up after shim disconnected" id=abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7 namespace=k8s.io Jan 30 14:03:34.288316 containerd[2015]: time="2025-01-30T14:03:34.287946211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:03:34.320430 containerd[2015]: time="2025-01-30T14:03:34.320070787Z" level=info msg="shim disconnected" id=b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982 namespace=k8s.io Jan 30 14:03:34.320430 containerd[2015]: time="2025-01-30T14:03:34.320146891Z" level=warning msg="cleaning up after shim disconnected" id=b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982 namespace=k8s.io Jan 30 14:03:34.320430 containerd[2015]: time="2025-01-30T14:03:34.320186695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:03:34.331152 containerd[2015]: time="2025-01-30T14:03:34.330515239Z" level=info msg="TearDown network for sandbox \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\" successfully" Jan 30 14:03:34.331152 containerd[2015]: 
time="2025-01-30T14:03:34.330583687Z" level=info msg="StopPodSandbox for \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\" returns successfully" Jan 30 14:03:34.356771 containerd[2015]: time="2025-01-30T14:03:34.356699936Z" level=info msg="TearDown network for sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" successfully" Jan 30 14:03:34.356771 containerd[2015]: time="2025-01-30T14:03:34.356754476Z" level=info msg="StopPodSandbox for \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" returns successfully" Jan 30 14:03:34.380048 kubelet[3409]: I0130 14:03:34.379396 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f8d738c-b401-47e2-a084-b97a72c155b3-cilium-config-path\") pod \"9f8d738c-b401-47e2-a084-b97a72c155b3\" (UID: \"9f8d738c-b401-47e2-a084-b97a72c155b3\") " Jan 30 14:03:34.380048 kubelet[3409]: I0130 14:03:34.379465 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5vsp\" (UniqueName: \"kubernetes.io/projected/9f8d738c-b401-47e2-a084-b97a72c155b3-kube-api-access-v5vsp\") pod \"9f8d738c-b401-47e2-a084-b97a72c155b3\" (UID: \"9f8d738c-b401-47e2-a084-b97a72c155b3\") " Jan 30 14:03:34.394039 kubelet[3409]: I0130 14:03:34.393922 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8d738c-b401-47e2-a084-b97a72c155b3-kube-api-access-v5vsp" (OuterVolumeSpecName: "kube-api-access-v5vsp") pod "9f8d738c-b401-47e2-a084-b97a72c155b3" (UID: "9f8d738c-b401-47e2-a084-b97a72c155b3"). InnerVolumeSpecName "kube-api-access-v5vsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:03:34.395547 kubelet[3409]: I0130 14:03:34.395326 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f8d738c-b401-47e2-a084-b97a72c155b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f8d738c-b401-47e2-a084-b97a72c155b3" (UID: "9f8d738c-b401-47e2-a084-b97a72c155b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:03:34.480391 kubelet[3409]: I0130 14:03:34.480328 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0cd4ef7-b86b-41cd-887c-cfc732c96893-clustermesh-secrets\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480554 kubelet[3409]: I0130 14:03:34.480399 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-host-proc-sys-kernel\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480554 kubelet[3409]: I0130 14:03:34.480439 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-lib-modules\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480554 kubelet[3409]: I0130 14:03:34.480482 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-config-path\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480554 kubelet[3409]: I0130 14:03:34.480514 3409 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-cgroup\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480554 kubelet[3409]: I0130 14:03:34.480546 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-xtables-lock\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480920 kubelet[3409]: I0130 14:03:34.480582 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-etc-cni-netd\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480920 kubelet[3409]: I0130 14:03:34.480613 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-hostproc\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480920 kubelet[3409]: I0130 14:03:34.480644 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-host-proc-sys-net\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480920 kubelet[3409]: I0130 14:03:34.480682 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0cd4ef7-b86b-41cd-887c-cfc732c96893-hubble-tls\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: 
\"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480920 kubelet[3409]: I0130 14:03:34.480715 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cni-path\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.480920 kubelet[3409]: I0130 14:03:34.480748 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-bpf-maps\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.481303 kubelet[3409]: I0130 14:03:34.480788 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cswd\" (UniqueName: \"kubernetes.io/projected/e0cd4ef7-b86b-41cd-887c-cfc732c96893-kube-api-access-9cswd\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.481303 kubelet[3409]: I0130 14:03:34.480823 3409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-run\") pod \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\" (UID: \"e0cd4ef7-b86b-41cd-887c-cfc732c96893\") " Jan 30 14:03:34.481303 kubelet[3409]: I0130 14:03:34.480878 3409 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f8d738c-b401-47e2-a084-b97a72c155b3-cilium-config-path\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.481303 kubelet[3409]: I0130 14:03:34.480902 3409 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-v5vsp\" (UniqueName: \"kubernetes.io/projected/9f8d738c-b401-47e2-a084-b97a72c155b3-kube-api-access-v5vsp\") on node \"ip-172-31-25-125\" 
DevicePath \"\"" Jan 30 14:03:34.481303 kubelet[3409]: I0130 14:03:34.480968 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.481598 kubelet[3409]: I0130 14:03:34.481549 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.481660 kubelet[3409]: I0130 14:03:34.481619 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.481721 kubelet[3409]: I0130 14:03:34.481660 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.484018 kubelet[3409]: I0130 14:03:34.483276 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-hostproc" (OuterVolumeSpecName: "hostproc") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.484018 kubelet[3409]: I0130 14:03:34.483383 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.484352 kubelet[3409]: I0130 14:03:34.484306 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.484470 kubelet[3409]: I0130 14:03:34.484412 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cni-path" (OuterVolumeSpecName: "cni-path") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.484869 kubelet[3409]: I0130 14:03:34.484445 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.485449 kubelet[3409]: I0130 14:03:34.485390 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:03:34.489354 kubelet[3409]: I0130 14:03:34.489188 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0cd4ef7-b86b-41cd-887c-cfc732c96893-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:03:34.491402 kubelet[3409]: I0130 14:03:34.491347 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:03:34.495476 kubelet[3409]: I0130 14:03:34.495398 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cd4ef7-b86b-41cd-887c-cfc732c96893-kube-api-access-9cswd" (OuterVolumeSpecName: "kube-api-access-9cswd") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "kube-api-access-9cswd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:03:34.495744 kubelet[3409]: I0130 14:03:34.495693 3409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cd4ef7-b86b-41cd-887c-cfc732c96893-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e0cd4ef7-b86b-41cd-887c-cfc732c96893" (UID: "e0cd4ef7-b86b-41cd-887c-cfc732c96893"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:03:34.549915 kubelet[3409]: I0130 14:03:34.549358 3409 scope.go:117] "RemoveContainer" containerID="8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05" Jan 30 14:03:34.558779 containerd[2015]: time="2025-01-30T14:03:34.558516909Z" level=info msg="RemoveContainer for \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\"" Jan 30 14:03:34.569871 systemd[1]: Removed slice kubepods-burstable-pode0cd4ef7_b86b_41cd_887c_cfc732c96893.slice - libcontainer container kubepods-burstable-pode0cd4ef7_b86b_41cd_887c_cfc732c96893.slice. Jan 30 14:03:34.570274 systemd[1]: kubepods-burstable-pode0cd4ef7_b86b_41cd_887c_cfc732c96893.slice: Consumed 14.639s CPU time. 
Jan 30 14:03:34.581255 containerd[2015]: time="2025-01-30T14:03:34.581183481Z" level=info msg="RemoveContainer for \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\" returns successfully" Jan 30 14:03:34.581636 kubelet[3409]: I0130 14:03:34.581588 3409 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-config-path\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.581723 kubelet[3409]: I0130 14:03:34.581637 3409 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-cgroup\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.581723 kubelet[3409]: I0130 14:03:34.581688 3409 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-xtables-lock\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.581723 kubelet[3409]: I0130 14:03:34.581711 3409 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-etc-cni-netd\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.581887 kubelet[3409]: I0130 14:03:34.581780 3409 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-hostproc\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.581887 kubelet[3409]: I0130 14:03:34.581804 3409 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-host-proc-sys-net\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.581887 kubelet[3409]: I0130 14:03:34.581825 3409 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/e0cd4ef7-b86b-41cd-887c-cfc732c96893-hubble-tls\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.581887 kubelet[3409]: I0130 14:03:34.581866 3409 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cni-path\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.582139 kubelet[3409]: I0130 14:03:34.581889 3409 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-bpf-maps\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.582139 kubelet[3409]: I0130 14:03:34.581909 3409 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9cswd\" (UniqueName: \"kubernetes.io/projected/e0cd4ef7-b86b-41cd-887c-cfc732c96893-kube-api-access-9cswd\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.582139 kubelet[3409]: I0130 14:03:34.581950 3409 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-cilium-run\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.582139 kubelet[3409]: I0130 14:03:34.581978 3409 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-host-proc-sys-kernel\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.582139 kubelet[3409]: I0130 14:03:34.582064 3409 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0cd4ef7-b86b-41cd-887c-cfc732c96893-lib-modules\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.582139 kubelet[3409]: I0130 14:03:34.582084 3409 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/e0cd4ef7-b86b-41cd-887c-cfc732c96893-clustermesh-secrets\") on node \"ip-172-31-25-125\" DevicePath \"\"" Jan 30 14:03:34.582752 kubelet[3409]: I0130 14:03:34.582635 3409 scope.go:117] "RemoveContainer" containerID="c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa" Jan 30 14:03:34.588393 systemd[1]: Removed slice kubepods-besteffort-pod9f8d738c_b401_47e2_a084_b97a72c155b3.slice - libcontainer container kubepods-besteffort-pod9f8d738c_b401_47e2_a084_b97a72c155b3.slice. Jan 30 14:03:34.593055 containerd[2015]: time="2025-01-30T14:03:34.591198825Z" level=info msg="RemoveContainer for \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\"" Jan 30 14:03:34.600181 containerd[2015]: time="2025-01-30T14:03:34.600116361Z" level=info msg="RemoveContainer for \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\" returns successfully" Jan 30 14:03:34.601158 kubelet[3409]: I0130 14:03:34.601100 3409 scope.go:117] "RemoveContainer" containerID="fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867" Jan 30 14:03:34.606674 containerd[2015]: time="2025-01-30T14:03:34.606349557Z" level=info msg="RemoveContainer for \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\"" Jan 30 14:03:34.616770 containerd[2015]: time="2025-01-30T14:03:34.615950445Z" level=info msg="RemoveContainer for \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\" returns successfully" Jan 30 14:03:34.617445 kubelet[3409]: I0130 14:03:34.617388 3409 scope.go:117] "RemoveContainer" containerID="d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a" Jan 30 14:03:34.620384 containerd[2015]: time="2025-01-30T14:03:34.619931277Z" level=info msg="RemoveContainer for \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\"" Jan 30 14:03:34.626190 containerd[2015]: time="2025-01-30T14:03:34.626119485Z" level=info msg="RemoveContainer for 
\"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\" returns successfully" Jan 30 14:03:34.628532 kubelet[3409]: I0130 14:03:34.628353 3409 scope.go:117] "RemoveContainer" containerID="13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a" Jan 30 14:03:34.632933 containerd[2015]: time="2025-01-30T14:03:34.632878881Z" level=info msg="RemoveContainer for \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\"" Jan 30 14:03:34.638876 containerd[2015]: time="2025-01-30T14:03:34.638801181Z" level=info msg="RemoveContainer for \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\" returns successfully" Jan 30 14:03:34.639250 kubelet[3409]: I0130 14:03:34.639210 3409 scope.go:117] "RemoveContainer" containerID="8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05" Jan 30 14:03:34.639626 containerd[2015]: time="2025-01-30T14:03:34.639548517Z" level=error msg="ContainerStatus for \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\": not found" Jan 30 14:03:34.639963 kubelet[3409]: E0130 14:03:34.639852 3409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\": not found" containerID="8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05" Jan 30 14:03:34.640198 kubelet[3409]: I0130 14:03:34.639978 3409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05"} err="failed to get container status \"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"8ecc01aa9ae938cdd01ab53c4370e8fed4aac521adab588800b6d1373baf6a05\": not found" Jan 30 14:03:34.640275 kubelet[3409]: I0130 14:03:34.640212 3409 scope.go:117] "RemoveContainer" containerID="c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa" Jan 30 14:03:34.640742 containerd[2015]: time="2025-01-30T14:03:34.640619853Z" level=error msg="ContainerStatus for \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\": not found" Jan 30 14:03:34.641081 kubelet[3409]: E0130 14:03:34.641039 3409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\": not found" containerID="c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa" Jan 30 14:03:34.641176 kubelet[3409]: I0130 14:03:34.641092 3409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa"} err="failed to get container status \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2bb381294409317a62c3e48523f06e90c5e98fe6c3820497c9df0e85845aeaa\": not found" Jan 30 14:03:34.641176 kubelet[3409]: I0130 14:03:34.641126 3409 scope.go:117] "RemoveContainer" containerID="fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867" Jan 30 14:03:34.641918 containerd[2015]: time="2025-01-30T14:03:34.641699937Z" level=error msg="ContainerStatus for \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\": not found" Jan 30 14:03:34.642084 kubelet[3409]: E0130 14:03:34.641925 3409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\": not found" containerID="fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867" Jan 30 14:03:34.642084 kubelet[3409]: I0130 14:03:34.641966 3409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867"} err="failed to get container status \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd1e22958af83c18e3a95bb0d501d243002f687039efe6e0760a2a647b210867\": not found" Jan 30 14:03:34.642084 kubelet[3409]: I0130 14:03:34.642041 3409 scope.go:117] "RemoveContainer" containerID="d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a" Jan 30 14:03:34.642966 kubelet[3409]: E0130 14:03:34.642861 3409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\": not found" containerID="d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a" Jan 30 14:03:34.643085 containerd[2015]: time="2025-01-30T14:03:34.642429909Z" level=error msg="ContainerStatus for \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\": not found" Jan 30 14:03:34.643343 kubelet[3409]: I0130 14:03:34.643183 3409 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a"} err="failed to get container status \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d33e9b80b4e0c5ec34054dd2207903d405069e5a3cd561bd254f801ce778086a\": not found" Jan 30 14:03:34.643343 kubelet[3409]: I0130 14:03:34.643224 3409 scope.go:117] "RemoveContainer" containerID="13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a" Jan 30 14:03:34.643834 containerd[2015]: time="2025-01-30T14:03:34.643686321Z" level=error msg="ContainerStatus for \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\": not found" Jan 30 14:03:34.644218 kubelet[3409]: E0130 14:03:34.644105 3409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\": not found" containerID="13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a" Jan 30 14:03:34.644490 kubelet[3409]: I0130 14:03:34.644305 3409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a"} err="failed to get container status \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\": rpc error: code = NotFound desc = an error occurred when try to find container \"13313fa8da33d9c24bfefb30dfeaacbfb2a8f23349934488cec735a266f0663a\": not found" Jan 30 14:03:34.644490 kubelet[3409]: I0130 14:03:34.644343 3409 scope.go:117] "RemoveContainer" containerID="c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99" Jan 30 14:03:34.646637 containerd[2015]: 
time="2025-01-30T14:03:34.646478241Z" level=info msg="RemoveContainer for \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\"" Jan 30 14:03:34.652532 containerd[2015]: time="2025-01-30T14:03:34.652464549Z" level=info msg="RemoveContainer for \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\" returns successfully" Jan 30 14:03:34.652911 kubelet[3409]: I0130 14:03:34.652832 3409 scope.go:117] "RemoveContainer" containerID="c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99" Jan 30 14:03:34.653580 containerd[2015]: time="2025-01-30T14:03:34.653261325Z" level=error msg="ContainerStatus for \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\": not found" Jan 30 14:03:34.653702 kubelet[3409]: E0130 14:03:34.653535 3409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\": not found" containerID="c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99" Jan 30 14:03:34.653886 kubelet[3409]: I0130 14:03:34.653817 3409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99"} err="failed to get container status \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4caa99f0070dde6755f007dd9bb3e2e89cc382bd8568492326b64e207a35c99\": not found" Jan 30 14:03:35.005559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7-rootfs.mount: Deactivated successfully. 
Jan 30 14:03:35.005734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982-rootfs.mount: Deactivated successfully. Jan 30 14:03:35.005870 systemd[1]: var-lib-kubelet-pods-9f8d738c\x2db401\x2d47e2\x2da084\x2db97a72c155b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv5vsp.mount: Deactivated successfully. Jan 30 14:03:35.006058 systemd[1]: var-lib-kubelet-pods-e0cd4ef7\x2db86b\x2d41cd\x2d887c\x2dcfc732c96893-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9cswd.mount: Deactivated successfully. Jan 30 14:03:35.006202 systemd[1]: var-lib-kubelet-pods-e0cd4ef7\x2db86b\x2d41cd\x2d887c\x2dcfc732c96893-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 14:03:35.006348 systemd[1]: var-lib-kubelet-pods-e0cd4ef7\x2db86b\x2d41cd\x2d887c\x2dcfc732c96893-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 14:03:35.095301 kubelet[3409]: I0130 14:03:35.095238 3409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8d738c-b401-47e2-a084-b97a72c155b3" path="/var/lib/kubelet/pods/9f8d738c-b401-47e2-a084-b97a72c155b3/volumes" Jan 30 14:03:35.096340 kubelet[3409]: I0130 14:03:35.096294 3409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0cd4ef7-b86b-41cd-887c-cfc732c96893" path="/var/lib/kubelet/pods/e0cd4ef7-b86b-41cd-887c-cfc732c96893/volumes" Jan 30 14:03:35.937869 sshd[5035]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:35.942950 systemd[1]: sshd@27-172.31.25.125:22-139.178.89.65:36030.service: Deactivated successfully. Jan 30 14:03:35.947229 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 14:03:35.947821 systemd[1]: session-28.scope: Consumed 1.921s CPU time. Jan 30 14:03:35.950769 systemd-logind[1996]: Session 28 logged out. Waiting for processes to exit. Jan 30 14:03:35.953470 systemd-logind[1996]: Removed session 28. 
Jan 30 14:03:35.975509 systemd[1]: Started sshd@28-172.31.25.125:22-139.178.89.65:36038.service - OpenSSH per-connection server daemon (139.178.89.65:36038). Jan 30 14:03:36.155390 sshd[5200]: Accepted publickey for core from 139.178.89.65 port 36038 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:36.158096 sshd[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:36.165914 systemd-logind[1996]: New session 29 of user core. Jan 30 14:03:36.176258 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 30 14:03:37.048941 ntpd[1990]: Deleting interface #12 lxc_health, fe80::accd:30ff:feb1:747b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs Jan 30 14:03:37.049504 ntpd[1990]: 30 Jan 14:03:37 ntpd[1990]: Deleting interface #12 lxc_health, fe80::accd:30ff:feb1:747b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs Jan 30 14:03:37.066075 containerd[2015]: time="2025-01-30T14:03:37.065185365Z" level=info msg="StopPodSandbox for \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\"" Jan 30 14:03:37.066075 containerd[2015]: time="2025-01-30T14:03:37.065322585Z" level=info msg="TearDown network for sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" successfully" Jan 30 14:03:37.066075 containerd[2015]: time="2025-01-30T14:03:37.065346861Z" level=info msg="StopPodSandbox for \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" returns successfully" Jan 30 14:03:37.068538 containerd[2015]: time="2025-01-30T14:03:37.067781529Z" level=info msg="RemovePodSandbox for \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\"" Jan 30 14:03:37.068538 containerd[2015]: time="2025-01-30T14:03:37.067860261Z" level=info msg="Forcibly stopping sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\"" Jan 30 14:03:37.068538 containerd[2015]: 
time="2025-01-30T14:03:37.068014545Z" level=info msg="TearDown network for sandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" successfully" Jan 30 14:03:37.075388 containerd[2015]: time="2025-01-30T14:03:37.075149217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:03:37.075388 containerd[2015]: time="2025-01-30T14:03:37.075232893Z" level=info msg="RemovePodSandbox \"b6feac8f8847c86cc314188cdb0f74fbe419bda18f377c36f341e3e876b13982\" returns successfully" Jan 30 14:03:37.076567 containerd[2015]: time="2025-01-30T14:03:37.076084557Z" level=info msg="StopPodSandbox for \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\"" Jan 30 14:03:37.076567 containerd[2015]: time="2025-01-30T14:03:37.076236093Z" level=info msg="TearDown network for sandbox \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\" successfully" Jan 30 14:03:37.076567 containerd[2015]: time="2025-01-30T14:03:37.076260069Z" level=info msg="StopPodSandbox for \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\" returns successfully" Jan 30 14:03:37.077417 containerd[2015]: time="2025-01-30T14:03:37.077339649Z" level=info msg="RemovePodSandbox for \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\"" Jan 30 14:03:37.077417 containerd[2015]: time="2025-01-30T14:03:37.077392005Z" level=info msg="Forcibly stopping sandbox \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\"" Jan 30 14:03:37.077601 containerd[2015]: time="2025-01-30T14:03:37.077493273Z" level=info msg="TearDown network for sandbox \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\" successfully" Jan 30 14:03:37.085142 containerd[2015]: time="2025-01-30T14:03:37.084304785Z" level=warning msg="Failed to get 
podSandbox status for container event for sandboxID \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:03:37.085142 containerd[2015]: time="2025-01-30T14:03:37.084385497Z" level=info msg="RemovePodSandbox \"abeab54218fcc67f9bd9876e78fadd23881e328e2fd26aadd12cd2d2a1e1c9f7\" returns successfully" Jan 30 14:03:37.258657 kubelet[3409]: E0130 14:03:37.258458 3409 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 14:03:37.587810 sshd[5200]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:37.596803 systemd[1]: session-29.scope: Deactivated successfully. Jan 30 14:03:37.597184 systemd[1]: session-29.scope: Consumed 1.205s CPU time. Jan 30 14:03:37.599376 systemd[1]: sshd@28-172.31.25.125:22-139.178.89.65:36038.service: Deactivated successfully. Jan 30 14:03:37.613082 systemd-logind[1996]: Session 29 logged out. Waiting for processes to exit. 
Jan 30 14:03:37.630683 kubelet[3409]: E0130 14:03:37.628732 3409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f8d738c-b401-47e2-a084-b97a72c155b3" containerName="cilium-operator" Jan 30 14:03:37.630683 kubelet[3409]: E0130 14:03:37.628778 3409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e0cd4ef7-b86b-41cd-887c-cfc732c96893" containerName="mount-cgroup" Jan 30 14:03:37.630683 kubelet[3409]: E0130 14:03:37.628797 3409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e0cd4ef7-b86b-41cd-887c-cfc732c96893" containerName="apply-sysctl-overwrites" Jan 30 14:03:37.630683 kubelet[3409]: E0130 14:03:37.628812 3409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e0cd4ef7-b86b-41cd-887c-cfc732c96893" containerName="mount-bpf-fs" Jan 30 14:03:37.630683 kubelet[3409]: E0130 14:03:37.628827 3409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e0cd4ef7-b86b-41cd-887c-cfc732c96893" containerName="cilium-agent" Jan 30 14:03:37.630683 kubelet[3409]: E0130 14:03:37.628842 3409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e0cd4ef7-b86b-41cd-887c-cfc732c96893" containerName="clean-cilium-state" Jan 30 14:03:37.630683 kubelet[3409]: I0130 14:03:37.628886 3409 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8d738c-b401-47e2-a084-b97a72c155b3" containerName="cilium-operator" Jan 30 14:03:37.630683 kubelet[3409]: I0130 14:03:37.628906 3409 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cd4ef7-b86b-41cd-887c-cfc732c96893" containerName="cilium-agent" Jan 30 14:03:37.642501 systemd[1]: Started sshd@29-172.31.25.125:22-139.178.89.65:36054.service - OpenSSH per-connection server daemon (139.178.89.65:36054). Jan 30 14:03:37.647458 systemd-logind[1996]: Removed session 29. 
Jan 30 14:03:37.675567 systemd[1]: Created slice kubepods-burstable-pod89501e0b_addd_4bae_b2f0_f1f8ac9c19bf.slice - libcontainer container kubepods-burstable-pod89501e0b_addd_4bae_b2f0_f1f8ac9c19bf.slice. Jan 30 14:03:37.705978 kubelet[3409]: I0130 14:03:37.705186 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-host-proc-sys-kernel\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.705978 kubelet[3409]: I0130 14:03:37.705255 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-host-proc-sys-net\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.705978 kubelet[3409]: I0130 14:03:37.705298 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-cilium-cgroup\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.705978 kubelet[3409]: I0130 14:03:37.705333 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-cni-path\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.705978 kubelet[3409]: I0130 14:03:37.705376 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtt5t\" (UniqueName: \"kubernetes.io/projected/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-kube-api-access-xtt5t\") pod 
\"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.706410 kubelet[3409]: I0130 14:03:37.705423 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-cilium-config-path\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.706410 kubelet[3409]: I0130 14:03:37.705462 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-hostproc\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.706410 kubelet[3409]: I0130 14:03:37.705501 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-lib-modules\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.706410 kubelet[3409]: I0130 14:03:37.705541 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-clustermesh-secrets\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.706410 kubelet[3409]: I0130 14:03:37.705577 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-cilium-ipsec-secrets\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 
14:03:37.706410 kubelet[3409]: I0130 14:03:37.705617 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-xtables-lock\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.706702 kubelet[3409]: I0130 14:03:37.705651 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-hubble-tls\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.706702 kubelet[3409]: I0130 14:03:37.705690 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-bpf-maps\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.706702 kubelet[3409]: I0130 14:03:37.705729 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-cilium-run\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.706702 kubelet[3409]: I0130 14:03:37.705765 3409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89501e0b-addd-4bae-b2f0-f1f8ac9c19bf-etc-cni-netd\") pod \"cilium-8sv86\" (UID: \"89501e0b-addd-4bae-b2f0-f1f8ac9c19bf\") " pod="kube-system/cilium-8sv86" Jan 30 14:03:37.889553 sshd[5213]: Accepted publickey for core from 139.178.89.65 port 36054 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 
14:03:37.893415 sshd[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:37.902158 systemd-logind[1996]: New session 30 of user core. Jan 30 14:03:37.908360 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 30 14:03:37.991359 containerd[2015]: time="2025-01-30T14:03:37.991283690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8sv86,Uid:89501e0b-addd-4bae-b2f0-f1f8ac9c19bf,Namespace:kube-system,Attempt:0,}" Jan 30 14:03:38.039379 containerd[2015]: time="2025-01-30T14:03:38.039226702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:03:38.040793 containerd[2015]: time="2025-01-30T14:03:38.040457542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:03:38.040793 containerd[2015]: time="2025-01-30T14:03:38.040499470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:38.040793 containerd[2015]: time="2025-01-30T14:03:38.040660594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:03:38.045830 sshd[5213]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:38.060434 systemd[1]: sshd@29-172.31.25.125:22-139.178.89.65:36054.service: Deactivated successfully. Jan 30 14:03:38.069873 systemd[1]: session-30.scope: Deactivated successfully. Jan 30 14:03:38.077587 systemd-logind[1996]: Session 30 logged out. Waiting for processes to exit. Jan 30 14:03:38.089788 systemd-logind[1996]: Removed session 30. Jan 30 14:03:38.100330 systemd[1]: Started cri-containerd-2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de.scope - libcontainer container 2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de. 
Jan 30 14:03:38.105088 systemd[1]: Started sshd@30-172.31.25.125:22-139.178.89.65:36060.service - OpenSSH per-connection server daemon (139.178.89.65:36060). Jan 30 14:03:38.155931 containerd[2015]: time="2025-01-30T14:03:38.155718034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8sv86,Uid:89501e0b-addd-4bae-b2f0-f1f8ac9c19bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\"" Jan 30 14:03:38.163213 containerd[2015]: time="2025-01-30T14:03:38.162486899Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:03:38.186885 containerd[2015]: time="2025-01-30T14:03:38.186804755Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a61621c4326f6fd11dc9a72c3c29d0638df37d2d9b37a43eb94ef7726cdb6a3\"" Jan 30 14:03:38.188133 containerd[2015]: time="2025-01-30T14:03:38.187949879Z" level=info msg="StartContainer for \"5a61621c4326f6fd11dc9a72c3c29d0638df37d2d9b37a43eb94ef7726cdb6a3\"" Jan 30 14:03:38.237364 systemd[1]: Started cri-containerd-5a61621c4326f6fd11dc9a72c3c29d0638df37d2d9b37a43eb94ef7726cdb6a3.scope - libcontainer container 5a61621c4326f6fd11dc9a72c3c29d0638df37d2d9b37a43eb94ef7726cdb6a3. Jan 30 14:03:38.289146 containerd[2015]: time="2025-01-30T14:03:38.289082243Z" level=info msg="StartContainer for \"5a61621c4326f6fd11dc9a72c3c29d0638df37d2d9b37a43eb94ef7726cdb6a3\" returns successfully" Jan 30 14:03:38.305877 sshd[5251]: Accepted publickey for core from 139.178.89.65 port 36060 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:03:38.307344 systemd[1]: cri-containerd-5a61621c4326f6fd11dc9a72c3c29d0638df37d2d9b37a43eb94ef7726cdb6a3.scope: Deactivated successfully. 
Jan 30 14:03:38.312876 sshd[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:38.324380 systemd-logind[1996]: New session 31 of user core. Jan 30 14:03:38.330712 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 30 14:03:38.370197 containerd[2015]: time="2025-01-30T14:03:38.370077252Z" level=info msg="shim disconnected" id=5a61621c4326f6fd11dc9a72c3c29d0638df37d2d9b37a43eb94ef7726cdb6a3 namespace=k8s.io Jan 30 14:03:38.370749 containerd[2015]: time="2025-01-30T14:03:38.370479180Z" level=warning msg="cleaning up after shim disconnected" id=5a61621c4326f6fd11dc9a72c3c29d0638df37d2d9b37a43eb94ef7726cdb6a3 namespace=k8s.io Jan 30 14:03:38.370749 containerd[2015]: time="2025-01-30T14:03:38.370509768Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:03:38.594096 containerd[2015]: time="2025-01-30T14:03:38.593876785Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:03:38.620405 containerd[2015]: time="2025-01-30T14:03:38.620324725Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"86f494a58d7d29ddf59e62c8db3b12163aea1510a6b41d2a3767c45d5c3d8d03\"" Jan 30 14:03:38.623038 containerd[2015]: time="2025-01-30T14:03:38.621415753Z" level=info msg="StartContainer for \"86f494a58d7d29ddf59e62c8db3b12163aea1510a6b41d2a3767c45d5c3d8d03\"" Jan 30 14:03:38.670342 systemd[1]: Started cri-containerd-86f494a58d7d29ddf59e62c8db3b12163aea1510a6b41d2a3767c45d5c3d8d03.scope - libcontainer container 86f494a58d7d29ddf59e62c8db3b12163aea1510a6b41d2a3767c45d5c3d8d03. 
Jan 30 14:03:38.719101 containerd[2015]: time="2025-01-30T14:03:38.719020993Z" level=info msg="StartContainer for \"86f494a58d7d29ddf59e62c8db3b12163aea1510a6b41d2a3767c45d5c3d8d03\" returns successfully" Jan 30 14:03:38.731504 systemd[1]: cri-containerd-86f494a58d7d29ddf59e62c8db3b12163aea1510a6b41d2a3767c45d5c3d8d03.scope: Deactivated successfully. Jan 30 14:03:38.774875 containerd[2015]: time="2025-01-30T14:03:38.774788258Z" level=info msg="shim disconnected" id=86f494a58d7d29ddf59e62c8db3b12163aea1510a6b41d2a3767c45d5c3d8d03 namespace=k8s.io Jan 30 14:03:38.774875 containerd[2015]: time="2025-01-30T14:03:38.774864278Z" level=warning msg="cleaning up after shim disconnected" id=86f494a58d7d29ddf59e62c8db3b12163aea1510a6b41d2a3767c45d5c3d8d03 namespace=k8s.io Jan 30 14:03:38.775333 containerd[2015]: time="2025-01-30T14:03:38.774885998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:03:39.542245 kubelet[3409]: I0130 14:03:39.542066 3409 setters.go:600] "Node became not ready" node="ip-172-31-25-125" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T14:03:39Z","lastTransitionTime":"2025-01-30T14:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 14:03:39.609294 containerd[2015]: time="2025-01-30T14:03:39.605145626Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:03:39.650127 containerd[2015]: time="2025-01-30T14:03:39.650049410Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f3e80d25ec0fc9ae29407b71fe64e33464c19f72ab15a9b0d3913990ecd1ea83\"" Jan 30 14:03:39.651121 containerd[2015]: 
time="2025-01-30T14:03:39.651061286Z" level=info msg="StartContainer for \"f3e80d25ec0fc9ae29407b71fe64e33464c19f72ab15a9b0d3913990ecd1ea83\"" Jan 30 14:03:39.715334 systemd[1]: Started cri-containerd-f3e80d25ec0fc9ae29407b71fe64e33464c19f72ab15a9b0d3913990ecd1ea83.scope - libcontainer container f3e80d25ec0fc9ae29407b71fe64e33464c19f72ab15a9b0d3913990ecd1ea83. Jan 30 14:03:39.768704 containerd[2015]: time="2025-01-30T14:03:39.768531170Z" level=info msg="StartContainer for \"f3e80d25ec0fc9ae29407b71fe64e33464c19f72ab15a9b0d3913990ecd1ea83\" returns successfully" Jan 30 14:03:39.771679 systemd[1]: cri-containerd-f3e80d25ec0fc9ae29407b71fe64e33464c19f72ab15a9b0d3913990ecd1ea83.scope: Deactivated successfully. Jan 30 14:03:39.816316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3e80d25ec0fc9ae29407b71fe64e33464c19f72ab15a9b0d3913990ecd1ea83-rootfs.mount: Deactivated successfully. Jan 30 14:03:39.822704 containerd[2015]: time="2025-01-30T14:03:39.822623439Z" level=info msg="shim disconnected" id=f3e80d25ec0fc9ae29407b71fe64e33464c19f72ab15a9b0d3913990ecd1ea83 namespace=k8s.io Jan 30 14:03:39.822896 containerd[2015]: time="2025-01-30T14:03:39.822704811Z" level=warning msg="cleaning up after shim disconnected" id=f3e80d25ec0fc9ae29407b71fe64e33464c19f72ab15a9b0d3913990ecd1ea83 namespace=k8s.io Jan 30 14:03:39.822896 containerd[2015]: time="2025-01-30T14:03:39.822729303Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:03:40.608045 containerd[2015]: time="2025-01-30T14:03:40.607754283Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 14:03:40.646206 containerd[2015]: time="2025-01-30T14:03:40.645915567Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container 
id \"cb1e6b463028f2f75f79dea40ee26d21085a9face89527175058134670e9ff08\"" Jan 30 14:03:40.649285 containerd[2015]: time="2025-01-30T14:03:40.649025307Z" level=info msg="StartContainer for \"cb1e6b463028f2f75f79dea40ee26d21085a9face89527175058134670e9ff08\"" Jan 30 14:03:40.705572 systemd[1]: Started cri-containerd-cb1e6b463028f2f75f79dea40ee26d21085a9face89527175058134670e9ff08.scope - libcontainer container cb1e6b463028f2f75f79dea40ee26d21085a9face89527175058134670e9ff08. Jan 30 14:03:40.747473 systemd[1]: cri-containerd-cb1e6b463028f2f75f79dea40ee26d21085a9face89527175058134670e9ff08.scope: Deactivated successfully. Jan 30 14:03:40.753321 containerd[2015]: time="2025-01-30T14:03:40.752666031Z" level=info msg="StartContainer for \"cb1e6b463028f2f75f79dea40ee26d21085a9face89527175058134670e9ff08\" returns successfully" Jan 30 14:03:40.795040 containerd[2015]: time="2025-01-30T14:03:40.794696260Z" level=info msg="shim disconnected" id=cb1e6b463028f2f75f79dea40ee26d21085a9face89527175058134670e9ff08 namespace=k8s.io Jan 30 14:03:40.795040 containerd[2015]: time="2025-01-30T14:03:40.794769184Z" level=warning msg="cleaning up after shim disconnected" id=cb1e6b463028f2f75f79dea40ee26d21085a9face89527175058134670e9ff08 namespace=k8s.io Jan 30 14:03:40.795040 containerd[2015]: time="2025-01-30T14:03:40.794792116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:03:40.816965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb1e6b463028f2f75f79dea40ee26d21085a9face89527175058134670e9ff08-rootfs.mount: Deactivated successfully. 
Jan 30 14:03:41.620019 containerd[2015]: time="2025-01-30T14:03:41.615884548Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 14:03:41.654021 containerd[2015]: time="2025-01-30T14:03:41.653911996Z" level=info msg="CreateContainer within sandbox \"2723ad8e1f715be8b3230621521bd9e8bff382dca241517397ffeb6c97ba76de\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6bb33af464fd2a9bf91f7338589c65a7d7bebb0e1f380b98ab053ae564868304\""
Jan 30 14:03:41.657165 containerd[2015]: time="2025-01-30T14:03:41.657081580Z" level=info msg="StartContainer for \"6bb33af464fd2a9bf91f7338589c65a7d7bebb0e1f380b98ab053ae564868304\""
Jan 30 14:03:41.713302 systemd[1]: Started cri-containerd-6bb33af464fd2a9bf91f7338589c65a7d7bebb0e1f380b98ab053ae564868304.scope - libcontainer container 6bb33af464fd2a9bf91f7338589c65a7d7bebb0e1f380b98ab053ae564868304.
Jan 30 14:03:41.765858 containerd[2015]: time="2025-01-30T14:03:41.765627412Z" level=info msg="StartContainer for \"6bb33af464fd2a9bf91f7338589c65a7d7bebb0e1f380b98ab053ae564868304\" returns successfully"
Jan 30 14:03:42.088448 kubelet[3409]: E0130 14:03:42.088298 3409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-k8dmt" podUID="52b10fbf-e5e9-43af-bbe6-1d2c428546f0"
Jan 30 14:03:42.529110 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 14:03:45.028975 systemd[1]: run-containerd-runc-k8s.io-6bb33af464fd2a9bf91f7338589c65a7d7bebb0e1f380b98ab053ae564868304-runc.Oisi0h.mount: Deactivated successfully.
Jan 30 14:03:46.762566 (udev-worker)[6044]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 14:03:46.763120 (udev-worker)[6043]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 14:03:46.772225 systemd-networkd[1935]: lxc_health: Link UP
Jan 30 14:03:46.795124 systemd-networkd[1935]: lxc_health: Gained carrier
Jan 30 14:03:47.364445 systemd[1]: run-containerd-runc-k8s.io-6bb33af464fd2a9bf91f7338589c65a7d7bebb0e1f380b98ab053ae564868304-runc.zOTHIU.mount: Deactivated successfully.
Jan 30 14:03:48.029305 kubelet[3409]: I0130 14:03:48.028171 3409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8sv86" podStartSLOduration=11.028148276 podStartE2EDuration="11.028148276s" podCreationTimestamp="2025-01-30 14:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:03:42.701791517 +0000 UTC m=+125.891349926" watchObservedRunningTime="2025-01-30 14:03:48.028148276 +0000 UTC m=+131.217706709"
Jan 30 14:03:48.248210 systemd-networkd[1935]: lxc_health: Gained IPv6LL
Jan 30 14:03:49.764766 kubelet[3409]: E0130 14:03:49.764711 3409 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45768->127.0.0.1:38037: write tcp 127.0.0.1:45768->127.0.0.1:38037: write: broken pipe
Jan 30 14:03:51.049061 ntpd[1990]: Listen normally on 15 lxc_health [fe80::fcfc:25ff:feff:2d74%14]:123
Jan 30 14:03:51.049594 ntpd[1990]: 30 Jan 14:03:51 ntpd[1990]: Listen normally on 15 lxc_health [fe80::fcfc:25ff:feff:2d74%14]:123
Jan 30 14:03:52.036423 sshd[5251]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:52.044399 systemd[1]: sshd@30-172.31.25.125:22-139.178.89.65:36060.service: Deactivated successfully.
Jan 30 14:03:52.049976 systemd[1]: session-31.scope: Deactivated successfully.
Jan 30 14:03:52.056546 systemd-logind[1996]: Session 31 logged out. Waiting for processes to exit.
Jan 30 14:03:52.059788 systemd-logind[1996]: Removed session 31.
Jan 30 14:04:06.127499 systemd[1]: cri-containerd-8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda.scope: Deactivated successfully.
Jan 30 14:04:06.129529 systemd[1]: cri-containerd-8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda.scope: Consumed 4.199s CPU time, 18.1M memory peak, 0B memory swap peak.
Jan 30 14:04:06.165071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda-rootfs.mount: Deactivated successfully.
Jan 30 14:04:06.187702 containerd[2015]: time="2025-01-30T14:04:06.187583846Z" level=info msg="shim disconnected" id=8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda namespace=k8s.io
Jan 30 14:04:06.187702 containerd[2015]: time="2025-01-30T14:04:06.187658798Z" level=warning msg="cleaning up after shim disconnected" id=8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda namespace=k8s.io
Jan 30 14:04:06.187702 containerd[2015]: time="2025-01-30T14:04:06.187679402Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:04:06.698263 kubelet[3409]: I0130 14:04:06.698098 3409 scope.go:117] "RemoveContainer" containerID="8500c074e37d0a2a3cffd786b005a6c98062d68f598ad34619840bd02bde6dda"
Jan 30 14:04:06.701120 containerd[2015]: time="2025-01-30T14:04:06.701062732Z" level=info msg="CreateContainer within sandbox \"99083c6d8e82c27b607abd3e8483092d34f1153468bdcf50ca493adecbc6222b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 14:04:06.726687 containerd[2015]: time="2025-01-30T14:04:06.726548404Z" level=info msg="CreateContainer within sandbox \"99083c6d8e82c27b607abd3e8483092d34f1153468bdcf50ca493adecbc6222b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b985193b9ba5f75aa7ef7fd1cc16bb9f172f7df2cd506fce0543b3b539023373\""
Jan 30 14:04:06.727558 containerd[2015]: time="2025-01-30T14:04:06.727519408Z" level=info msg="StartContainer for \"b985193b9ba5f75aa7ef7fd1cc16bb9f172f7df2cd506fce0543b3b539023373\""
Jan 30 14:04:06.784315 systemd[1]: Started cri-containerd-b985193b9ba5f75aa7ef7fd1cc16bb9f172f7df2cd506fce0543b3b539023373.scope - libcontainer container b985193b9ba5f75aa7ef7fd1cc16bb9f172f7df2cd506fce0543b3b539023373.
Jan 30 14:04:06.856029 containerd[2015]: time="2025-01-30T14:04:06.855868577Z" level=info msg="StartContainer for \"b985193b9ba5f75aa7ef7fd1cc16bb9f172f7df2cd506fce0543b3b539023373\" returns successfully"
Jan 30 14:04:09.271504 kubelet[3409]: E0130 14:04:09.270946 3409 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-125?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 14:04:12.097563 systemd[1]: cri-containerd-af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53.scope: Deactivated successfully.
Jan 30 14:04:12.100317 systemd[1]: cri-containerd-af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53.scope: Consumed 3.684s CPU time, 16.0M memory peak, 0B memory swap peak.
Jan 30 14:04:12.138675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53-rootfs.mount: Deactivated successfully.
Jan 30 14:04:12.154620 containerd[2015]: time="2025-01-30T14:04:12.154504111Z" level=info msg="shim disconnected" id=af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53 namespace=k8s.io
Jan 30 14:04:12.154620 containerd[2015]: time="2025-01-30T14:04:12.154605379Z" level=warning msg="cleaning up after shim disconnected" id=af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53 namespace=k8s.io
Jan 30 14:04:12.155734 containerd[2015]: time="2025-01-30T14:04:12.154628335Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:04:12.174745 containerd[2015]: time="2025-01-30T14:04:12.174670027Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:04:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 14:04:12.720070 kubelet[3409]: I0130 14:04:12.719239 3409 scope.go:117] "RemoveContainer" containerID="af56dfec3c5494afd156c153828feb5e7a60bfd6f356db0657c96e097eafaa53"
Jan 30 14:04:12.723719 containerd[2015]: time="2025-01-30T14:04:12.723543418Z" level=info msg="CreateContainer within sandbox \"b877e14dfdbe9ba92855967b4346e64e53e1cecd7ba9ebf9ff2091fa665fc94f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 30 14:04:12.752307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911477257.mount: Deactivated successfully.
Jan 30 14:04:12.757438 containerd[2015]: time="2025-01-30T14:04:12.757363750Z" level=info msg="CreateContainer within sandbox \"b877e14dfdbe9ba92855967b4346e64e53e1cecd7ba9ebf9ff2091fa665fc94f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a5f42ba66244e4ec59743aa59863af3e54bd1071a2c363ab98f1a6259fdb63d5\""
Jan 30 14:04:12.758298 containerd[2015]: time="2025-01-30T14:04:12.758253358Z" level=info msg="StartContainer for \"a5f42ba66244e4ec59743aa59863af3e54bd1071a2c363ab98f1a6259fdb63d5\""
Jan 30 14:04:12.814321 systemd[1]: Started cri-containerd-a5f42ba66244e4ec59743aa59863af3e54bd1071a2c363ab98f1a6259fdb63d5.scope - libcontainer container a5f42ba66244e4ec59743aa59863af3e54bd1071a2c363ab98f1a6259fdb63d5.
Jan 30 14:04:12.882135 containerd[2015]: time="2025-01-30T14:04:12.881781683Z" level=info msg="StartContainer for \"a5f42ba66244e4ec59743aa59863af3e54bd1071a2c363ab98f1a6259fdb63d5\" returns successfully"
Jan 30 14:04:19.271560 kubelet[3409]: E0130 14:04:19.271235 3409 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-125?timeout=10s\": context deadline exceeded"