Dec 13 01:54:55.194689 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 01:54:55.194734 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:54:55.194759 kernel: KASLR disabled due to lack of seed
Dec 13 01:54:55.194776 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:54:55.194792 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Dec 13 01:54:55.194808 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:54:55.194825 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 01:54:55.194841 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 01:54:55.194857 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:54:55.194872 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 01:54:55.194892 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:54:55.194908 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 01:54:55.194923 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 01:54:55.194940 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 01:54:55.194958 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:54:55.194978 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 01:54:55.194996 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 01:54:55.195012 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 01:54:55.195028 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 01:54:55.195045 kernel: printk: bootconsole [uart0] enabled
Dec 13 01:54:55.195061 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:54:55.195077 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:54:55.195094 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Dec 13 01:54:55.195110 kernel: Zone ranges:
Dec 13 01:54:55.195127 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 01:54:55.195143 kernel: DMA32 empty
Dec 13 01:54:55.195163 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 01:54:55.195180 kernel: Movable zone start for each node
Dec 13 01:54:55.195196 kernel: Early memory node ranges
Dec 13 01:54:55.195212 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 01:54:55.195228 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 01:54:55.195244 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 01:54:55.195260 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 01:54:55.195320 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 01:54:55.195342 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 01:54:55.195359 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 01:54:55.195376 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 01:54:55.195392 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:54:55.195415 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 01:54:55.195432 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:54:55.195456 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 01:54:55.195473 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:54:55.195491 kernel: psci: Trusted OS migration not required
Dec 13 01:54:55.195512 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:54:55.195530 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:54:55.195547 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:54:55.195565 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:54:55.195582 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:54:55.195599 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:54:55.195616 kernel: CPU features: detected: Spectre-v2
Dec 13 01:54:55.195634 kernel: CPU features: detected: Spectre-v3a
Dec 13 01:54:55.195651 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:54:55.195668 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 01:54:55.195685 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 01:54:55.195706 kernel: alternatives: applying boot alternatives
Dec 13 01:54:55.195726 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:54:55.195745 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:54:55.195763 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:54:55.195780 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:54:55.195798 kernel: Fallback order for Node 0: 0
Dec 13 01:54:55.195815 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 01:54:55.195832 kernel: Policy zone: Normal
Dec 13 01:54:55.195849 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:54:55.195866 kernel: software IO TLB: area num 2.
Dec 13 01:54:55.195884 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 01:54:55.195906 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Dec 13 01:54:55.195924 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:54:55.195942 kernel: trace event string verifier disabled
Dec 13 01:54:55.195959 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:54:55.195977 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:54:55.195995 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:54:55.196013 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:54:55.196031 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:54:55.196049 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:54:55.196066 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:54:55.196084 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:54:55.196105 kernel: GICv3: 96 SPIs implemented
Dec 13 01:54:55.196122 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:54:55.196139 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:54:55.196156 kernel: GICv3: GICv3 features: 16 PPIs
Dec 13 01:54:55.196174 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 01:54:55.196191 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 01:54:55.196208 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:54:55.196226 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:54:55.196243 kernel: GICv3: using LPI property table @0x00000004000d0000
Dec 13 01:54:55.196260 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 01:54:55.199370 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Dec 13 01:54:55.199405 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:54:55.199434 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 01:54:55.199454 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 01:54:55.199472 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 01:54:55.199491 kernel: Console: colour dummy device 80x25
Dec 13 01:54:55.199509 kernel: printk: console [tty1] enabled
Dec 13 01:54:55.199528 kernel: ACPI: Core revision 20230628
Dec 13 01:54:55.199546 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 01:54:55.199564 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:54:55.199583 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:54:55.199605 kernel: landlock: Up and running.
Dec 13 01:54:55.199623 kernel: SELinux: Initializing.
Dec 13 01:54:55.199641 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:54:55.199659 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:54:55.199677 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:54:55.199695 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:54:55.199714 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:54:55.199733 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:54:55.199751 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 01:54:55.199773 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 01:54:55.199791 kernel: Remapping and enabling EFI services.
Dec 13 01:54:55.199809 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:54:55.199827 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:54:55.199846 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 01:54:55.199864 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Dec 13 01:54:55.199882 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 01:54:55.199900 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:54:55.199918 kernel: SMP: Total of 2 processors activated.
Dec 13 01:54:55.199939 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:54:55.199957 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 01:54:55.199976 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:54:55.200006 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:54:55.200028 kernel: alternatives: applying system-wide alternatives
Dec 13 01:54:55.200047 kernel: devtmpfs: initialized
Dec 13 01:54:55.200065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:54:55.200084 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:54:55.200103 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:54:55.200128 kernel: SMBIOS 3.0.0 present.
Dec 13 01:54:55.200151 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 01:54:55.200170 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:54:55.200189 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:54:55.200208 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:54:55.200227 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:54:55.200245 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:54:55.200264 kernel: audit: type=2000 audit(0.321:1): state=initialized audit_enabled=0 res=1
Dec 13 01:54:55.200334 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:54:55.200354 kernel: cpuidle: using governor menu
Dec 13 01:54:55.200374 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:54:55.200392 kernel: ASID allocator initialised with 65536 entries
Dec 13 01:54:55.200411 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:54:55.200431 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:54:55.200450 kernel: Modules: 17520 pages in range for non-PLT usage
Dec 13 01:54:55.200469 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:54:55.200507 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:54:55.200535 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:54:55.200555 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:54:55.200575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:54:55.200594 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:54:55.200613 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:54:55.200632 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:54:55.200651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:54:55.200670 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:54:55.200688 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:54:55.200711 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:54:55.200730 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:54:55.200749 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:54:55.200768 kernel: ACPI: Interpreter enabled
Dec 13 01:54:55.200786 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:54:55.200805 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:54:55.200823 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 01:54:55.201267 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:54:55.202605 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:54:55.202808 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:54:55.203015 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 01:54:55.203218 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 01:54:55.203243 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 01:54:55.203263 kernel: acpiphp: Slot [1] registered
Dec 13 01:54:55.203302 kernel: acpiphp: Slot [2] registered
Dec 13 01:54:55.203323 kernel: acpiphp: Slot [3] registered
Dec 13 01:54:55.203348 kernel: acpiphp: Slot [4] registered
Dec 13 01:54:55.203367 kernel: acpiphp: Slot [5] registered
Dec 13 01:54:55.203386 kernel: acpiphp: Slot [6] registered
Dec 13 01:54:55.203404 kernel: acpiphp: Slot [7] registered
Dec 13 01:54:55.203422 kernel: acpiphp: Slot [8] registered
Dec 13 01:54:55.203441 kernel: acpiphp: Slot [9] registered
Dec 13 01:54:55.203459 kernel: acpiphp: Slot [10] registered
Dec 13 01:54:55.203477 kernel: acpiphp: Slot [11] registered
Dec 13 01:54:55.203496 kernel: acpiphp: Slot [12] registered
Dec 13 01:54:55.203514 kernel: acpiphp: Slot [13] registered
Dec 13 01:54:55.203537 kernel: acpiphp: Slot [14] registered
Dec 13 01:54:55.203555 kernel: acpiphp: Slot [15] registered
Dec 13 01:54:55.203573 kernel: acpiphp: Slot [16] registered
Dec 13 01:54:55.203591 kernel: acpiphp: Slot [17] registered
Dec 13 01:54:55.203610 kernel: acpiphp: Slot [18] registered
Dec 13 01:54:55.203628 kernel: acpiphp: Slot [19] registered
Dec 13 01:54:55.203646 kernel: acpiphp: Slot [20] registered
Dec 13 01:54:55.203665 kernel: acpiphp: Slot [21] registered
Dec 13 01:54:55.203683 kernel: acpiphp: Slot [22] registered
Dec 13 01:54:55.203705 kernel: acpiphp: Slot [23] registered
Dec 13 01:54:55.203724 kernel: acpiphp: Slot [24] registered
Dec 13 01:54:55.203742 kernel: acpiphp: Slot [25] registered
Dec 13 01:54:55.203760 kernel: acpiphp: Slot [26] registered
Dec 13 01:54:55.203778 kernel: acpiphp: Slot [27] registered
Dec 13 01:54:55.203797 kernel: acpiphp: Slot [28] registered
Dec 13 01:54:55.203815 kernel: acpiphp: Slot [29] registered
Dec 13 01:54:55.203834 kernel: acpiphp: Slot [30] registered
Dec 13 01:54:55.203853 kernel: acpiphp: Slot [31] registered
Dec 13 01:54:55.203872 kernel: PCI host bridge to bus 0000:00
Dec 13 01:54:55.204111 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 01:54:55.206394 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:54:55.206621 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:54:55.206812 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 01:54:55.207059 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 01:54:55.207315 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 01:54:55.207545 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 01:54:55.207768 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:54:55.207979 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 01:54:55.208188 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:54:55.208454 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:54:55.208688 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 01:54:55.208911 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 01:54:55.209118 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 01:54:55.209392 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:54:55.209599 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 01:54:55.210475 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 01:54:55.210694 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 01:54:55.210905 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 01:54:55.211114 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 01:54:55.211390 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 01:54:55.211582 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:54:55.211768 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:54:55.211795 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:54:55.211814 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:54:55.211834 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:54:55.211853 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:54:55.211872 kernel: iommu: Default domain type: Translated
Dec 13 01:54:55.211901 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:54:55.211920 kernel: efivars: Registered efivars operations
Dec 13 01:54:55.211939 kernel: vgaarb: loaded
Dec 13 01:54:55.211957 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:54:55.211977 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:54:55.211995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:54:55.212014 kernel: pnp: PnP ACPI init
Dec 13 01:54:55.212234 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 01:54:55.212268 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:54:55.212552 kernel: NET: Registered PF_INET protocol family
Dec 13 01:54:55.212572 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:54:55.212591 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:54:55.212610 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:54:55.212629 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:54:55.212648 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:54:55.212666 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:54:55.212685 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:54:55.212711 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:54:55.212729 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:54:55.212748 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:54:55.212766 kernel: kvm [1]: HYP mode not available
Dec 13 01:54:55.212784 kernel: Initialise system trusted keyrings
Dec 13 01:54:55.212803 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:54:55.212822 kernel: Key type asymmetric registered
Dec 13 01:54:55.212840 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:54:55.212858 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:54:55.212881 kernel: io scheduler mq-deadline registered
Dec 13 01:54:55.212899 kernel: io scheduler kyber registered
Dec 13 01:54:55.212918 kernel: io scheduler bfq registered
Dec 13 01:54:55.213141 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 01:54:55.213169 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:54:55.213188 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:54:55.213207 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 01:54:55.213225 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 01:54:55.213249 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:54:55.213269 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 01:54:55.215732 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 01:54:55.215759 kernel: printk: console [ttyS0] disabled
Dec 13 01:54:55.215779 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 01:54:55.215798 kernel: printk: console [ttyS0] enabled
Dec 13 01:54:55.215817 kernel: printk: bootconsole [uart0] disabled
Dec 13 01:54:55.215836 kernel: thunder_xcv, ver 1.0
Dec 13 01:54:55.215854 kernel: thunder_bgx, ver 1.0
Dec 13 01:54:55.215881 kernel: nicpf, ver 1.0
Dec 13 01:54:55.215900 kernel: nicvf, ver 1.0
Dec 13 01:54:55.216123 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:54:55.216366 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:54:54 UTC (1734054894)
Dec 13 01:54:55.216395 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:54:55.216414 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 01:54:55.216433 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:54:55.216452 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:54:55.216479 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:54:55.216515 kernel: Segment Routing with IPv6
Dec 13 01:54:55.216536 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:54:55.216554 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:54:55.216573 kernel: Key type dns_resolver registered
Dec 13 01:54:55.216592 kernel: registered taskstats version 1
Dec 13 01:54:55.216610 kernel: Loading compiled-in X.509 certificates
Dec 13 01:54:55.216629 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:54:55.216647 kernel: Key type .fscrypt registered
Dec 13 01:54:55.216671 kernel: Key type fscrypt-provisioning registered
Dec 13 01:54:55.216690 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:54:55.216708 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:54:55.216727 kernel: ima: No architecture policies found
Dec 13 01:54:55.216745 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:54:55.216764 kernel: clk: Disabling unused clocks
Dec 13 01:54:55.216782 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:54:55.216801 kernel: Run /init as init process
Dec 13 01:54:55.216819 kernel: with arguments:
Dec 13 01:54:55.216838 kernel: /init
Dec 13 01:54:55.216860 kernel: with environment:
Dec 13 01:54:55.216879 kernel: HOME=/
Dec 13 01:54:55.216897 kernel: TERM=linux
Dec 13 01:54:55.216915 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:54:55.216938 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:54:55.216962 systemd[1]: Detected virtualization amazon.
Dec 13 01:54:55.216983 systemd[1]: Detected architecture arm64.
Dec 13 01:54:55.217007 systemd[1]: Running in initrd.
Dec 13 01:54:55.217027 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:54:55.217047 systemd[1]: Hostname set to <localhost>.
Dec 13 01:54:55.217068 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:54:55.217088 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:54:55.217108 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:55.217128 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:54:55.217150 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:54:55.217175 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:54:55.217196 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:54:55.217217 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:54:55.217241 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:54:55.217262 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:54:55.217319 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:55.217342 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:55.217369 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:54:55.217390 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:54:55.217410 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:54:55.217430 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:54:55.217451 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:54:55.217471 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:54:55.217491 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:54:55.217512 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:54:55.217532 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:54:55.217557 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:54:55.217578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:54:55.217598 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:54:55.217618 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:54:55.217639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:54:55.217659 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:54:55.217679 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:54:55.217699 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:54:55.217724 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:54:55.217745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:55.217765 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:54:55.217786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:55.217806 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:54:55.217864 systemd-journald[251]: Collecting audit messages is disabled.
Dec 13 01:54:55.217913 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:54:55.217934 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:54:55.217957 kernel: Bridge firewalling registered
Dec 13 01:54:55.217977 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:54:55.217998 systemd-journald[251]: Journal started
Dec 13 01:54:55.218035 systemd-journald[251]: Runtime Journal (/run/log/journal/ec204906b3692ef2726056983586ecfb) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:54:55.178914 systemd-modules-load[252]: Inserted module 'overlay'
Dec 13 01:54:55.210689 systemd-modules-load[252]: Inserted module 'br_netfilter'
Dec 13 01:54:55.234728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:54:55.234798 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:54:55.238862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:55.244382 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:54:55.255837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:54:55.269680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:55.275518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:54:55.292559 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:54:55.325764 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:55.329826 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:54:55.350704 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:54:55.364117 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:54:55.374578 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:54:55.389248 dracut-cmdline[284]: dracut-dracut-053
Dec 13 01:54:55.396651 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:54:55.459855 systemd-resolved[289]: Positive Trust Anchors:
Dec 13 01:54:55.459885 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:54:55.459948 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:54:55.577328 kernel: SCSI subsystem initialized
Dec 13 01:54:55.584304 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:54:55.597309 kernel: iscsi: registered transport (tcp)
Dec 13 01:54:55.619752 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:54:55.619825 kernel: QLogic iSCSI HBA Driver
Dec 13 01:54:55.691321 kernel: random: crng init done
Dec 13 01:54:55.691530 systemd-resolved[289]: Defaulting to hostname 'linux'.
Dec 13 01:54:55.694519 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:54:55.695037 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:54:55.725342 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:54:55.741647 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:54:55.773978 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:54:55.774055 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:54:55.775775 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:54:55.842324 kernel: raid6: neonx8 gen() 6647 MB/s
Dec 13 01:54:55.859310 kernel: raid6: neonx4 gen() 6440 MB/s
Dec 13 01:54:55.876317 kernel: raid6: neonx2 gen() 5367 MB/s
Dec 13 01:54:55.893316 kernel: raid6: neonx1 gen() 3921 MB/s
Dec 13 01:54:55.910310 kernel: raid6: int64x8 gen() 3795 MB/s
Dec 13 01:54:55.927316 kernel: raid6: int64x4 gen() 3675 MB/s
Dec 13 01:54:55.944321 kernel: raid6: int64x2 gen() 3553 MB/s
Dec 13 01:54:55.962110 kernel: raid6: int64x1 gen() 2749 MB/s
Dec 13 01:54:55.962176 kernel: raid6: using algorithm neonx8 gen() 6647 MB/s
Dec 13 01:54:55.980064 kernel: raid6: .... xor() 4915 MB/s, rmw enabled
Dec 13 01:54:55.980135 kernel: raid6: using neon recovery algorithm
Dec 13 01:54:55.988564 kernel: xor: measuring software checksum speed
Dec 13 01:54:55.988628 kernel: 8regs : 10974 MB/sec
Dec 13 01:54:55.989657 kernel: 32regs : 11941 MB/sec
Dec 13 01:54:55.990813 kernel: arm64_neon : 9588 MB/sec
Dec 13 01:54:55.990846 kernel: xor: using function: 32regs (11941 MB/sec)
Dec 13 01:54:56.077687 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:54:56.096499 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:54:56.107616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:54:56.153455 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Dec 13 01:54:56.162049 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:54:56.175899 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:54:56.214855 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Dec 13 01:54:56.271137 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:54:56.280583 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:54:56.401550 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:56.412617 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:54:56.459744 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:54:56.464818 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:54:56.467408 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:54:56.469606 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:54:56.489220 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:54:56.523533 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:54:56.596335 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:54:56.596411 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 01:54:56.618462 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:54:56.618716 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:54:56.618947 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:7b:14:a2:39:fd
Dec 13 01:54:56.600816 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:54:56.600976 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:56.605107 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:56.607258 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:54:56.607468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:56.610680 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:56.622157 (udev-worker)[513]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:54:56.627390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:56.666324 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 01:54:56.668318 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:54:56.677313 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:54:56.690807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:56.691313 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:54:56.691352 kernel: GPT:9289727 != 16777215
Dec 13 01:54:56.693070 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:54:56.694417 kernel: GPT:9289727 != 16777215
Dec 13 01:54:56.696043 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:54:56.697499 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:56.708627 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:56.753889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:56.791398 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (525)
Dec 13 01:54:56.818318 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (528)
Dec 13 01:54:56.889822 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:54:56.920839 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:54:56.960259 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:54:56.962932 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:54:56.981856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:54:56.993618 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:54:57.011012 disk-uuid[658]: Primary Header is updated.
Dec 13 01:54:57.011012 disk-uuid[658]: Secondary Entries is updated.
Dec 13 01:54:57.011012 disk-uuid[658]: Secondary Header is updated.
Dec 13 01:54:57.025334 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:57.035351 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:57.044372 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:58.049334 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:58.049404 disk-uuid[659]: The operation has completed successfully.
Dec 13 01:54:58.247987 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:54:58.248263 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:54:58.309639 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:54:58.323998 sh[1003]: Success
Dec 13 01:54:58.353355 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:54:58.487056 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:54:58.498536 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:54:58.500942 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:54:58.555212 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:54:58.555316 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:58.555361 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:54:58.556674 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:54:58.556744 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:54:58.585326 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:54:58.590236 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:54:58.590830 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:54:58.604732 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:54:58.619627 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:54:58.641364 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:58.641447 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:58.642826 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:54:58.661487 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:54:58.678933 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:54:58.681946 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:58.695174 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:54:58.708674 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:54:58.837774 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:54:58.864717 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:54:58.904534 ignition[1116]: Ignition 2.19.0
Dec 13 01:54:58.904565 ignition[1116]: Stage: fetch-offline
Dec 13 01:54:58.905977 ignition[1116]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:58.906004 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:58.907201 ignition[1116]: Ignition finished successfully
Dec 13 01:54:58.923242 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:54:58.946334 systemd-networkd[1212]: lo: Link UP
Dec 13 01:54:58.946808 systemd-networkd[1212]: lo: Gained carrier
Dec 13 01:54:58.949988 systemd-networkd[1212]: Enumeration completed
Dec 13 01:54:58.951824 systemd-networkd[1212]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:58.951832 systemd-networkd[1212]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:54:58.952787 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:54:58.959485 systemd[1]: Reached target network.target - Network.
Dec 13 01:54:58.962761 systemd-networkd[1212]: eth0: Link UP
Dec 13 01:54:58.962770 systemd-networkd[1212]: eth0: Gained carrier
Dec 13 01:54:58.962788 systemd-networkd[1212]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:58.988792 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:54:59.007451 systemd-networkd[1212]: eth0: DHCPv4 address 172.31.18.118/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:54:59.036072 ignition[1218]: Ignition 2.19.0
Dec 13 01:54:59.036762 ignition[1218]: Stage: fetch
Dec 13 01:54:59.037677 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:59.037713 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:59.037895 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:59.054881 ignition[1218]: PUT result: OK
Dec 13 01:54:59.058682 ignition[1218]: parsed url from cmdline: ""
Dec 13 01:54:59.058708 ignition[1218]: no config URL provided
Dec 13 01:54:59.058726 ignition[1218]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:54:59.058755 ignition[1218]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:54:59.058795 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:59.062870 ignition[1218]: PUT result: OK
Dec 13 01:54:59.064751 ignition[1218]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:54:59.068639 ignition[1218]: GET result: OK
Dec 13 01:54:59.070976 ignition[1218]: parsing config with SHA512: b18ffe32205757f5ba150e29e5db886b6efdbbf1086f462d4d1aeaa893c0c3dba6884e595380c9ef4f194c6fdd2a883d502201900747404070ea91f44b0b1be0
Dec 13 01:54:59.082489 unknown[1218]: fetched base config from "system"
Dec 13 01:54:59.082515 unknown[1218]: fetched base config from "system"
Dec 13 01:54:59.083558 ignition[1218]: fetch: fetch complete
Dec 13 01:54:59.082532 unknown[1218]: fetched user config from "aws"
Dec 13 01:54:59.083572 ignition[1218]: fetch: fetch passed
Dec 13 01:54:59.091695 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:54:59.084268 ignition[1218]: Ignition finished successfully
Dec 13 01:54:59.107668 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:54:59.139418 ignition[1226]: Ignition 2.19.0
Dec 13 01:54:59.139440 ignition[1226]: Stage: kargs
Dec 13 01:54:59.140060 ignition[1226]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:59.140084 ignition[1226]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:59.140244 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:59.142564 ignition[1226]: PUT result: OK
Dec 13 01:54:59.154175 ignition[1226]: kargs: kargs passed
Dec 13 01:54:59.154374 ignition[1226]: Ignition finished successfully
Dec 13 01:54:59.160375 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:54:59.175358 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:54:59.198730 ignition[1232]: Ignition 2.19.0
Dec 13 01:54:59.198757 ignition[1232]: Stage: disks
Dec 13 01:54:59.199721 ignition[1232]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:59.199749 ignition[1232]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:59.199912 ignition[1232]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:59.201195 ignition[1232]: PUT result: OK
Dec 13 01:54:59.212892 ignition[1232]: disks: disks passed
Dec 13 01:54:59.213206 ignition[1232]: Ignition finished successfully
Dec 13 01:54:59.218441 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:54:59.221034 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:54:59.224743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:54:59.229099 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:54:59.231073 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:54:59.233048 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:54:59.250683 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:54:59.290026 systemd-fsck[1240]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:54:59.296372 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:54:59.306688 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:54:59.412323 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:54:59.414988 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:54:59.420406 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:54:59.437523 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:54:59.446524 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:54:59.452711 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:54:59.452831 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:54:59.452886 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:54:59.477485 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1259)
Dec 13 01:54:59.481345 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:59.481423 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:59.484646 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:54:59.489493 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:54:59.500322 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:54:59.500612 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:54:59.509057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:54:59.614673 initrd-setup-root[1283]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:54:59.625811 initrd-setup-root[1290]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:54:59.634673 initrd-setup-root[1297]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:54:59.643971 initrd-setup-root[1304]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:54:59.834488 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:54:59.849471 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:54:59.859659 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:54:59.873751 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:54:59.875857 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:59.928739 ignition[1372]: INFO : Ignition 2.19.0
Dec 13 01:54:59.928739 ignition[1372]: INFO : Stage: mount
Dec 13 01:54:59.928739 ignition[1372]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:59.928739 ignition[1372]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:59.928739 ignition[1372]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:59.925420 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:54:59.943743 ignition[1372]: INFO : PUT result: OK
Dec 13 01:54:59.943743 ignition[1372]: INFO : mount: mount passed
Dec 13 01:54:59.943743 ignition[1372]: INFO : Ignition finished successfully
Dec 13 01:54:59.950491 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:54:59.961520 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:54:59.988724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:55:00.014330 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1383)
Dec 13 01:55:00.014412 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:55:00.017780 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:55:00.019023 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:55:00.024329 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:55:00.027591 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:55:00.071306 ignition[1400]: INFO : Ignition 2.19.0
Dec 13 01:55:00.071306 ignition[1400]: INFO : Stage: files
Dec 13 01:55:00.071306 ignition[1400]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:55:00.071306 ignition[1400]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:55:00.071306 ignition[1400]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:55:00.081471 ignition[1400]: INFO : PUT result: OK
Dec 13 01:55:00.096244 ignition[1400]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:55:00.099880 ignition[1400]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:55:00.099880 ignition[1400]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:55:00.110170 ignition[1400]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:55:00.112923 ignition[1400]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:55:00.116580 unknown[1400]: wrote ssh authorized keys file for user: core
Dec 13 01:55:00.118862 ignition[1400]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:55:00.122316 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:55:00.126071 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:55:00.240782 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:55:00.394462 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:55:00.398356 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:55:00.398356 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 01:55:00.722061 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:55:00.873616 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:55:00.873616 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:55:00.880331 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:55:00.880331 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:55:00.880331 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:55:00.880331 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:55:00.897293 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:55:00.897293 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:55:00.897293 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:55:00.897293 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:55:00.897293 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:55:00.897293 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:55:00.918800 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:55:00.918800 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:55:00.927876 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Dec 13 01:55:01.017454 systemd-networkd[1212]: eth0: Gained IPv6LL
Dec 13 01:55:01.185041 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:55:01.504211 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:55:01.508601 ignition[1400]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:55:01.511248 ignition[1400]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:55:01.517503 ignition[1400]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:55:01.517503 ignition[1400]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:55:01.517503 ignition[1400]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:55:01.517503 ignition[1400]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:55:01.517503 ignition[1400]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:55:01.517503 ignition[1400]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:55:01.517503 ignition[1400]: INFO : files: files passed
Dec 13 01:55:01.517503 ignition[1400]: INFO : Ignition finished successfully
Dec 13 01:55:01.519939 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:55:01.541703 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:55:01.550036 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:55:01.559849 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:55:01.562692 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:55:01.595919 initrd-setup-root-after-ignition[1429]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:55:01.595919 initrd-setup-root-after-ignition[1429]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:55:01.602633 initrd-setup-root-after-ignition[1433]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:55:01.609020 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:55:01.612344 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:55:01.625681 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:55:01.702066 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:55:01.703956 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:55:01.707825 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:55:01.712854 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:55:01.715134 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:55:01.724713 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:55:01.769980 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:55:01.784608 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:55:01.813618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:55:01.819517 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:55:01.822615 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:55:01.828976 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:55:01.829731 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:55:01.836582 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:55:01.839139 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:55:01.841608 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:55:01.845377 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:55:01.852669 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:55:01.856246 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:55:01.862119 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:55:01.865839 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:55:01.871189 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:55:01.875057 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:55:01.878986 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:55:01.879228 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:55:01.882302 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:55:01.892024 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:55:01.895044 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:55:01.898805 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:55:01.906082 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:55:01.906363 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:55:01.908994 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:55:01.909248 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:55:01.912195 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:55:01.912449 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:55:01.935337 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:55:01.937450 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:55:01.937911 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:55:01.966300 ignition[1453]: INFO : Ignition 2.19.0
Dec 13 01:55:01.966300 ignition[1453]: INFO : Stage: umount
Dec 13 01:55:01.984963 ignition[1453]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:55:01.984963 ignition[1453]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:55:01.984963 ignition[1453]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:55:01.984963 ignition[1453]: INFO : PUT result: OK
Dec 13 01:55:01.968755 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:55:01.973222 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:55:01.975329 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:55:01.978479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:55:01.978712 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:55:02.001416 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:55:02.001608 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:55:02.016364 ignition[1453]: INFO : umount: umount passed
Dec 13 01:55:02.016364 ignition[1453]: INFO : Ignition finished successfully
Dec 13 01:55:02.021811 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:55:02.022028 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:55:02.027331 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:55:02.027519 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:55:02.036708 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:55:02.038700 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:55:02.052664 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:55:02.052752 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:55:02.060954 systemd[1]: Stopped target network.target - Network. Dec 13 01:55:02.066960 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:55:02.067068 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:55:02.069298 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:55:02.070892 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:55:02.081342 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:55:02.083683 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:55:02.085388 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:55:02.087197 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:55:02.087296 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:55:02.089339 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:55:02.089410 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:55:02.103874 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:55:02.103970 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:55:02.106849 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Dec 13 01:55:02.106929 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:55:02.115237 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:55:02.118690 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:55:02.124955 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:55:02.125987 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:55:02.126166 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:55:02.129339 systemd-networkd[1212]: eth0: DHCPv6 lease lost Dec 13 01:55:02.133176 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:55:02.136239 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:55:02.143525 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:55:02.143849 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:55:02.156698 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:55:02.156985 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:55:02.161418 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:55:02.161892 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:55:02.182040 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:55:02.187662 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:55:02.187785 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:55:02.190894 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:55:02.190988 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:55:02.197343 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:55:02.197433 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Dec 13 01:55:02.199658 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:55:02.199742 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:55:02.205217 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:55:02.235811 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:55:02.237818 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:55:02.253907 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:55:02.254187 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:55:02.257685 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:55:02.257822 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:55:02.264008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:55:02.264082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:55:02.273560 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:55:02.273655 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:55:02.275878 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:55:02.275960 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:55:02.285535 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:55:02.285631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:55:02.302811 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:55:02.305041 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:55:02.309504 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:55:02.312557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:55:02.312672 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:55:02.323216 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:55:02.323464 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:55:02.326076 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:55:02.346125 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:55:02.369109 systemd[1]: Switching root. Dec 13 01:55:02.412146 systemd-journald[251]: Journal stopped Dec 13 01:55:04.513766 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Dec 13 01:55:04.513909 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:55:04.513956 kernel: SELinux: policy capability open_perms=1 Dec 13 01:55:04.513995 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:55:04.514029 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:55:04.514062 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:55:04.514094 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:55:04.514126 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:55:04.514158 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:55:04.514189 kernel: audit: type=1403 audit(1734054902.735:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:55:04.514233 systemd[1]: Successfully loaded SELinux policy in 51.946ms. Dec 13 01:55:04.515504 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.027ms. 
Dec 13 01:55:04.515572 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:55:04.515608 systemd[1]: Detected virtualization amazon. Dec 13 01:55:04.515644 systemd[1]: Detected architecture arm64. Dec 13 01:55:04.515678 systemd[1]: Detected first boot. Dec 13 01:55:04.515715 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:55:04.515749 zram_generator::config[1496]: No configuration found. Dec 13 01:55:04.515790 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:55:04.515824 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:55:04.515863 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:55:04.515897 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:55:04.515931 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:55:04.515966 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:55:04.515999 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:55:04.516032 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:55:04.516064 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:55:04.516097 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:55:04.516136 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:55:04.516167 systemd[1]: Created slice user.slice - User and Session Slice. 
Dec 13 01:55:04.516198 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:55:04.516232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:55:04.517367 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:55:04.517448 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:55:04.517501 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:55:04.517533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:55:04.517576 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:55:04.517615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:55:04.517646 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:55:04.517677 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:55:04.517708 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:55:04.517739 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:55:04.517773 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:55:04.517807 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:55:04.517840 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:55:04.517879 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:55:04.517914 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:55:04.517945 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:55:04.517977 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 01:55:04.518009 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:55:04.518041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:55:04.518073 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:55:04.518103 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:55:04.518136 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:55:04.518171 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:55:04.518212 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:55:04.518244 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:55:04.519326 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:55:04.519405 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:55:04.519441 systemd[1]: Reached target machines.target - Containers. Dec 13 01:55:04.519473 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:55:04.519505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:55:04.519548 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:55:04.519581 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:55:04.519612 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:55:04.519644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:55:04.519683 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 13 01:55:04.519719 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:55:04.519752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:55:04.519787 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:55:04.519825 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:55:04.519867 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:55:04.519901 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:55:04.519939 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:55:04.519973 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:55:04.520009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:55:04.520044 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:55:04.520077 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:55:04.520116 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:55:04.520152 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:55:04.520199 systemd[1]: Stopped verity-setup.service. Dec 13 01:55:04.520232 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:55:04.520265 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:55:04.523382 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:55:04.523422 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:55:04.523459 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:55:04.523499 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Dec 13 01:55:04.523531 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:55:04.523561 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:55:04.523594 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:55:04.523625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:55:04.523655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:55:04.523687 kernel: ACPI: bus type drm_connector registered Dec 13 01:55:04.523727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:55:04.523760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:55:04.523806 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:55:04.523844 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:55:04.523875 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:55:04.523907 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:55:04.523939 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:55:04.523969 kernel: fuse: init (API version 7.39) Dec 13 01:55:04.524003 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:55:04.524036 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:55:04.524069 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:55:04.524098 kernel: loop: module loaded Dec 13 01:55:04.524129 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:55:04.524162 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:55:04.524195 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 01:55:04.524230 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:55:04.524260 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:55:04.524321 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:55:04.524353 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:55:04.524385 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:55:04.524418 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:55:04.524513 systemd-journald[1581]: Collecting audit messages is disabled. Dec 13 01:55:04.524571 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:55:04.524604 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:55:04.524636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:55:04.524668 systemd-journald[1581]: Journal started Dec 13 01:55:04.524722 systemd-journald[1581]: Runtime Journal (/run/log/journal/ec204906b3692ef2726056983586ecfb) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:55:03.816518 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:55:03.844608 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:55:03.845428 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:55:04.541412 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:55:04.541502 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:55:04.561138 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Dec 13 01:55:04.564325 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:55:04.574871 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:55:04.583312 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:55:04.588391 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:55:04.592221 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:55:04.595200 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:55:04.610135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:55:04.686637 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:55:04.697979 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:55:04.701793 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:55:04.706993 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:55:04.718623 kernel: loop0: detected capacity change from 0 to 114432 Dec 13 01:55:04.728354 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:55:04.770472 systemd-journald[1581]: Time spent on flushing to /var/log/journal/ec204906b3692ef2726056983586ecfb is 93.120ms for 916 entries. Dec 13 01:55:04.770472 systemd-journald[1581]: System Journal (/var/log/journal/ec204906b3692ef2726056983586ecfb) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:55:04.878118 systemd-journald[1581]: Received client request to flush runtime journal. 
Dec 13 01:55:04.880710 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:55:04.880776 kernel: loop1: detected capacity change from 0 to 114328 Dec 13 01:55:04.842971 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:55:04.849549 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:55:04.889851 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:55:04.913791 kernel: loop2: detected capacity change from 0 to 189592 Dec 13 01:55:04.924166 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:55:04.931375 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:55:04.952071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:55:04.974386 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:55:05.004664 udevadm[1646]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:55:05.035165 kernel: loop3: detected capacity change from 0 to 52536 Dec 13 01:55:05.058460 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. Dec 13 01:55:05.059190 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. Dec 13 01:55:05.070561 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:55:05.101377 kernel: loop4: detected capacity change from 0 to 114432 Dec 13 01:55:05.129336 kernel: loop5: detected capacity change from 0 to 114328 Dec 13 01:55:05.158412 kernel: loop6: detected capacity change from 0 to 189592 Dec 13 01:55:05.208339 kernel: loop7: detected capacity change from 0 to 52536 Dec 13 01:55:05.234362 (sd-merge)[1651]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. 
Dec 13 01:55:05.238098 (sd-merge)[1651]: Merged extensions into '/usr'. Dec 13 01:55:05.251513 systemd[1]: Reloading requested from client PID 1608 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:55:05.251541 systemd[1]: Reloading... Dec 13 01:55:05.476338 zram_generator::config[1680]: No configuration found. Dec 13 01:55:05.621394 ldconfig[1605]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:55:05.751520 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:05.871980 systemd[1]: Reloading finished in 619 ms. Dec 13 01:55:05.915398 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:55:05.919030 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:55:05.924927 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:55:05.939611 systemd[1]: Starting ensure-sysext.service... Dec 13 01:55:05.951453 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:55:05.957669 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:55:05.977116 systemd[1]: Reloading requested from client PID 1730 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:55:05.977147 systemd[1]: Reloading... Dec 13 01:55:05.994716 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:55:05.997500 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:55:05.999260 systemd-tmpfiles[1731]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 01:55:06.003017 systemd-tmpfiles[1731]: ACLs are not supported, ignoring. Dec 13 01:55:06.003179 systemd-tmpfiles[1731]: ACLs are not supported, ignoring. Dec 13 01:55:06.015880 systemd-tmpfiles[1731]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:55:06.015903 systemd-tmpfiles[1731]: Skipping /boot Dec 13 01:55:06.065196 systemd-tmpfiles[1731]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:55:06.069450 systemd-tmpfiles[1731]: Skipping /boot Dec 13 01:55:06.101386 systemd-udevd[1732]: Using default interface naming scheme 'v255'. Dec 13 01:55:06.137325 zram_generator::config[1758]: No configuration found. Dec 13 01:55:06.344178 (udev-worker)[1782]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:06.357802 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1789) Dec 13 01:55:06.369318 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1789) Dec 13 01:55:06.515455 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1783) Dec 13 01:55:06.542821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:06.698850 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:55:06.699527 systemd[1]: Reloading finished in 721 ms. Dec 13 01:55:06.738050 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:55:06.748183 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:55:06.804004 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Dec 13 01:55:06.818782 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:55:06.825402 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:55:06.835473 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:55:06.842602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:55:06.848780 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:55:06.876696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:55:06.881580 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:55:06.901417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:55:06.953119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:55:06.956636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:55:06.976056 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:55:06.977467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:55:07.000897 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:55:07.006543 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:55:07.009414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:55:07.029088 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:55:07.040997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 01:55:07.041396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:55:07.044715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:55:07.045053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:55:07.068137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:55:07.073444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:55:07.083929 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:55:07.097834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:55:07.100074 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:55:07.110719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:55:07.114490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:55:07.114921 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:55:07.128939 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:55:07.132715 augenrules[1957]: No rules Dec 13 01:55:07.136864 systemd[1]: Finished ensure-sysext.service. Dec 13 01:55:07.139602 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:07.143826 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:55:07.144443 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:55:07.161489 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Dec 13 01:55:07.165470 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:55:07.169112 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:55:07.169489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:55:07.194378 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:55:07.204970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:55:07.207358 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:55:07.208755 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:55:07.228435 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:55:07.233371 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:55:07.242665 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:55:07.261581 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:55:07.297867 lvm[1977]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:55:07.353132 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:55:07.356880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:55:07.382060 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:55:07.409997 systemd-resolved[1927]: Positive Trust Anchors: 
Dec 13 01:55:07.410035 systemd-resolved[1927]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:55:07.410099 systemd-resolved[1927]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:55:07.417401 lvm[1983]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:55:07.428174 systemd-resolved[1927]: Defaulting to hostname 'linux'. Dec 13 01:55:07.430855 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:55:07.432998 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:55:07.445555 systemd-networkd[1926]: lo: Link UP Dec 13 01:55:07.445571 systemd-networkd[1926]: lo: Gained carrier Dec 13 01:55:07.450720 systemd-networkd[1926]: Enumeration completed Dec 13 01:55:07.450905 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:55:07.453105 systemd[1]: Reached target network.target - Network. Dec 13 01:55:07.454879 systemd-networkd[1926]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:55:07.454887 systemd-networkd[1926]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:55:07.456914 systemd-networkd[1926]: eth0: Link UP Dec 13 01:55:07.457453 systemd-networkd[1926]: eth0: Gained carrier Dec 13 01:55:07.457489 systemd-networkd[1926]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:55:07.463558 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:55:07.471391 systemd-networkd[1926]: eth0: DHCPv4 address 172.31.18.118/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:55:07.475326 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:55:07.489960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:55:07.492804 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:55:07.495065 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:55:07.497439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:55:07.500005 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:55:07.502378 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:55:07.504698 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:55:07.506968 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:55:07.507029 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:55:07.508771 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:55:07.514125 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:55:07.518974 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:55:07.527646 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:55:07.530857 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:55:07.533433 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:55:07.535534 systemd[1]: Reached target basic.target - Basic System.
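An editorial aside, not part of the log: the DHCPv4 lease recorded above (address 172.31.18.118/20, gateway 172.31.16.1) can be sanity-checked with Python's standard `ipaddress` module, confirming the gateway sits inside the leased subnet.

```python
import ipaddress

# Values taken from the systemd-networkd lease line above.
iface = ipaddress.ip_interface("172.31.18.118/20")
gateway = ipaddress.ip_address("172.31.16.1")

# A /20 prefix places the host in 172.31.16.0/20, which must contain the gateway.
print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True
```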
Dec 13 01:55:07.537715 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:55:07.537772 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:55:07.545558 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:55:07.550397 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:55:07.559805 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:55:07.566436 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:55:07.572047 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:55:07.574080 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:55:07.590615 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:55:07.598256 systemd[1]: Started ntpd.service - Network Time Service.
Dec 13 01:55:07.605746 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:55:07.617470 jq[1995]: false
Dec 13 01:55:07.622502 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 13 01:55:07.629113 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:55:07.635778 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:55:07.662138 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:55:07.665458 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:55:07.666883 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:55:07.670079 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:55:07.675420 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:55:07.683044 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:55:07.683836 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:55:07.718546 dbus-daemon[1994]: [system] SELinux support is enabled
Dec 13 01:55:07.726663 dbus-daemon[1994]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1926 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 01:55:07.737459 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found loop4
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found loop5
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found loop6
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found loop7
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found nvme0n1
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found nvme0n1p1
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found nvme0n1p2
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found nvme0n1p3
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found usr
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found nvme0n1p4
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found nvme0n1p6
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found nvme0n1p7
Dec 13 01:55:07.749954 extend-filesystems[1996]: Found nvme0n1p9
Dec 13 01:55:07.749954 extend-filesystems[1996]: Checking size of /dev/nvme0n1p9
Dec 13 01:55:07.743214 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:55:07.768970 dbus-daemon[1994]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: ----------------------------------------------------
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: corporation. Support and training for ntp-4 are
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: available at https://www.nwtime.org/support
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: ----------------------------------------------------
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: proto: precision = 0.096 usec (-23)
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: basedate set to 2024-11-30
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: Listen normally on 3 eth0 172.31.18.118:123
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: Listen normally on 4 lo [::1]:123
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: bind(21) AF_INET6 fe80::47b:14ff:fea2:39fd%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: unable to create socket on eth0 (5) for fe80::47b:14ff:fea2:39fd%2#123
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: failed to init interface for address fe80::47b:14ff:fea2:39fd%2
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:55:07.822863 ntpd[1998]: 13 Dec 01:55:07 ntpd[1998]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:55:07.743264 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:55:07.786239 ntpd[1998]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting
Dec 13 01:55:07.746529 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:55:07.786325 ntpd[1998]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:55:07.746571 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:55:07.786348 ntpd[1998]: ----------------------------------------------------
Dec 13 01:55:07.773193 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:55:07.786368 ntpd[1998]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:55:07.773594 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:55:07.786390 ntpd[1998]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:55:07.812144 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:55:07.786409 ntpd[1998]: corporation. Support and training for ntp-4 are
Dec 13 01:55:07.812617 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:55:07.786427 ntpd[1998]: available at https://www.nwtime.org/support
Dec 13 01:55:07.834623 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 01:55:07.786446 ntpd[1998]: ----------------------------------------------------
Dec 13 01:55:07.848612 (ntainerd)[2029]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:55:07.792811 ntpd[1998]: proto: precision = 0.096 usec (-23)
Dec 13 01:55:07.793218 ntpd[1998]: basedate set to 2024-11-30
Dec 13 01:55:07.793242 ntpd[1998]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:55:07.800184 ntpd[1998]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:55:07.800266 ntpd[1998]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:55:07.805821 ntpd[1998]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:55:07.888496 jq[2010]: true
Dec 13 01:55:07.805886 ntpd[1998]: Listen normally on 3 eth0 172.31.18.118:123
Dec 13 01:55:07.805955 ntpd[1998]: Listen normally on 4 lo [::1]:123
Dec 13 01:55:07.806035 ntpd[1998]: bind(21) AF_INET6 fe80::47b:14ff:fea2:39fd%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:55:07.806074 ntpd[1998]: unable to create socket on eth0 (5) for fe80::47b:14ff:fea2:39fd%2#123
Dec 13 01:55:07.806133 ntpd[1998]: failed to init interface for address fe80::47b:14ff:fea2:39fd%2
Dec 13 01:55:07.806189 ntpd[1998]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:55:07.815601 ntpd[1998]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:55:07.815651 ntpd[1998]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:55:07.907764 extend-filesystems[1996]: Resized partition /dev/nvme0n1p9
Dec 13 01:55:07.913660 extend-filesystems[2041]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:55:07.920491 tar[2013]: linux-arm64/helm
Dec 13 01:55:07.923072 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 01:55:07.962569 jq[2040]: true
Dec 13 01:55:08.017423 update_engine[2009]: I20241213 01:55:08.017161  2009 main.cc:92] Flatcar Update Engine starting
Dec 13 01:55:08.033984 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:55:08.040424 update_engine[2009]: I20241213 01:55:08.040213  2009 update_check_scheduler.cc:74] Next update check in 7m18s
Dec 13 01:55:08.054498 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:55:08.057360 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 13 01:55:08.065642 coreos-metadata[1993]: Dec 13 01:55:08.063 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:55:08.065642 coreos-metadata[1993]: Dec 13 01:55:08.063 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.067 INFO Fetch successful
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.067 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.067 INFO Fetch successful
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.067 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.072 INFO Fetch successful
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.072 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.072 INFO Fetch successful
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.072 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.073 INFO Fetch failed with 404: resource not found
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.073 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.074 INFO Fetch successful
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.074 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.074 INFO Fetch successful
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.075 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.077 INFO Fetch successful
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.077 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.078 INFO Fetch successful
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.078 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Dec 13 01:55:08.081455 coreos-metadata[1993]: Dec 13 01:55:08.079 INFO Fetch successful
Dec 13 01:55:08.126970 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 01:55:08.136894 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:55:08.140920 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:55:08.166673 extend-filesystems[2041]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 01:55:08.166673 extend-filesystems[2041]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:55:08.166673 extend-filesystems[2041]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 01:55:08.185534 extend-filesystems[1996]: Resized filesystem in /dev/nvme0n1p9
Dec 13 01:55:08.189486 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:55:08.192370 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
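An editorial aside, not part of the log: the ext4 resize figures reported above (553472 to 1489915 blocks, with the "(4k) blocks" message implying a 4 KiB block size) correspond to growing the root filesystem from roughly 2.1 GiB to 5.7 GiB, which the following arithmetic sketch checks.

```python
BLOCK_SIZE = 4096  # inferred from the "(4k) blocks" resize2fs message above

old_blocks, new_blocks = 553_472, 1_489_915  # block counts from the kernel log
to_gib = lambda blocks: blocks * BLOCK_SIZE / 2**30

print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")
# 2.11 GiB -> 5.68 GiB
```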
Dec 13 01:55:08.214382 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1815)
Dec 13 01:55:08.339150 bash[2110]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:55:08.345661 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:55:08.389423 systemd[1]: Starting sshkeys.service...
Dec 13 01:55:08.403512 locksmithd[2056]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:55:08.466053 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 01:55:08.475552 systemd-logind[2008]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 01:55:08.475596 systemd-logind[2008]: Watching system buttons on /dev/input/event1 (Sleep Button)
Dec 13 01:55:08.477540 systemd-logind[2008]: New seat seat0.
Dec 13 01:55:08.478786 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 01:55:08.498873 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:55:08.504237 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:55:08.508049 systemd-networkd[1926]: eth0: Gained IPv6LL
Dec 13 01:55:08.565635 dbus-daemon[1994]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 01:55:08.567497 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 01:55:08.573881 dbus-daemon[1994]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2031 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 01:55:08.592992 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 01:55:08.595856 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:55:08.602631 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:55:08.655995 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Dec 13 01:55:08.666020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:55:08.672308 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:55:08.712404 containerd[2029]: time="2024-12-13T01:55:08.711487066Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:55:08.750435 polkitd[2165]: Started polkitd version 121
Dec 13 01:55:08.830883 polkitd[2165]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 01:55:08.831006 polkitd[2165]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 01:55:08.853919 polkitd[2165]: Finished loading, compiling and executing 2 rules
Dec 13 01:55:08.876082 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:55:08.879461 coreos-metadata[2161]: Dec 13 01:55:08.876 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:55:08.879360 dbus-daemon[1994]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 01:55:08.887673 coreos-metadata[2161]: Dec 13 01:55:08.881 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Dec 13 01:55:08.886481 polkitd[2165]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 01:55:08.882572 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 01:55:08.888677 coreos-metadata[2161]: Dec 13 01:55:08.888 INFO Fetch successful
Dec 13 01:55:08.888762 coreos-metadata[2161]: Dec 13 01:55:08.888 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 01:55:08.888762 coreos-metadata[2161]: Dec 13 01:55:08.888 INFO Fetch successful
Dec 13 01:55:08.895192 unknown[2161]: wrote ssh authorized keys file for user: core
Dec 13 01:55:08.950294 update-ssh-keys[2203]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:55:08.951611 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 01:55:08.959135 systemd[1]: Finished sshkeys.service.
Dec 13 01:55:08.977566 containerd[2029]: time="2024-12-13T01:55:08.976757111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:08.981952 systemd-hostnamed[2031]: Hostname set to <ip-172-31-18-118> (transient)
Dec 13 01:55:08.987657 systemd-resolved[1927]: System hostname changed to 'ip-172-31-18-118'.
Dec 13 01:55:08.995617 amazon-ssm-agent[2168]: Initializing new seelog logger
Dec 13 01:55:08.995617 amazon-ssm-agent[2168]: New Seelog Logger Creation Complete
Dec 13 01:55:08.995617 amazon-ssm-agent[2168]: 2024/12/13 01:55:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:55:08.995617 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:55:08.996203 containerd[2029]: time="2024-12-13T01:55:08.993487007Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:08.996203 containerd[2029]: time="2024-12-13T01:55:08.993545963Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:55:08.996203 containerd[2029]: time="2024-12-13T01:55:08.993582191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:55:08.996203 containerd[2029]: time="2024-12-13T01:55:08.993873479Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:55:08.996203 containerd[2029]: time="2024-12-13T01:55:08.993911267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:08.996203 containerd[2029]: time="2024-12-13T01:55:08.994028927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:08.996203 containerd[2029]: time="2024-12-13T01:55:08.994056947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:09.002944 containerd[2029]: time="2024-12-13T01:55:09.000495032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:09.002944 containerd[2029]: time="2024-12-13T01:55:09.000555452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:09.002944 containerd[2029]: time="2024-12-13T01:55:09.000594392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:09.002944 containerd[2029]: time="2024-12-13T01:55:09.000625520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:09.002944 containerd[2029]: time="2024-12-13T01:55:09.000874520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:09.003350 amazon-ssm-agent[2168]: 2024/12/13 01:55:09 processing appconfig overrides
Dec 13 01:55:09.009269 containerd[2029]: time="2024-12-13T01:55:09.005537276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:09.009269 containerd[2029]: time="2024-12-13T01:55:09.005842172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:09.009269 containerd[2029]: time="2024-12-13T01:55:09.005874200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:55:09.009269 containerd[2029]: time="2024-12-13T01:55:09.006077552Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:55:09.009269 containerd[2029]: time="2024-12-13T01:55:09.006172256Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:55:09.009628 amazon-ssm-agent[2168]: 2024/12/13 01:55:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:55:09.009628 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:55:09.009628 amazon-ssm-agent[2168]: 2024/12/13 01:55:09 processing appconfig overrides
Dec 13 01:55:09.009628 amazon-ssm-agent[2168]: 2024/12/13 01:55:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:55:09.009628 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:55:09.009628 amazon-ssm-agent[2168]: 2024/12/13 01:55:09 processing appconfig overrides
Dec 13 01:55:09.010838 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO Proxy environment variables:
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.020313644Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.020439476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.020491304Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.020541824Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.020576636Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.020833592Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.021302612Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.021542432Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.021583220Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.021622736Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.021654584Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.021691376Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.021722180Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:55:09.030299 containerd[2029]: time="2024-12-13T01:55:09.021753380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:55:09.030967 amazon-ssm-agent[2168]: 2024/12/13 01:55:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:55:09.030967 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:55:09.030967 amazon-ssm-agent[2168]: 2024/12/13 01:55:09 processing appconfig overrides
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.021784736Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.021818396Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.021856340Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.021890648Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.021930680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.021963056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.021992372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.022031684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.022060976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.022092368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.022127156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.022158752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.022188932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031108 containerd[2029]: time="2024-12-13T01:55:09.022221380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.031804 containerd[2029]: time="2024-12-13T01:55:09.022249712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.037307 containerd[2029]: time="2024-12-13T01:55:09.034904900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.037307 containerd[2029]: time="2024-12-13T01:55:09.035262692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.037307 containerd[2029]: time="2024-12-13T01:55:09.036287828Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:55:09.037307 containerd[2029]: time="2024-12-13T01:55:09.036551420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.037307 containerd[2029]: time="2024-12-13T01:55:09.036587744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.037307 containerd[2029]: time="2024-12-13T01:55:09.036648440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:55:09.040228 containerd[2029]: time="2024-12-13T01:55:09.037406396Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:55:09.040228 containerd[2029]: time="2024-12-13T01:55:09.037966448Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:55:09.040228 containerd[2029]: time="2024-12-13T01:55:09.038891036Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:55:09.040228 containerd[2029]: time="2024-12-13T01:55:09.038943524Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:55:09.040228 containerd[2029]: time="2024-12-13T01:55:09.038975012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.040228 containerd[2029]: time="2024-12-13T01:55:09.039007220Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:55:09.040228 containerd[2029]: time="2024-12-13T01:55:09.039031724Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:55:09.040228 containerd[2029]: time="2024-12-13T01:55:09.039065504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:55:09.046054 containerd[2029]: time="2024-12-13T01:55:09.041722472Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false}
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:55:09.050085 containerd[2029]: time="2024-12-13T01:55:09.046989200Z" level=info msg="Connect containerd service" Dec 13 01:55:09.050085 containerd[2029]: time="2024-12-13T01:55:09.047084396Z" level=info msg="using legacy CRI server" Dec 13 01:55:09.050085 containerd[2029]: time="2024-12-13T01:55:09.047104628Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:55:09.050085 containerd[2029]: time="2024-12-13T01:55:09.047302976Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:55:09.050085 containerd[2029]: 
time="2024-12-13T01:55:09.049076624Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:55:09.057837 containerd[2029]: time="2024-12-13T01:55:09.057743672Z" level=info msg="Start subscribing containerd event" Dec 13 01:55:09.057963 containerd[2029]: time="2024-12-13T01:55:09.057854672Z" level=info msg="Start recovering state" Dec 13 01:55:09.058014 containerd[2029]: time="2024-12-13T01:55:09.057988220Z" level=info msg="Start event monitor" Dec 13 01:55:09.058064 containerd[2029]: time="2024-12-13T01:55:09.058013252Z" level=info msg="Start snapshots syncer" Dec 13 01:55:09.058064 containerd[2029]: time="2024-12-13T01:55:09.058034804Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:55:09.058064 containerd[2029]: time="2024-12-13T01:55:09.058054712Z" level=info msg="Start streaming server" Dec 13 01:55:09.058435 containerd[2029]: time="2024-12-13T01:55:09.058389692Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:55:09.058555 containerd[2029]: time="2024-12-13T01:55:09.058516796Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:55:09.066303 containerd[2029]: time="2024-12-13T01:55:09.060366896Z" level=info msg="containerd successfully booted in 0.361578s" Dec 13 01:55:09.060532 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 13 01:55:09.113304 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO https_proxy:
Dec 13 01:55:09.215173 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO http_proxy:
Dec 13 01:55:09.322381 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO no_proxy:
Dec 13 01:55:09.420888 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO Checking if agent identity type OnPrem can be assumed
Dec 13 01:55:09.519868 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO Checking if agent identity type EC2 can be assumed
Dec 13 01:55:09.618240 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO Agent will take identity from EC2
Dec 13 01:55:09.717467 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:55:09.816304 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:55:09.915650 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:55:09.994963 tar[2013]: linux-arm64/LICENSE
Dec 13 01:55:09.994963 tar[2013]: linux-arm64/README.md
Dec 13 01:55:10.014905 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Dec 13 01:55:10.037387 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:55:10.114838 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Dec 13 01:55:10.201740 sshd_keygen[2034]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:55:10.215004 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO [amazon-ssm-agent] Starting Core Agent
Dec 13 01:55:10.254512 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:55:10.267826 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:55:10.273252 systemd[1]: Started sshd@0-172.31.18.118:22-139.178.68.195:42120.service - OpenSSH per-connection server daemon (139.178.68.195:42120).
Dec 13 01:55:10.314156 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:55:10.314572 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:55:10.316559 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Dec 13 01:55:10.329786 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:55:10.387385 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:55:10.404205 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:55:10.419095 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO [Registrar] Starting registrar module
Dec 13 01:55:10.422984 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:55:10.425735 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:55:10.525612 amazon-ssm-agent[2168]: 2024-12-13 01:55:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Dec 13 01:55:10.545854 sshd[2230]: Accepted publickey for core from 139.178.68.195 port 42120 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:10.550718 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:10.585261 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:55:10.585622 systemd-logind[2008]: New session 1 of user core.
Dec 13 01:55:10.598792 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:55:10.641492 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:55:10.656892 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:55:10.669581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:55:10.677540 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:55:10.685839 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:55:10.691239 (systemd)[2245]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:55:10.787721 ntpd[1998]: Listen normally on 6 eth0 [fe80::47b:14ff:fea2:39fd%2]:123
Dec 13 01:55:10.791156 ntpd[1998]: 13 Dec 01:55:10 ntpd[1998]: Listen normally on 6 eth0 [fe80::47b:14ff:fea2:39fd%2]:123
Dec 13 01:55:10.943676 systemd[2245]: Queued start job for default target default.target.
Dec 13 01:55:10.951733 systemd[2245]: Created slice app.slice - User Application Slice.
Dec 13 01:55:10.951793 systemd[2245]: Reached target paths.target - Paths.
Dec 13 01:55:10.951825 systemd[2245]: Reached target timers.target - Timers.
Dec 13 01:55:10.962597 systemd[2245]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:55:10.996553 systemd[2245]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:55:10.997671 systemd[2245]: Reached target sockets.target - Sockets.
Dec 13 01:55:10.997705 systemd[2245]: Reached target basic.target - Basic System.
Dec 13 01:55:10.997790 systemd[2245]: Reached target default.target - Main User Target.
Dec 13 01:55:10.997853 systemd[2245]: Startup finished in 290ms.
Dec 13 01:55:10.998003 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:55:11.008110 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:55:11.011704 systemd[1]: Startup finished in 1.252s (kernel) + 7.935s (initrd) + 8.326s (userspace) = 17.514s.
Dec 13 01:55:11.187859 systemd[1]: Started sshd@1-172.31.18.118:22-139.178.68.195:42128.service - OpenSSH per-connection server daemon (139.178.68.195:42128).
Dec 13 01:55:11.404351 sshd[2267]: Accepted publickey for core from 139.178.68.195 port 42128 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:11.410180 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:11.425665 systemd-logind[2008]: New session 2 of user core.
Dec 13 01:55:11.430770 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:55:11.570129 sshd[2267]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:11.577898 systemd[1]: sshd@1-172.31.18.118:22-139.178.68.195:42128.service: Deactivated successfully.
Dec 13 01:55:11.582247 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:55:11.589209 systemd-logind[2008]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:55:11.612820 systemd[1]: Started sshd@2-172.31.18.118:22-139.178.68.195:42132.service - OpenSSH per-connection server daemon (139.178.68.195:42132).
Dec 13 01:55:11.614909 systemd-logind[2008]: Removed session 2.
Dec 13 01:55:11.721395 amazon-ssm-agent[2168]: 2024-12-13 01:55:11 INFO [EC2Identity] EC2 registration was successful.
Dec 13 01:55:11.756571 amazon-ssm-agent[2168]: 2024-12-13 01:55:11 INFO [CredentialRefresher] credentialRefresher has started
Dec 13 01:55:11.756747 amazon-ssm-agent[2168]: 2024-12-13 01:55:11 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 13 01:55:11.756896 amazon-ssm-agent[2168]: 2024-12-13 01:55:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 13 01:55:11.791736 sshd[2274]: Accepted publickey for core from 139.178.68.195 port 42132 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:11.794924 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:11.806369 systemd-logind[2008]: New session 3 of user core.
Dec 13 01:55:11.815731 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:55:11.823155 kubelet[2246]: E1213 01:55:11.822336 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:55:11.823598 amazon-ssm-agent[2168]: 2024-12-13 01:55:11 INFO [CredentialRefresher] Next credential rotation will be in 30.816651642766665 minutes
Dec 13 01:55:11.828478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:55:11.828831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:55:11.831525 systemd[1]: kubelet.service: Consumed 1.275s CPU time.
Dec 13 01:55:11.939084 sshd[2274]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:11.943598 systemd[1]: sshd@2-172.31.18.118:22-139.178.68.195:42132.service: Deactivated successfully.
Dec 13 01:55:11.946936 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:55:11.949972 systemd-logind[2008]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:55:11.951677 systemd-logind[2008]: Removed session 3.
Dec 13 01:55:11.975806 systemd[1]: Started sshd@3-172.31.18.118:22-139.178.68.195:42146.service - OpenSSH per-connection server daemon (139.178.68.195:42146).
Dec 13 01:55:12.159322 sshd[2282]: Accepted publickey for core from 139.178.68.195 port 42146 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:12.161897 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:12.169129 systemd-logind[2008]: New session 4 of user core.
Dec 13 01:55:12.178530 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:55:12.306233 sshd[2282]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:12.312616 systemd[1]: sshd@3-172.31.18.118:22-139.178.68.195:42146.service: Deactivated successfully.
Dec 13 01:55:12.316791 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:55:12.318105 systemd-logind[2008]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:55:12.319920 systemd-logind[2008]: Removed session 4.
Dec 13 01:55:12.340802 systemd[1]: Started sshd@4-172.31.18.118:22-139.178.68.195:42148.service - OpenSSH per-connection server daemon (139.178.68.195:42148).
Dec 13 01:55:12.514117 sshd[2289]: Accepted publickey for core from 139.178.68.195 port 42148 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:12.516716 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:12.524080 systemd-logind[2008]: New session 5 of user core.
Dec 13 01:55:12.536535 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:55:12.654777 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:55:12.655451 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:55:12.671474 sudo[2292]: pam_unix(sudo:session): session closed for user root
Dec 13 01:55:12.694928 sshd[2289]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:12.700784 systemd[1]: sshd@4-172.31.18.118:22-139.178.68.195:42148.service: Deactivated successfully.
Dec 13 01:55:12.704165 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:55:12.706529 systemd-logind[2008]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:55:12.710770 systemd-logind[2008]: Removed session 5.
Dec 13 01:55:12.740005 systemd[1]: Started sshd@5-172.31.18.118:22-139.178.68.195:42156.service - OpenSSH per-connection server daemon (139.178.68.195:42156).
Dec 13 01:55:12.786941 amazon-ssm-agent[2168]: 2024-12-13 01:55:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 13 01:55:12.887098 amazon-ssm-agent[2168]: 2024-12-13 01:55:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2300) started
Dec 13 01:55:12.907322 sshd[2297]: Accepted publickey for core from 139.178.68.195 port 42156 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:12.909780 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:12.922168 systemd-logind[2008]: New session 6 of user core.
Dec 13 01:55:12.925945 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:55:12.987831 amazon-ssm-agent[2168]: 2024-12-13 01:55:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 13 01:55:13.041476 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:55:13.042113 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:55:13.048501 sudo[2311]: pam_unix(sudo:session): session closed for user root
Dec 13 01:55:13.058828 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:55:13.059564 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:55:13.084793 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:55:13.088356 auditctl[2314]: No rules
Dec 13 01:55:13.090726 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:55:13.092366 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:55:13.100594 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:55:13.150954 augenrules[2332]: No rules
Dec 13 01:55:13.153208 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:55:13.156608 sudo[2310]: pam_unix(sudo:session): session closed for user root
Dec 13 01:55:13.180037 sshd[2297]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:13.184586 systemd-logind[2008]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:55:13.185809 systemd[1]: sshd@5-172.31.18.118:22-139.178.68.195:42156.service: Deactivated successfully.
Dec 13 01:55:13.188847 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:55:13.193629 systemd-logind[2008]: Removed session 6.
Dec 13 01:55:13.218773 systemd[1]: Started sshd@6-172.31.18.118:22-139.178.68.195:42158.service - OpenSSH per-connection server daemon (139.178.68.195:42158).
Dec 13 01:55:13.380647 sshd[2340]: Accepted publickey for core from 139.178.68.195 port 42158 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:13.383152 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:13.391374 systemd-logind[2008]: New session 7 of user core.
Dec 13 01:55:13.398535 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:55:13.501817 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:55:13.502480 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:55:13.941994 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:55:13.943486 (dockerd)[2359]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:55:14.315985 dockerd[2359]: time="2024-12-13T01:55:14.315247394Z" level=info msg="Starting up"
Dec 13 01:55:14.486151 systemd[1]: var-lib-docker-metacopy\x2dcheck3453951648-merged.mount: Deactivated successfully.
Dec 13 01:55:14.498576 dockerd[2359]: time="2024-12-13T01:55:14.498496863Z" level=info msg="Loading containers: start."
Dec 13 01:55:14.647391 kernel: Initializing XFRM netlink socket
Dec 13 01:55:14.681205 (udev-worker)[2382]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:55:14.767887 systemd-networkd[1926]: docker0: Link UP
Dec 13 01:55:14.796491 dockerd[2359]: time="2024-12-13T01:55:14.796402036Z" level=info msg="Loading containers: done."
Dec 13 01:55:14.821210 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1812940205-merged.mount: Deactivated successfully.
Dec 13 01:55:14.827335 dockerd[2359]: time="2024-12-13T01:55:14.827144176Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:55:14.827335 dockerd[2359]: time="2024-12-13T01:55:14.827351596Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:55:14.827699 dockerd[2359]: time="2024-12-13T01:55:14.827563636Z" level=info msg="Daemon has completed initialization"
Dec 13 01:55:14.890142 dockerd[2359]: time="2024-12-13T01:55:14.888378401Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:55:14.888982 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:55:15.954033 containerd[2029]: time="2024-12-13T01:55:15.953975419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 01:55:16.595111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744261590.mount: Deactivated successfully.
Dec 13 01:55:17.851220 containerd[2029]: time="2024-12-13T01:55:17.851125097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:17.853312 containerd[2029]: time="2024-12-13T01:55:17.853237829Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615585"
Dec 13 01:55:17.854400 containerd[2029]: time="2024-12-13T01:55:17.854304010Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:17.865081 containerd[2029]: time="2024-12-13T01:55:17.864964785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:17.867047 containerd[2029]: time="2024-12-13T01:55:17.866764449Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 1.912722156s"
Dec 13 01:55:17.867047 containerd[2029]: time="2024-12-13T01:55:17.866827564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\""
Dec 13 01:55:17.868404 containerd[2029]: time="2024-12-13T01:55:17.868107440Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 01:55:19.322382 containerd[2029]: time="2024-12-13T01:55:19.322099675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:19.324472 containerd[2029]: time="2024-12-13T01:55:19.324357283Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470096"
Dec 13 01:55:19.326320 containerd[2029]: time="2024-12-13T01:55:19.325474033Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:19.331250 containerd[2029]: time="2024-12-13T01:55:19.331167519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:19.334318 containerd[2029]: time="2024-12-13T01:55:19.333543446Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.465371414s"
Dec 13 01:55:19.334318 containerd[2029]: time="2024-12-13T01:55:19.333602912Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\""
Dec 13 01:55:19.334874 containerd[2029]: time="2024-12-13T01:55:19.334831450Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 01:55:20.598722 containerd[2029]: time="2024-12-13T01:55:20.598465924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:20.600313 containerd[2029]: time="2024-12-13T01:55:20.600205473Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024202"
Dec 13 01:55:20.601674 containerd[2029]: time="2024-12-13T01:55:20.601590005Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:20.607554 containerd[2029]: time="2024-12-13T01:55:20.607459259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:20.610043 containerd[2029]: time="2024-12-13T01:55:20.609839016Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.27482259s"
Dec 13 01:55:20.610043 containerd[2029]: time="2024-12-13T01:55:20.609899550Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\""
Dec 13 01:55:20.611136 containerd[2029]: time="2024-12-13T01:55:20.610848024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 01:55:21.878516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769746784.mount: Deactivated successfully.
Dec 13 01:55:21.881400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:55:21.893073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:55:22.248750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:55:22.261250 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:55:22.342494 kubelet[2577]: E1213 01:55:22.341837 2577 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:55:22.353333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:55:22.353679 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:55:22.599499 containerd[2029]: time="2024-12-13T01:55:22.599340996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:22.601415 containerd[2029]: time="2024-12-13T01:55:22.601343332Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771426"
Dec 13 01:55:22.603560 containerd[2029]: time="2024-12-13T01:55:22.603483498Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:22.607674 containerd[2029]: time="2024-12-13T01:55:22.607621042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:22.609174 containerd[2029]: time="2024-12-13T01:55:22.608990458Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.998084193s"
Dec 13 01:55:22.609174 containerd[2029]: time="2024-12-13T01:55:22.609044725Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\""
Dec 13 01:55:22.609937 containerd[2029]: time="2024-12-13T01:55:22.609874316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:55:23.224951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334731688.mount: Deactivated successfully.
Dec 13 01:55:24.328965 containerd[2029]: time="2024-12-13T01:55:24.328878928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.331177 containerd[2029]: time="2024-12-13T01:55:24.331107194Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:55:24.333548 containerd[2029]: time="2024-12-13T01:55:24.333459169Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.339662 containerd[2029]: time="2024-12-13T01:55:24.339563788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.342126 containerd[2029]: time="2024-12-13T01:55:24.341919185Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.731854191s" Dec 13 01:55:24.342126 containerd[2029]: time="2024-12-13T01:55:24.341981521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:55:24.343777 containerd[2029]: time="2024-12-13T01:55:24.343717192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:55:24.928171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479460168.mount: Deactivated successfully. 
Dec 13 01:55:24.942383 containerd[2029]: time="2024-12-13T01:55:24.942303099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.944452 containerd[2029]: time="2024-12-13T01:55:24.944387797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Dec 13 01:55:24.946815 containerd[2029]: time="2024-12-13T01:55:24.946742930Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.951671 containerd[2029]: time="2024-12-13T01:55:24.951605228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.953574 containerd[2029]: time="2024-12-13T01:55:24.953371227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 609.596551ms" Dec 13 01:55:24.953574 containerd[2029]: time="2024-12-13T01:55:24.953441414Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 01:55:24.954863 containerd[2029]: time="2024-12-13T01:55:24.954536013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:55:25.822242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533043846.mount: Deactivated successfully. 
Dec 13 01:55:28.136711 containerd[2029]: time="2024-12-13T01:55:28.136641321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:28.138502 containerd[2029]: time="2024-12-13T01:55:28.138423227Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Dec 13 01:55:28.140322 containerd[2029]: time="2024-12-13T01:55:28.140209984Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:28.150296 containerd[2029]: time="2024-12-13T01:55:28.149308023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:28.151797 containerd[2029]: time="2024-12-13T01:55:28.151744965Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.197156101s" Dec 13 01:55:28.151959 containerd[2029]: time="2024-12-13T01:55:28.151929941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Dec 13 01:55:32.370200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:55:32.378048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:32.659797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:55:32.664332 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:32.745301 kubelet[2714]: E1213 01:55:32.743617 2714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:32.749162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:32.749688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:35.986565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:36.003767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:36.065544 systemd[1]: Reloading requested from client PID 2729 ('systemctl') (unit session-7.scope)... Dec 13 01:55:36.065755 systemd[1]: Reloading... Dec 13 01:55:36.284321 zram_generator::config[2773]: No configuration found. Dec 13 01:55:36.516928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:36.687529 systemd[1]: Reloading finished in 620 ms. Dec 13 01:55:36.785996 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:55:36.786176 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:55:36.786959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:36.795976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:37.104547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:55:37.105680 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:55:37.200806 kubelet[2832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:37.202301 kubelet[2832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:55:37.202301 kubelet[2832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:37.202301 kubelet[2832]: I1213 01:55:37.201457 2832 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:55:37.842519 kubelet[2832]: I1213 01:55:37.842473 2832 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:55:37.842698 kubelet[2832]: I1213 01:55:37.842678 2832 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:55:37.843215 kubelet[2832]: I1213 01:55:37.843193 2832 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:55:38.074199 kubelet[2832]: I1213 01:55:38.074145 2832 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:38.075819 kubelet[2832]: E1213 01:55:38.075753 2832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://172.31.18.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:38.091053 kubelet[2832]: E1213 01:55:38.090978 2832 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:55:38.091053 kubelet[2832]: I1213 01:55:38.091039 2832 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:55:38.101252 kubelet[2832]: I1213 01:55:38.101011 2832 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:55:38.102298 kubelet[2832]: I1213 01:55:38.101762 2832 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:55:38.102298 kubelet[2832]: I1213 01:55:38.102038 2832 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:55:38.103259 kubelet[2832]: I1213 01:55:38.102086 2832 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-18-118","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:55:38.103259 kubelet[2832]: I1213 01:55:38.102666 2832 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:55:38.103259 kubelet[2832]: I1213 01:55:38.102689 2832 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:55:38.103259 kubelet[2832]: I1213 01:55:38.102908 2832 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:38.124715 kubelet[2832]: I1213 01:55:38.124668 2832 kubelet.go:408] 
"Attempting to sync node with API server" Dec 13 01:55:38.125442 kubelet[2832]: I1213 01:55:38.124905 2832 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:55:38.125442 kubelet[2832]: I1213 01:55:38.124960 2832 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:55:38.125442 kubelet[2832]: I1213 01:55:38.124982 2832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:55:38.143348 kubelet[2832]: W1213 01:55:38.143247 2832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-118&limit=500&resourceVersion=0": dial tcp 172.31.18.118:6443: connect: connection refused Dec 13 01:55:38.143594 kubelet[2832]: E1213 01:55:38.143559 2832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-118&limit=500&resourceVersion=0\": dial tcp 172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:38.144208 kubelet[2832]: I1213 01:55:38.144177 2832 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:55:38.156079 kubelet[2832]: I1213 01:55:38.155966 2832 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:55:38.161163 kubelet[2832]: W1213 01:55:38.160970 2832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.118:6443: connect: connection refused Dec 13 01:55:38.161163 kubelet[2832]: E1213 01:55:38.161068 2832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:38.166909 kubelet[2832]: W1213 01:55:38.166863 2832 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:55:38.168046 kubelet[2832]: I1213 01:55:38.167995 2832 server.go:1269] "Started kubelet" Dec 13 01:55:38.174151 kubelet[2832]: I1213 01:55:38.173877 2832 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:55:38.175770 kubelet[2832]: I1213 01:55:38.175717 2832 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:55:38.180246 kubelet[2832]: I1213 01:55:38.180181 2832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:55:38.187223 kubelet[2832]: I1213 01:55:38.186532 2832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:55:38.187223 kubelet[2832]: I1213 01:55:38.186927 2832 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:55:38.189119 kubelet[2832]: I1213 01:55:38.189034 2832 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:55:38.193089 kubelet[2832]: I1213 01:55:38.193003 2832 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:55:38.193440 kubelet[2832]: E1213 01:55:38.193404 2832 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-118\" not found" Dec 13 01:55:38.194283 kubelet[2832]: I1213 01:55:38.194246 2832 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:55:38.194410 kubelet[2832]: I1213 01:55:38.194369 2832 
reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:55:38.196134 kubelet[2832]: W1213 01:55:38.195392 2832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.118:6443: connect: connection refused Dec 13 01:55:38.196134 kubelet[2832]: E1213 01:55:38.195488 2832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:38.196134 kubelet[2832]: E1213 01:55:38.195601 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-118?timeout=10s\": dial tcp 172.31.18.118:6443: connect: connection refused" interval="200ms" Dec 13 01:55:38.198665 kubelet[2832]: E1213 01:55:38.188529 2832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.118:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.118:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-118.181099c76283251f default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-118,UID:ip-172-31-18-118,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-118,},FirstTimestamp:2024-12-13 01:55:38.167944479 +0000 UTC m=+1.052360545,LastTimestamp:2024-12-13 01:55:38.167944479 +0000 UTC m=+1.052360545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-118,}" Dec 13 01:55:38.199311 kubelet[2832]: I1213 01:55:38.199258 2832 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:55:38.199597 kubelet[2832]: I1213 01:55:38.199566 2832 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:55:38.202955 kubelet[2832]: I1213 01:55:38.202890 2832 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:55:38.218850 kubelet[2832]: E1213 01:55:38.218586 2832 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:55:38.235249 kubelet[2832]: I1213 01:55:38.234953 2832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:55:38.239409 kubelet[2832]: I1213 01:55:38.237895 2832 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:55:38.239409 kubelet[2832]: I1213 01:55:38.237940 2832 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:55:38.239409 kubelet[2832]: I1213 01:55:38.237971 2832 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:55:38.239409 kubelet[2832]: E1213 01:55:38.238036 2832 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:55:38.239409 kubelet[2832]: W1213 01:55:38.238760 2832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.118:6443: connect: connection refused Dec 13 01:55:38.239409 kubelet[2832]: E1213 01:55:38.238836 2832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:38.245781 kubelet[2832]: I1213 01:55:38.245717 2832 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:55:38.245781 kubelet[2832]: I1213 01:55:38.245762 2832 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:55:38.245949 kubelet[2832]: I1213 01:55:38.245796 2832 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:38.269204 kubelet[2832]: I1213 01:55:38.269144 2832 policy_none.go:49] "None policy: Start" Dec 13 01:55:38.270505 kubelet[2832]: I1213 01:55:38.270479 2832 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:55:38.270778 kubelet[2832]: I1213 01:55:38.270718 2832 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:55:38.293585 kubelet[2832]: E1213 01:55:38.293533 2832 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-118\" not found" Dec 13 01:55:38.308372 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:55:38.327937 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:55:38.334068 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:55:38.338911 kubelet[2832]: E1213 01:55:38.338863 2832 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:55:38.346564 kubelet[2832]: I1213 01:55:38.346499 2832 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:55:38.348383 kubelet[2832]: I1213 01:55:38.347140 2832 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:55:38.348383 kubelet[2832]: I1213 01:55:38.347182 2832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:55:38.348383 kubelet[2832]: I1213 01:55:38.347752 2832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:55:38.354994 kubelet[2832]: E1213 01:55:38.352401 2832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-118\" not found" Dec 13 01:55:38.396858 kubelet[2832]: E1213 01:55:38.396785 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-118?timeout=10s\": dial tcp 172.31.18.118:6443: connect: connection refused" interval="400ms" Dec 13 01:55:38.450024 kubelet[2832]: I1213 01:55:38.449921 2832 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-118" Dec 13 01:55:38.450562 kubelet[2832]: E1213 
01:55:38.450488 2832 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.118:6443/api/v1/nodes\": dial tcp 172.31.18.118:6443: connect: connection refused" node="ip-172-31-18-118" Dec 13 01:55:38.557351 systemd[1]: Created slice kubepods-burstable-podc83738dcbf9ce515ce19b2be45af10e9.slice - libcontainer container kubepods-burstable-podc83738dcbf9ce515ce19b2be45af10e9.slice. Dec 13 01:55:38.582805 systemd[1]: Created slice kubepods-burstable-podace863fd36c4828eaa05360adced59f6.slice - libcontainer container kubepods-burstable-podace863fd36c4828eaa05360adced59f6.slice. Dec 13 01:55:38.596480 kubelet[2832]: I1213 01:55:38.596366 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c83738dcbf9ce515ce19b2be45af10e9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-118\" (UID: \"c83738dcbf9ce515ce19b2be45af10e9\") " pod="kube-system/kube-apiserver-ip-172-31-18-118" Dec 13 01:55:38.596785 kubelet[2832]: I1213 01:55:38.596449 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:38.596785 kubelet[2832]: I1213 01:55:38.596671 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:38.596785 kubelet[2832]: I1213 01:55:38.596732 2832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c83738dcbf9ce515ce19b2be45af10e9-ca-certs\") pod \"kube-apiserver-ip-172-31-18-118\" (UID: \"c83738dcbf9ce515ce19b2be45af10e9\") " pod="kube-system/kube-apiserver-ip-172-31-18-118" Dec 13 01:55:38.597168 kubelet[2832]: I1213 01:55:38.597005 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c83738dcbf9ce515ce19b2be45af10e9-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-118\" (UID: \"c83738dcbf9ce515ce19b2be45af10e9\") " pod="kube-system/kube-apiserver-ip-172-31-18-118" Dec 13 01:55:38.597168 kubelet[2832]: I1213 01:55:38.597057 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:38.597168 kubelet[2832]: I1213 01:55:38.597132 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26d01e39c311317df02608f7dcc5904c-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-118\" (UID: \"26d01e39c311317df02608f7dcc5904c\") " pod="kube-system/kube-scheduler-ip-172-31-18-118" Dec 13 01:55:38.597586 kubelet[2832]: I1213 01:55:38.597441 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:38.597586 kubelet[2832]: I1213 
01:55:38.597511 2832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:38.599583 systemd[1]: Created slice kubepods-burstable-pod26d01e39c311317df02608f7dcc5904c.slice - libcontainer container kubepods-burstable-pod26d01e39c311317df02608f7dcc5904c.slice. Dec 13 01:55:38.653328 kubelet[2832]: I1213 01:55:38.653220 2832 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-118" Dec 13 01:55:38.653964 kubelet[2832]: E1213 01:55:38.653906 2832 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.118:6443/api/v1/nodes\": dial tcp 172.31.18.118:6443: connect: connection refused" node="ip-172-31-18-118" Dec 13 01:55:38.797744 kubelet[2832]: E1213 01:55:38.797670 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-118?timeout=10s\": dial tcp 172.31.18.118:6443: connect: connection refused" interval="800ms" Dec 13 01:55:38.877147 containerd[2029]: time="2024-12-13T01:55:38.877022746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-118,Uid:c83738dcbf9ce515ce19b2be45af10e9,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:38.897143 containerd[2029]: time="2024-12-13T01:55:38.896767778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-118,Uid:ace863fd36c4828eaa05360adced59f6,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:38.904827 containerd[2029]: time="2024-12-13T01:55:38.904682493Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-118,Uid:26d01e39c311317df02608f7dcc5904c,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:39.009622 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 01:55:39.056749 kubelet[2832]: I1213 01:55:39.056663 2832 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-118" Dec 13 01:55:39.057262 kubelet[2832]: E1213 01:55:39.057214 2832 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.118:6443/api/v1/nodes\": dial tcp 172.31.18.118:6443: connect: connection refused" node="ip-172-31-18-118" Dec 13 01:55:39.227494 kubelet[2832]: W1213 01:55:39.227214 2832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-118&limit=500&resourceVersion=0": dial tcp 172.31.18.118:6443: connect: connection refused Dec 13 01:55:39.227494 kubelet[2832]: E1213 01:55:39.227349 2832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-118&limit=500&resourceVersion=0\": dial tcp 172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:39.357820 kubelet[2832]: W1213 01:55:39.357680 2832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.118:6443: connect: connection refused Dec 13 01:55:39.357820 kubelet[2832]: E1213 01:55:39.357779 2832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:39.416592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517844663.mount: Deactivated successfully. Dec 13 01:55:39.423139 kubelet[2832]: W1213 01:55:39.423061 2832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.118:6443: connect: connection refused Dec 13 01:55:39.423436 kubelet[2832]: E1213 01:55:39.423373 2832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:39.433209 containerd[2029]: time="2024-12-13T01:55:39.433135388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:39.435007 containerd[2029]: time="2024-12-13T01:55:39.434950971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:55:39.437226 containerd[2029]: time="2024-12-13T01:55:39.437160723Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:39.439321 containerd[2029]: time="2024-12-13T01:55:39.439210604Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:39.441214 containerd[2029]: time="2024-12-13T01:55:39.441159262Z" level=info msg="stop pulling 
image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:55:39.444088 containerd[2029]: time="2024-12-13T01:55:39.443996327Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:39.445863 containerd[2029]: time="2024-12-13T01:55:39.445671981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:55:39.451717 containerd[2029]: time="2024-12-13T01:55:39.451627161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:39.454673 containerd[2029]: time="2024-12-13T01:55:39.454344239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 577.217797ms" Dec 13 01:55:39.456708 containerd[2029]: time="2024-12-13T01:55:39.456626615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.830785ms" Dec 13 01:55:39.460928 containerd[2029]: time="2024-12-13T01:55:39.460848766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.957265ms" Dec 13 01:55:39.516215 kubelet[2832]: W1213 01:55:39.514858 2832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.118:6443: connect: connection refused Dec 13 01:55:39.516215 kubelet[2832]: E1213 01:55:39.514965 2832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:39.600105 kubelet[2832]: E1213 01:55:39.600022 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-118?timeout=10s\": dial tcp 172.31.18.118:6443: connect: connection refused" interval="1.6s" Dec 13 01:55:39.672300 containerd[2029]: time="2024-12-13T01:55:39.671802987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:39.672300 containerd[2029]: time="2024-12-13T01:55:39.671960338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:39.672300 containerd[2029]: time="2024-12-13T01:55:39.671987412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.672573 containerd[2029]: time="2024-12-13T01:55:39.672495313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.681419 containerd[2029]: time="2024-12-13T01:55:39.679789391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:39.681419 containerd[2029]: time="2024-12-13T01:55:39.679880312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:39.681419 containerd[2029]: time="2024-12-13T01:55:39.679910075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.681419 containerd[2029]: time="2024-12-13T01:55:39.680054231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.682094 containerd[2029]: time="2024-12-13T01:55:39.679754717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:39.682094 containerd[2029]: time="2024-12-13T01:55:39.679849481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:39.682094 containerd[2029]: time="2024-12-13T01:55:39.679875462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.682094 containerd[2029]: time="2024-12-13T01:55:39.680033557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.731630 systemd[1]: Started cri-containerd-65758025b4c4e1da781a92949abbcca2261a62656b011187aa8aa5e8c271a586.scope - libcontainer container 65758025b4c4e1da781a92949abbcca2261a62656b011187aa8aa5e8c271a586. 
Dec 13 01:55:39.735513 systemd[1]: Started cri-containerd-8310315fd72715cae1ed278c29c2cd6574e2b6fbc3dc005e2322c7493876e2d3.scope - libcontainer container 8310315fd72715cae1ed278c29c2cd6574e2b6fbc3dc005e2322c7493876e2d3. Dec 13 01:55:39.750951 systemd[1]: Started cri-containerd-c85aef07698d7a2e4845f7f33d8c8309683e305d85edeb351f27fbdd28029ed4.scope - libcontainer container c85aef07698d7a2e4845f7f33d8c8309683e305d85edeb351f27fbdd28029ed4. Dec 13 01:55:39.839754 containerd[2029]: time="2024-12-13T01:55:39.839512449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-118,Uid:c83738dcbf9ce515ce19b2be45af10e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8310315fd72715cae1ed278c29c2cd6574e2b6fbc3dc005e2322c7493876e2d3\"" Dec 13 01:55:39.861654 kubelet[2832]: I1213 01:55:39.861196 2832 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-118" Dec 13 01:55:39.862063 containerd[2029]: time="2024-12-13T01:55:39.862001428Z" level=info msg="CreateContainer within sandbox \"8310315fd72715cae1ed278c29c2cd6574e2b6fbc3dc005e2322c7493876e2d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:55:39.862741 kubelet[2832]: E1213 01:55:39.862641 2832 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.118:6443/api/v1/nodes\": dial tcp 172.31.18.118:6443: connect: connection refused" node="ip-172-31-18-118" Dec 13 01:55:39.874801 containerd[2029]: time="2024-12-13T01:55:39.874646184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-118,Uid:ace863fd36c4828eaa05360adced59f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c85aef07698d7a2e4845f7f33d8c8309683e305d85edeb351f27fbdd28029ed4\"" Dec 13 01:55:39.881360 containerd[2029]: time="2024-12-13T01:55:39.880903495Z" level=info msg="CreateContainer within sandbox \"c85aef07698d7a2e4845f7f33d8c8309683e305d85edeb351f27fbdd28029ed4\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:55:39.881360 containerd[2029]: time="2024-12-13T01:55:39.881158538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-118,Uid:26d01e39c311317df02608f7dcc5904c,Namespace:kube-system,Attempt:0,} returns sandbox id \"65758025b4c4e1da781a92949abbcca2261a62656b011187aa8aa5e8c271a586\"" Dec 13 01:55:39.891174 containerd[2029]: time="2024-12-13T01:55:39.891108655Z" level=info msg="CreateContainer within sandbox \"65758025b4c4e1da781a92949abbcca2261a62656b011187aa8aa5e8c271a586\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:55:39.925633 containerd[2029]: time="2024-12-13T01:55:39.925563738Z" level=info msg="CreateContainer within sandbox \"8310315fd72715cae1ed278c29c2cd6574e2b6fbc3dc005e2322c7493876e2d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c86923e205758853b732663ad239ca6ec38c08310010392a890a63bee7d9f08e\"" Dec 13 01:55:39.926775 containerd[2029]: time="2024-12-13T01:55:39.926619894Z" level=info msg="StartContainer for \"c86923e205758853b732663ad239ca6ec38c08310010392a890a63bee7d9f08e\"" Dec 13 01:55:39.930318 containerd[2029]: time="2024-12-13T01:55:39.930109114Z" level=info msg="CreateContainer within sandbox \"c85aef07698d7a2e4845f7f33d8c8309683e305d85edeb351f27fbdd28029ed4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6\"" Dec 13 01:55:39.932159 containerd[2029]: time="2024-12-13T01:55:39.930884317Z" level=info msg="StartContainer for \"c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6\"" Dec 13 01:55:39.935241 containerd[2029]: time="2024-12-13T01:55:39.935166149Z" level=info msg="CreateContainer within sandbox \"65758025b4c4e1da781a92949abbcca2261a62656b011187aa8aa5e8c271a586\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6\"" Dec 13 01:55:39.936906 containerd[2029]: time="2024-12-13T01:55:39.936740040Z" level=info msg="StartContainer for \"236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6\"" Dec 13 01:55:39.988032 systemd[1]: Started cri-containerd-c86923e205758853b732663ad239ca6ec38c08310010392a890a63bee7d9f08e.scope - libcontainer container c86923e205758853b732663ad239ca6ec38c08310010392a890a63bee7d9f08e. Dec 13 01:55:40.023553 systemd[1]: Started cri-containerd-c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6.scope - libcontainer container c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6. Dec 13 01:55:40.042628 systemd[1]: Started cri-containerd-236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6.scope - libcontainer container 236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6. Dec 13 01:55:40.085523 kubelet[2832]: E1213 01:55:40.085442 2832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.118:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:55:40.117172 containerd[2029]: time="2024-12-13T01:55:40.116025663Z" level=info msg="StartContainer for \"c86923e205758853b732663ad239ca6ec38c08310010392a890a63bee7d9f08e\" returns successfully" Dec 13 01:55:40.147315 containerd[2029]: time="2024-12-13T01:55:40.147207142Z" level=info msg="StartContainer for \"c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6\" returns successfully" Dec 13 01:55:40.212656 containerd[2029]: time="2024-12-13T01:55:40.212577831Z" level=info msg="StartContainer for \"236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6\" returns successfully" Dec 13 01:55:41.466372 
kubelet[2832]: I1213 01:55:41.465921 2832 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-118" Dec 13 01:55:44.904303 kubelet[2832]: E1213 01:55:44.903186 2832 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-118\" not found" node="ip-172-31-18-118" Dec 13 01:55:44.965615 kubelet[2832]: I1213 01:55:44.965199 2832 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-118" Dec 13 01:55:45.131313 kubelet[2832]: I1213 01:55:45.129157 2832 apiserver.go:52] "Watching apiserver" Dec 13 01:55:45.195256 kubelet[2832]: I1213 01:55:45.194745 2832 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:55:47.109522 systemd[1]: Reloading requested from client PID 3116 ('systemctl') (unit session-7.scope)... Dec 13 01:55:47.110004 systemd[1]: Reloading... Dec 13 01:55:47.286335 zram_generator::config[3162]: No configuration found. Dec 13 01:55:47.575918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:47.776981 systemd[1]: Reloading finished in 666 ms. Dec 13 01:55:47.850663 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:47.865069 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:55:47.865577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:47.865671 systemd[1]: kubelet.service: Consumed 1.396s CPU time, 118.1M memory peak, 0B memory swap peak. Dec 13 01:55:47.873851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:48.182113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:55:48.199896 (kubelet)[3216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:55:48.289094 kubelet[3216]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:48.289094 kubelet[3216]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:55:48.289094 kubelet[3216]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:48.289684 kubelet[3216]: I1213 01:55:48.289206 3216 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:55:48.300654 kubelet[3216]: I1213 01:55:48.300593 3216 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:55:48.300654 kubelet[3216]: I1213 01:55:48.300642 3216 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:55:48.301533 kubelet[3216]: I1213 01:55:48.301118 3216 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:55:48.304023 kubelet[3216]: I1213 01:55:48.303893 3216 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 01:55:48.308779 kubelet[3216]: I1213 01:55:48.308053 3216 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:48.322942 kubelet[3216]: E1213 01:55:48.322819 3216 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:55:48.323104 kubelet[3216]: I1213 01:55:48.322933 3216 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:55:48.329433 kubelet[3216]: I1213 01:55:48.329372 3216 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:55:48.331244 kubelet[3216]: I1213 01:55:48.329636 3216 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:55:48.331244 kubelet[3216]: I1213 01:55:48.329976 3216 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:55:48.331244 kubelet[3216]: I1213 01:55:48.330016 3216 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-18-118","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:55:48.331244 kubelet[3216]: I1213 01:55:48.330406 3216 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:55:48.334715 kubelet[3216]: I1213 01:55:48.330429 3216 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:55:48.334715 kubelet[3216]: I1213 01:55:48.330490 3216 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:48.334715 kubelet[3216]: I1213 01:55:48.330709 3216 kubelet.go:408] 
"Attempting to sync node with API server" Dec 13 01:55:48.334715 kubelet[3216]: I1213 01:55:48.332037 3216 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:55:48.334715 kubelet[3216]: I1213 01:55:48.332144 3216 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:55:48.334715 kubelet[3216]: I1213 01:55:48.333732 3216 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:55:48.341626 kubelet[3216]: I1213 01:55:48.340182 3216 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:55:48.341626 kubelet[3216]: I1213 01:55:48.341098 3216 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:55:48.344484 kubelet[3216]: I1213 01:55:48.343794 3216 server.go:1269] "Started kubelet" Dec 13 01:55:48.347879 kubelet[3216]: I1213 01:55:48.346443 3216 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:55:48.348222 kubelet[3216]: I1213 01:55:48.348182 3216 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:55:48.350172 kubelet[3216]: I1213 01:55:48.350104 3216 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:55:48.353307 kubelet[3216]: I1213 01:55:48.352468 3216 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:55:48.353307 kubelet[3216]: I1213 01:55:48.352835 3216 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:55:48.361077 kubelet[3216]: I1213 01:55:48.361007 3216 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:55:48.366196 kubelet[3216]: I1213 01:55:48.364995 3216 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:55:48.366196 kubelet[3216]: E1213 
01:55:48.365513 3216 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-118\" not found" Dec 13 01:55:48.365405 sudo[3229]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:55:48.367177 kubelet[3216]: I1213 01:55:48.367079 3216 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:55:48.367504 sudo[3229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:55:48.378818 kubelet[3216]: I1213 01:55:48.375240 3216 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:55:48.439401 kubelet[3216]: I1213 01:55:48.438588 3216 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:55:48.439401 kubelet[3216]: I1213 01:55:48.438630 3216 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:55:48.439401 kubelet[3216]: I1213 01:55:48.438764 3216 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:55:48.484418 kubelet[3216]: I1213 01:55:48.484173 3216 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:55:48.491533 kubelet[3216]: I1213 01:55:48.491487 3216 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:55:48.498452 kubelet[3216]: I1213 01:55:48.498414 3216 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:55:48.498801 kubelet[3216]: I1213 01:55:48.498679 3216 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:55:48.499034 kubelet[3216]: E1213 01:55:48.498986 3216 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:55:48.504545 kubelet[3216]: E1213 01:55:48.496242 3216 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:55:48.602469 kubelet[3216]: E1213 01:55:48.602424 3216 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:55:48.617359 kubelet[3216]: I1213 01:55:48.617323 3216 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:55:48.617679 kubelet[3216]: I1213 01:55:48.617656 3216 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:55:48.617971 kubelet[3216]: I1213 01:55:48.617922 3216 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:48.618938 kubelet[3216]: I1213 01:55:48.618891 3216 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:55:48.619168 kubelet[3216]: I1213 01:55:48.619126 3216 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:55:48.619416 kubelet[3216]: I1213 01:55:48.619396 3216 policy_none.go:49] "None policy: Start" Dec 13 01:55:48.622790 kubelet[3216]: I1213 01:55:48.622746 3216 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:55:48.623209 kubelet[3216]: I1213 01:55:48.622984 3216 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:55:48.623637 kubelet[3216]: I1213 01:55:48.623513 3216 state_mem.go:75] "Updated machine memory state" Dec 13 01:55:48.643564 
kubelet[3216]: I1213 01:55:48.642610 3216 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:55:48.643564 kubelet[3216]: I1213 01:55:48.642889 3216 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:55:48.643564 kubelet[3216]: I1213 01:55:48.642908 3216 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:55:48.648434 kubelet[3216]: I1213 01:55:48.647226 3216 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:55:48.789607 kubelet[3216]: I1213 01:55:48.789474 3216 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-118" Dec 13 01:55:48.838450 kubelet[3216]: I1213 01:55:48.838201 3216 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-18-118" Dec 13 01:55:48.838450 kubelet[3216]: I1213 01:55:48.838388 3216 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-118" Dec 13 01:55:48.885158 kubelet[3216]: I1213 01:55:48.881581 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c83738dcbf9ce515ce19b2be45af10e9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-118\" (UID: \"c83738dcbf9ce515ce19b2be45af10e9\") " pod="kube-system/kube-apiserver-ip-172-31-18-118" Dec 13 01:55:48.885158 kubelet[3216]: I1213 01:55:48.883155 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:48.885158 kubelet[3216]: I1213 01:55:48.883216 3216 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:48.885158 kubelet[3216]: I1213 01:55:48.883259 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26d01e39c311317df02608f7dcc5904c-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-118\" (UID: \"26d01e39c311317df02608f7dcc5904c\") " pod="kube-system/kube-scheduler-ip-172-31-18-118" Dec 13 01:55:48.885158 kubelet[3216]: I1213 01:55:48.883316 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c83738dcbf9ce515ce19b2be45af10e9-ca-certs\") pod \"kube-apiserver-ip-172-31-18-118\" (UID: \"c83738dcbf9ce515ce19b2be45af10e9\") " pod="kube-system/kube-apiserver-ip-172-31-18-118" Dec 13 01:55:48.886215 kubelet[3216]: I1213 01:55:48.883358 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c83738dcbf9ce515ce19b2be45af10e9-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-118\" (UID: \"c83738dcbf9ce515ce19b2be45af10e9\") " pod="kube-system/kube-apiserver-ip-172-31-18-118" Dec 13 01:55:48.886215 kubelet[3216]: I1213 01:55:48.883394 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:48.886215 kubelet[3216]: I1213 
01:55:48.883429 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:48.886215 kubelet[3216]: I1213 01:55:48.883464 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ace863fd36c4828eaa05360adced59f6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-118\" (UID: \"ace863fd36c4828eaa05360adced59f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-118" Dec 13 01:55:49.335856 kubelet[3216]: I1213 01:55:49.335290 3216 apiserver.go:52] "Watching apiserver" Dec 13 01:55:49.368019 kubelet[3216]: I1213 01:55:49.367964 3216 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:55:49.380496 sudo[3229]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:49.476812 kubelet[3216]: I1213 01:55:49.476440 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-118" podStartSLOduration=1.47641526 podStartE2EDuration="1.47641526s" podCreationTimestamp="2024-12-13 01:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:49.476416124 +0000 UTC m=+1.267705064" watchObservedRunningTime="2024-12-13 01:55:49.47641526 +0000 UTC m=+1.267704140" Dec 13 01:55:49.476812 kubelet[3216]: I1213 01:55:49.476655 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-118" podStartSLOduration=1.476613092 podStartE2EDuration="1.476613092s" podCreationTimestamp="2024-12-13 01:55:48 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:49.460155008 +0000 UTC m=+1.251443912" watchObservedRunningTime="2024-12-13 01:55:49.476613092 +0000 UTC m=+1.267901960" Dec 13 01:55:49.577500 kubelet[3216]: I1213 01:55:49.576430 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-118" podStartSLOduration=1.5763447080000001 podStartE2EDuration="1.576344708s" podCreationTimestamp="2024-12-13 01:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:49.496772444 +0000 UTC m=+1.288061348" watchObservedRunningTime="2024-12-13 01:55:49.576344708 +0000 UTC m=+1.367633576" Dec 13 01:55:51.936867 sudo[2343]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:51.961646 sshd[2340]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:51.966998 systemd[1]: sshd@6-172.31.18.118:22-139.178.68.195:42158.service: Deactivated successfully. Dec 13 01:55:51.970087 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:55:51.971453 systemd[1]: session-7.scope: Consumed 11.241s CPU time, 153.9M memory peak, 0B memory swap peak. Dec 13 01:55:51.973916 systemd-logind[2008]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:55:51.976236 systemd-logind[2008]: Removed session 7. 
Dec 13 01:55:53.094376 kubelet[3216]: I1213 01:55:53.094058 3216 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:55:53.096224 kubelet[3216]: I1213 01:55:53.095762 3216 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:55:53.096407 containerd[2029]: time="2024-12-13T01:55:53.095376742Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:55:53.499808 update_engine[2009]: I20241213 01:55:53.499596 2009 update_attempter.cc:509] Updating boot flags... Dec 13 01:55:53.589368 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3298) Dec 13 01:55:53.964800 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3300) Dec 13 01:55:54.072430 systemd[1]: Created slice kubepods-besteffort-pod09fa9ed0_130d_451c_8bc0_58fc70a70c23.slice - libcontainer container kubepods-besteffort-pod09fa9ed0_130d_451c_8bc0_58fc70a70c23.slice. 
Dec 13 01:55:54.124411 kubelet[3216]: W1213 01:55:54.123902 3216 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-18-118" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-118' and this object
Dec 13 01:55:54.124411 kubelet[3216]: E1213 01:55:54.123971 3216 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-18-118\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-118' and this object" logger="UnhandledError"
Dec 13 01:55:54.124411 kubelet[3216]: W1213 01:55:54.124054 3216 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-18-118" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-118' and this object
Dec 13 01:55:54.124411 kubelet[3216]: E1213 01:55:54.124080 3216 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-18-118\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-118' and this object" logger="UnhandledError"
Dec 13 01:55:54.124411 kubelet[3216]: W1213 01:55:54.124214 3216 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-18-118" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-118' and this object
Dec 13 01:55:54.127126 kubelet[3216]: E1213 01:55:54.124240 3216 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-18-118\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-118' and this object" logger="UnhandledError"
Dec 13 01:55:54.129652 kubelet[3216]: I1213 01:55:54.128660 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09fa9ed0-130d-451c-8bc0-58fc70a70c23-lib-modules\") pod \"kube-proxy-xvtfb\" (UID: \"09fa9ed0-130d-451c-8bc0-58fc70a70c23\") " pod="kube-system/kube-proxy-xvtfb"
Dec 13 01:55:54.129652 kubelet[3216]: I1213 01:55:54.128723 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cni-path\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.129652 kubelet[3216]: I1213 01:55:54.128768 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-xtables-lock\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.129652 kubelet[3216]: I1213 01:55:54.128806 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-clustermesh-secrets\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.129652 kubelet[3216]: I1213 01:55:54.128845 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csn5z\" (UniqueName: \"kubernetes.io/projected/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-kube-api-access-csn5z\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.129652 kubelet[3216]: I1213 01:55:54.128885 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-cgroup\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.132534 kubelet[3216]: I1213 01:55:54.128919 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-etc-cni-netd\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.132534 kubelet[3216]: I1213 01:55:54.128954 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-lib-modules\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.132534 kubelet[3216]: I1213 01:55:54.128987 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/09fa9ed0-130d-451c-8bc0-58fc70a70c23-kube-proxy\") pod \"kube-proxy-xvtfb\" (UID: \"09fa9ed0-130d-451c-8bc0-58fc70a70c23\") " pod="kube-system/kube-proxy-xvtfb"
Dec 13 01:55:54.132534 kubelet[3216]: I1213 01:55:54.129021 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09fa9ed0-130d-451c-8bc0-58fc70a70c23-xtables-lock\") pod \"kube-proxy-xvtfb\" (UID: \"09fa9ed0-130d-451c-8bc0-58fc70a70c23\") " pod="kube-system/kube-proxy-xvtfb"
Dec 13 01:55:54.132534 kubelet[3216]: I1213 01:55:54.129059 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-config-path\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.132534 kubelet[3216]: I1213 01:55:54.129093 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-host-proc-sys-kernel\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.136412 kubelet[3216]: I1213 01:55:54.129131 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-run\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.136412 kubelet[3216]: I1213 01:55:54.129180 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-bpf-maps\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.136412 kubelet[3216]: I1213 01:55:54.130675 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-hostproc\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.136412 kubelet[3216]: I1213 01:55:54.135348 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-hubble-tls\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.136412 kubelet[3216]: I1213 01:55:54.135401 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-host-proc-sys-net\") pod \"cilium-ch58n\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " pod="kube-system/cilium-ch58n"
Dec 13 01:55:54.136412 kubelet[3216]: I1213 01:55:54.135440 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7s79\" (UniqueName: \"kubernetes.io/projected/09fa9ed0-130d-451c-8bc0-58fc70a70c23-kube-api-access-g7s79\") pod \"kube-proxy-xvtfb\" (UID: \"09fa9ed0-130d-451c-8bc0-58fc70a70c23\") " pod="kube-system/kube-proxy-xvtfb"
Dec 13 01:55:54.145195 systemd[1]: Created slice kubepods-burstable-poddcb7b7c1_d706_40f5_9868_d12d58dd3d63.slice - libcontainer container kubepods-burstable-poddcb7b7c1_d706_40f5_9868_d12d58dd3d63.slice.
Dec 13 01:55:54.436320 containerd[2029]: time="2024-12-13T01:55:54.435076056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvtfb,Uid:09fa9ed0-130d-451c-8bc0-58fc70a70c23,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:54.438688 systemd[1]: Created slice kubepods-besteffort-podcfec0ce3_a62f_4e4d_9b65_b06acc6cbf7c.slice - libcontainer container kubepods-besteffort-podcfec0ce3_a62f_4e4d_9b65_b06acc6cbf7c.slice.
Dec 13 01:55:54.455741 kubelet[3216]: I1213 01:55:54.455692 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psd89\" (UniqueName: \"kubernetes.io/projected/cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c-kube-api-access-psd89\") pod \"cilium-operator-5d85765b45-v4mjq\" (UID: \"cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c\") " pod="kube-system/cilium-operator-5d85765b45-v4mjq"
Dec 13 01:55:54.457399 kubelet[3216]: I1213 01:55:54.457066 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c-cilium-config-path\") pod \"cilium-operator-5d85765b45-v4mjq\" (UID: \"cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c\") " pod="kube-system/cilium-operator-5d85765b45-v4mjq"
Dec 13 01:55:54.585199 containerd[2029]: time="2024-12-13T01:55:54.584346937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:54.585199 containerd[2029]: time="2024-12-13T01:55:54.584447185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:54.585199 containerd[2029]: time="2024-12-13T01:55:54.584501725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:54.585199 containerd[2029]: time="2024-12-13T01:55:54.584673505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:54.603304 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3300)
Dec 13 01:55:54.669958 systemd[1]: Started cri-containerd-ff91f2a4932e0a0f79778a6a4f1dda717274f0c11fd418eb4e60df5123860c94.scope - libcontainer container ff91f2a4932e0a0f79778a6a4f1dda717274f0c11fd418eb4e60df5123860c94.
Dec 13 01:55:54.768316 containerd[2029]: time="2024-12-13T01:55:54.767965646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvtfb,Uid:09fa9ed0-130d-451c-8bc0-58fc70a70c23,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff91f2a4932e0a0f79778a6a4f1dda717274f0c11fd418eb4e60df5123860c94\""
Dec 13 01:55:54.777708 containerd[2029]: time="2024-12-13T01:55:54.777416006Z" level=info msg="CreateContainer within sandbox \"ff91f2a4932e0a0f79778a6a4f1dda717274f0c11fd418eb4e60df5123860c94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:55:54.838301 containerd[2029]: time="2024-12-13T01:55:54.837375422Z" level=info msg="CreateContainer within sandbox \"ff91f2a4932e0a0f79778a6a4f1dda717274f0c11fd418eb4e60df5123860c94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"911f3773b495e907cf92f6dc15cfb1fc5d824a9c358f8891c84af40839d19ec4\""
Dec 13 01:55:54.841780 containerd[2029]: time="2024-12-13T01:55:54.841590974Z" level=info msg="StartContainer for \"911f3773b495e907cf92f6dc15cfb1fc5d824a9c358f8891c84af40839d19ec4\""
Dec 13 01:55:54.948642 systemd[1]: Started cri-containerd-911f3773b495e907cf92f6dc15cfb1fc5d824a9c358f8891c84af40839d19ec4.scope - libcontainer container 911f3773b495e907cf92f6dc15cfb1fc5d824a9c358f8891c84af40839d19ec4.
Dec 13 01:55:55.004896 containerd[2029]: time="2024-12-13T01:55:55.004682603Z" level=info msg="StartContainer for \"911f3773b495e907cf92f6dc15cfb1fc5d824a9c358f8891c84af40839d19ec4\" returns successfully"
Dec 13 01:55:55.246290 kubelet[3216]: E1213 01:55:55.246207 3216 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:55:55.247484 kubelet[3216]: E1213 01:55:55.246370 3216 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-config-path podName:dcb7b7c1-d706-40f5-9868-d12d58dd3d63 nodeName:}" failed. No retries permitted until 2024-12-13 01:55:55.746338028 +0000 UTC m=+7.537626896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-config-path") pod "cilium-ch58n" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:55:55.664577 containerd[2029]: time="2024-12-13T01:55:55.664519214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v4mjq,Uid:cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:55.715353 containerd[2029]: time="2024-12-13T01:55:55.715114191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:55.716207 containerd[2029]: time="2024-12-13T01:55:55.716097987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:55.716447 containerd[2029]: time="2024-12-13T01:55:55.716315523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:55.717255 containerd[2029]: time="2024-12-13T01:55:55.717077415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:55.758526 systemd[1]: Started cri-containerd-708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80.scope - libcontainer container 708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80.
Dec 13 01:55:55.818785 containerd[2029]: time="2024-12-13T01:55:55.818687775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v4mjq,Uid:cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\""
Dec 13 01:55:55.825675 containerd[2029]: time="2024-12-13T01:55:55.825384507Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 01:55:55.956208 containerd[2029]: time="2024-12-13T01:55:55.956054860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ch58n,Uid:dcb7b7c1-d706-40f5-9868-d12d58dd3d63,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:56.004531 containerd[2029]: time="2024-12-13T01:55:56.004390092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:56.005703 containerd[2029]: time="2024-12-13T01:55:56.004952220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:56.005821 containerd[2029]: time="2024-12-13T01:55:56.005744616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:56.006065 containerd[2029]: time="2024-12-13T01:56:56.005975412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:56.039628 systemd[1]: Started cri-containerd-5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec.scope - libcontainer container 5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec.
Dec 13 01:55:56.080798 containerd[2029]: time="2024-12-13T01:55:56.080707332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ch58n,Uid:dcb7b7c1-d706-40f5-9868-d12d58dd3d63,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\""
Dec 13 01:55:57.292446 kubelet[3216]: I1213 01:55:57.292255 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xvtfb" podStartSLOduration=4.2922317660000004 podStartE2EDuration="4.292231766s" podCreationTimestamp="2024-12-13 01:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:55.591996362 +0000 UTC m=+7.383285230" watchObservedRunningTime="2024-12-13 01:55:57.292231766 +0000 UTC m=+9.083520634"
Dec 13 01:55:59.794388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650335758.mount: Deactivated successfully.
Dec 13 01:56:00.715650 containerd[2029]: time="2024-12-13T01:56:00.715573255Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:00.717146 containerd[2029]: time="2024-12-13T01:56:00.717066283Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138274"
Dec 13 01:56:00.718938 containerd[2029]: time="2024-12-13T01:56:00.718669123Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:00.722604 containerd[2029]: time="2024-12-13T01:56:00.722519383Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.897029936s"
Dec 13 01:56:00.722604 containerd[2029]: time="2024-12-13T01:56:00.722586535Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 13 01:56:00.724979 containerd[2029]: time="2024-12-13T01:56:00.724905103Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 01:56:00.729438 containerd[2029]: time="2024-12-13T01:56:00.729041216Z" level=info msg="CreateContainer within sandbox \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:56:00.751457 containerd[2029]: time="2024-12-13T01:56:00.751381388Z" level=info msg="CreateContainer within sandbox \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\""
Dec 13 01:56:00.755345 containerd[2029]: time="2024-12-13T01:56:00.753473156Z" level=info msg="StartContainer for \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\""
Dec 13 01:56:00.757593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619025667.mount: Deactivated successfully.
Dec 13 01:56:00.812644 systemd[1]: Started cri-containerd-d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8.scope - libcontainer container d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8.
Dec 13 01:56:00.865400 containerd[2029]: time="2024-12-13T01:56:00.865070264Z" level=info msg="StartContainer for \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\" returns successfully"
Dec 13 01:56:09.077860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253678018.mount: Deactivated successfully.
Dec 13 01:56:11.950175 containerd[2029]: time="2024-12-13T01:56:11.950080135Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:11.953060 containerd[2029]: time="2024-12-13T01:56:11.952988383Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650886"
Dec 13 01:56:11.955343 containerd[2029]: time="2024-12-13T01:56:11.955164355Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:11.960496 containerd[2029]: time="2024-12-13T01:56:11.960085795Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.235069608s"
Dec 13 01:56:11.960496 containerd[2029]: time="2024-12-13T01:56:11.960212023Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 13 01:56:11.965195 containerd[2029]: time="2024-12-13T01:56:11.965122459Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:56:11.998402 containerd[2029]: time="2024-12-13T01:56:11.998320591Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\""
Dec 13 01:56:12.002127 containerd[2029]: time="2024-12-13T01:56:12.000540819Z" level=info msg="StartContainer for \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\""
Dec 13 01:56:12.058664 systemd[1]: run-containerd-runc-k8s.io-28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f-runc.nJrM2E.mount: Deactivated successfully.
Dec 13 01:56:12.072640 systemd[1]: Started cri-containerd-28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f.scope - libcontainer container 28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f.
Dec 13 01:56:12.124941 containerd[2029]: time="2024-12-13T01:56:12.124837888Z" level=info msg="StartContainer for \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\" returns successfully"
Dec 13 01:56:12.144938 systemd[1]: cri-containerd-28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f.scope: Deactivated successfully.
Dec 13 01:56:12.650729 containerd[2029]: time="2024-12-13T01:56:12.650551627Z" level=info msg="shim disconnected" id=28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f namespace=k8s.io
Dec 13 01:56:12.650729 containerd[2029]: time="2024-12-13T01:56:12.650721535Z" level=warning msg="cleaning up after shim disconnected" id=28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f namespace=k8s.io
Dec 13 01:56:12.651085 containerd[2029]: time="2024-12-13T01:56:12.650770567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:12.677652 kubelet[3216]: I1213 01:56:12.676558 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-v4mjq" podStartSLOduration=13.775338447 podStartE2EDuration="18.676532851s" podCreationTimestamp="2024-12-13 01:55:54 +0000 UTC" firstStartedPulling="2024-12-13 01:55:55.822752727 +0000 UTC m=+7.614041583" lastFinishedPulling="2024-12-13 01:56:00.723947131 +0000 UTC m=+12.515235987" observedRunningTime="2024-12-13 01:56:01.751348473 +0000 UTC m=+13.542637365" watchObservedRunningTime="2024-12-13 01:56:12.676532851 +0000 UTC m=+24.467821719"
Dec 13 01:56:12.684328 containerd[2029]: time="2024-12-13T01:56:12.682743595Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:56:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:56:12.988330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f-rootfs.mount: Deactivated successfully.
Dec 13 01:56:13.648925 containerd[2029]: time="2024-12-13T01:56:13.648672740Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:56:13.672672 containerd[2029]: time="2024-12-13T01:56:13.672586652Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\""
Dec 13 01:56:13.677529 containerd[2029]: time="2024-12-13T01:56:13.677461988Z" level=info msg="StartContainer for \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\""
Dec 13 01:56:13.744611 systemd[1]: Started cri-containerd-2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7.scope - libcontainer container 2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7.
Dec 13 01:56:13.785143 containerd[2029]: time="2024-12-13T01:56:13.784972760Z" level=info msg="StartContainer for \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\" returns successfully"
Dec 13 01:56:13.813027 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:56:13.813609 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:56:13.813727 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:56:13.828919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:56:13.829361 systemd[1]: cri-containerd-2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7.scope: Deactivated successfully.
Dec 13 01:56:13.873379 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:56:13.897840 containerd[2029]: time="2024-12-13T01:56:13.897609981Z" level=info msg="shim disconnected" id=2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7 namespace=k8s.io
Dec 13 01:56:13.897840 containerd[2029]: time="2024-12-13T01:56:13.897728517Z" level=warning msg="cleaning up after shim disconnected" id=2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7 namespace=k8s.io
Dec 13 01:56:13.897840 containerd[2029]: time="2024-12-13T01:56:13.897774057Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:13.987788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7-rootfs.mount: Deactivated successfully.
Dec 13 01:56:14.657624 containerd[2029]: time="2024-12-13T01:56:14.657403653Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:56:14.691625 containerd[2029]: time="2024-12-13T01:56:14.691500345Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\""
Dec 13 01:56:14.695025 containerd[2029]: time="2024-12-13T01:56:14.694103841Z" level=info msg="StartContainer for \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\""
Dec 13 01:56:14.754614 systemd[1]: Started cri-containerd-b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982.scope - libcontainer container b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982.
Dec 13 01:56:14.821448 containerd[2029]: time="2024-12-13T01:56:14.820890429Z" level=info msg="StartContainer for \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\" returns successfully"
Dec 13 01:56:14.825303 systemd[1]: cri-containerd-b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982.scope: Deactivated successfully.
Dec 13 01:56:14.868826 containerd[2029]: time="2024-12-13T01:56:14.868702810Z" level=info msg="shim disconnected" id=b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982 namespace=k8s.io
Dec 13 01:56:14.868826 containerd[2029]: time="2024-12-13T01:56:14.868771870Z" level=warning msg="cleaning up after shim disconnected" id=b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982 namespace=k8s.io
Dec 13 01:56:14.868826 containerd[2029]: time="2024-12-13T01:56:14.868795426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:14.891182 containerd[2029]: time="2024-12-13T01:56:14.891088258Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:56:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:56:14.987615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982-rootfs.mount: Deactivated successfully.
Dec 13 01:56:15.662953 containerd[2029]: time="2024-12-13T01:56:15.662866534Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:56:15.704389 containerd[2029]: time="2024-12-13T01:56:15.704241034Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\""
Dec 13 01:56:15.705602 containerd[2029]: time="2024-12-13T01:56:15.705512074Z" level=info msg="StartContainer for \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\""
Dec 13 01:56:15.778676 systemd[1]: Started cri-containerd-4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5.scope - libcontainer container 4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5.
Dec 13 01:56:15.840114 systemd[1]: cri-containerd-4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5.scope: Deactivated successfully.
Dec 13 01:56:15.841066 containerd[2029]: time="2024-12-13T01:56:15.840839411Z" level=info msg="StartContainer for \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\" returns successfully"
Dec 13 01:56:15.887896 containerd[2029]: time="2024-12-13T01:56:15.887792735Z" level=info msg="shim disconnected" id=4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5 namespace=k8s.io
Dec 13 01:56:15.888246 containerd[2029]: time="2024-12-13T01:56:15.888202319Z" level=warning msg="cleaning up after shim disconnected" id=4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5 namespace=k8s.io
Dec 13 01:56:15.888688 containerd[2029]: time="2024-12-13T01:56:15.888411047Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:15.988854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5-rootfs.mount: Deactivated successfully.
Dec 13 01:56:16.673738 containerd[2029]: time="2024-12-13T01:56:16.673601531Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:56:16.700907 containerd[2029]: time="2024-12-13T01:56:16.700180787Z" level=info msg="CreateContainer within sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\""
Dec 13 01:56:16.705132 containerd[2029]: time="2024-12-13T01:56:16.705065843Z" level=info msg="StartContainer for \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\""
Dec 13 01:56:16.779623 systemd[1]: Started cri-containerd-8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331.scope - libcontainer container 8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331.
Dec 13 01:56:16.837486 containerd[2029]: time="2024-12-13T01:56:16.837415692Z" level=info msg="StartContainer for \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\" returns successfully"
Dec 13 01:56:17.071942 kubelet[3216]: I1213 01:56:17.070882 3216 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 01:56:17.144754 systemd[1]: Created slice kubepods-burstable-pod9812087d_afbe_4660_960e_7d2099087f2c.slice - libcontainer container kubepods-burstable-pod9812087d_afbe_4660_960e_7d2099087f2c.slice.
Dec 13 01:56:17.162976 systemd[1]: Created slice kubepods-burstable-pod1dd2fcc8_4ae0_4b4a_aaf7_929cfb26ea44.slice - libcontainer container kubepods-burstable-pod1dd2fcc8_4ae0_4b4a_aaf7_929cfb26ea44.slice.
Dec 13 01:56:17.234669 kubelet[3216]: I1213 01:56:17.234594 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm92s\" (UniqueName: \"kubernetes.io/projected/9812087d-afbe-4660-960e-7d2099087f2c-kube-api-access-vm92s\") pod \"coredns-6f6b679f8f-4bqsg\" (UID: \"9812087d-afbe-4660-960e-7d2099087f2c\") " pod="kube-system/coredns-6f6b679f8f-4bqsg"
Dec 13 01:56:17.234839 kubelet[3216]: I1213 01:56:17.234674 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd2fcc8-4ae0-4b4a-aaf7-929cfb26ea44-config-volume\") pod \"coredns-6f6b679f8f-v69rd\" (UID: \"1dd2fcc8-4ae0-4b4a-aaf7-929cfb26ea44\") " pod="kube-system/coredns-6f6b679f8f-v69rd"
Dec 13 01:56:17.234839 kubelet[3216]: I1213 01:56:17.234743 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9812087d-afbe-4660-960e-7d2099087f2c-config-volume\") pod \"coredns-6f6b679f8f-4bqsg\" (UID: \"9812087d-afbe-4660-960e-7d2099087f2c\") " pod="kube-system/coredns-6f6b679f8f-4bqsg"
Dec 13 01:56:17.234839 kubelet[3216]: I1213 01:56:17.234808 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rs7\" (UniqueName: \"kubernetes.io/projected/1dd2fcc8-4ae0-4b4a-aaf7-929cfb26ea44-kube-api-access-v8rs7\") pod \"coredns-6f6b679f8f-v69rd\" (UID: \"1dd2fcc8-4ae0-4b4a-aaf7-929cfb26ea44\") " pod="kube-system/coredns-6f6b679f8f-v69rd"
Dec 13 01:56:17.460859 containerd[2029]: time="2024-12-13T01:56:17.460772063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4bqsg,Uid:9812087d-afbe-4660-960e-7d2099087f2c,Namespace:kube-system,Attempt:0,}"
Dec 13 01:56:17.475647 containerd[2029]: time="2024-12-13T01:56:17.475561151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v69rd,Uid:1dd2fcc8-4ae0-4b4a-aaf7-929cfb26ea44,Namespace:kube-system,Attempt:0,}"
Dec 13 01:56:19.801234 (udev-worker)[4309]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:19.802216 (udev-worker)[4310]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:19.804319 systemd-networkd[1926]: cilium_host: Link UP
Dec 13 01:56:19.804678 systemd-networkd[1926]: cilium_net: Link UP
Dec 13 01:56:19.805017 systemd-networkd[1926]: cilium_net: Gained carrier
Dec 13 01:56:19.808640 systemd-networkd[1926]: cilium_host: Gained carrier
Dec 13 01:56:20.004099 systemd-networkd[1926]: cilium_vxlan: Link UP
Dec 13 01:56:20.004120 systemd-networkd[1926]: cilium_vxlan: Gained carrier
Dec 13 01:56:20.201451 systemd-networkd[1926]: cilium_host: Gained IPv6LL
Dec 13 01:56:20.442824 systemd-networkd[1926]: cilium_net: Gained IPv6LL
Dec 13 01:56:20.503669 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:56:21.658241 systemd-networkd[1926]: cilium_vxlan: Gained IPv6LL
Dec 13 01:56:21.896989 systemd-networkd[1926]: lxc_health: Link UP
Dec 13 01:56:21.906250 systemd-networkd[1926]: lxc_health: Gained carrier
Dec 13 01:56:21.999604 kubelet[3216]: I1213 01:56:21.999451 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ch58n" podStartSLOduration=13.12040777 podStartE2EDuration="28.999419945s" podCreationTimestamp="2024-12-13 01:55:53 +0000 UTC" firstStartedPulling="2024-12-13 01:55:56.083256672 +0000 UTC m=+7.874545540" lastFinishedPulling="2024-12-13 01:56:11.962268859 +0000 UTC m=+23.753557715" observedRunningTime="2024-12-13 01:56:17.724112544 +0000 UTC m=+29.515401436" watchObservedRunningTime="2024-12-13 01:56:21.999419945 +0000 UTC m=+33.790708873"
Dec 13 01:56:22.657113 systemd-networkd[1926]: lxc293bc7676010: Link UP
Dec 13 01:56:22.666504 (udev-worker)[4316]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:22.669828 systemd-networkd[1926]: lxcf0bd0b0676b5: Link UP
Dec 13 01:56:22.685449 kernel: eth0: renamed from tmp4586b
Dec 13 01:56:22.690603 (udev-worker)[4270]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:22.695446 kernel: eth0: renamed from tmp08464
Dec 13 01:56:22.701927 systemd-networkd[1926]: lxc293bc7676010: Gained carrier
Dec 13 01:56:22.708010 systemd-networkd[1926]: lxcf0bd0b0676b5: Gained carrier
Dec 13 01:56:23.449573 systemd-networkd[1926]: lxc_health: Gained IPv6LL
Dec 13 01:56:24.601635 systemd-networkd[1926]: lxcf0bd0b0676b5: Gained IPv6LL
Dec 13 01:56:24.665553 systemd-networkd[1926]: lxc293bc7676010: Gained IPv6LL
Dec 13 01:56:25.284802 systemd[1]: Started sshd@7-172.31.18.118:22-139.178.68.195:59888.service - OpenSSH per-connection server daemon (139.178.68.195:59888).
Dec 13 01:56:25.483040 sshd[4674]: Accepted publickey for core from 139.178.68.195 port 59888 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:25.485231 sshd[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:25.493917 systemd-logind[2008]: New session 8 of user core.
Dec 13 01:56:25.503586 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:56:25.884240 sshd[4674]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:25.891877 systemd[1]: sshd@7-172.31.18.118:22-139.178.68.195:59888.service: Deactivated successfully.
Dec 13 01:56:25.899844 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:56:25.906345 systemd-logind[2008]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:56:25.911701 systemd-logind[2008]: Removed session 8.
Dec 13 01:56:26.787049 ntpd[1998]: Listen normally on 7 cilium_host 192.168.0.229:123
Dec 13 01:56:26.788262 ntpd[1998]: 13 Dec 01:56:26 ntpd[1998]: Listen normally on 7 cilium_host 192.168.0.229:123
Dec 13 01:56:26.788262 ntpd[1998]: 13 Dec 01:56:26 ntpd[1998]: Listen normally on 8 cilium_net [fe80::ac42:1cff:fee6:5264%4]:123
Dec 13 01:56:26.788262 ntpd[1998]: 13 Dec 01:56:26 ntpd[1998]: Listen normally on 9 cilium_host [fe80::e89b:1eff:fea1:6da6%5]:123
Dec 13 01:56:26.787180 ntpd[1998]: Listen normally on 8 cilium_net [fe80::ac42:1cff:fee6:5264%4]:123
Dec 13 01:56:26.787259 ntpd[1998]: Listen normally on 9 cilium_host [fe80::e89b:1eff:fea1:6da6%5]:123
Dec 13 01:56:26.789107 ntpd[1998]: Listen normally on 10 cilium_vxlan [fe80::ac05:97ff:feb5:3929%6]:123
Dec 13 01:56:26.790038 ntpd[1998]: 13 Dec 01:56:26 ntpd[1998]: Listen normally on 10 cilium_vxlan [fe80::ac05:97ff:feb5:3929%6]:123
Dec 13 01:56:26.790038 ntpd[1998]: 13 Dec 01:56:26 ntpd[1998]: Listen normally on 11 lxc_health [fe80::9036:9fff:fe64:55f6%8]:123
Dec 13 01:56:26.790038 ntpd[1998]: 13 Dec 01:56:26 ntpd[1998]: Listen normally on 12 lxc293bc7676010 [fe80::473:61ff:fe22:dd8d%10]:123
Dec 13 01:56:26.790038 ntpd[1998]: 13 Dec 01:56:26 ntpd[1998]: Listen normally on 13 lxcf0bd0b0676b5 [fe80::184c:abff:fe42:b89a%12]:123
Dec 13 01:56:26.789240 ntpd[1998]: Listen normally on 11 lxc_health [fe80::9036:9fff:fe64:55f6%8]:123
Dec 13 01:56:26.789348 ntpd[1998]: Listen normally on 12 lxc293bc7676010 [fe80::473:61ff:fe22:dd8d%10]:123
Dec 13 01:56:26.789420 ntpd[1998]: Listen normally on 13 lxcf0bd0b0676b5 [fe80::184c:abff:fe42:b89a%12]:123
Dec 13 01:56:30.923503 systemd[1]: Started sshd@8-172.31.18.118:22-139.178.68.195:58956.service - OpenSSH per-connection server daemon (139.178.68.195:58956).
Dec 13 01:56:31.111321 sshd[4708]: Accepted publickey for core from 139.178.68.195 port 58956 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:31.112722 sshd[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:31.121786 systemd-logind[2008]: New session 9 of user core.
Dec 13 01:56:31.132640 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:56:31.463693 sshd[4708]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:31.473488 systemd-logind[2008]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:56:31.474789 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:56:31.478376 systemd[1]: sshd@8-172.31.18.118:22-139.178.68.195:58956.service: Deactivated successfully.
Dec 13 01:56:31.489831 systemd-logind[2008]: Removed session 9.
Dec 13 01:56:31.689221 containerd[2029]: time="2024-12-13T01:56:31.688433197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:56:31.689221 containerd[2029]: time="2024-12-13T01:56:31.688565053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:56:31.689221 containerd[2029]: time="2024-12-13T01:56:31.688603009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:31.689221 containerd[2029]: time="2024-12-13T01:56:31.688788433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:31.744183 containerd[2029]: time="2024-12-13T01:56:31.741515714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:56:31.744183 containerd[2029]: time="2024-12-13T01:56:31.741620906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:56:31.744183 containerd[2029]: time="2024-12-13T01:56:31.741648410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:31.744183 containerd[2029]: time="2024-12-13T01:56:31.741786518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:31.778672 systemd[1]: Started cri-containerd-08464c6d7c7bddfa79cad0d9f518a6e978e30dcb402ac2865e070ac852bdd8a2.scope - libcontainer container 08464c6d7c7bddfa79cad0d9f518a6e978e30dcb402ac2865e070ac852bdd8a2.
Dec 13 01:56:31.823566 systemd[1]: Started cri-containerd-4586b03e063addf3effcb8a979f2e5c96e156c435285775a97b12e12548c65c3.scope - libcontainer container 4586b03e063addf3effcb8a979f2e5c96e156c435285775a97b12e12548c65c3.
Dec 13 01:56:31.910703 containerd[2029]: time="2024-12-13T01:56:31.910538642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4bqsg,Uid:9812087d-afbe-4660-960e-7d2099087f2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"08464c6d7c7bddfa79cad0d9f518a6e978e30dcb402ac2865e070ac852bdd8a2\""
Dec 13 01:56:31.921810 containerd[2029]: time="2024-12-13T01:56:31.921738398Z" level=info msg="CreateContainer within sandbox \"08464c6d7c7bddfa79cad0d9f518a6e978e30dcb402ac2865e070ac852bdd8a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:56:31.956899 containerd[2029]: time="2024-12-13T01:56:31.956768679Z" level=info msg="CreateContainer within sandbox \"08464c6d7c7bddfa79cad0d9f518a6e978e30dcb402ac2865e070ac852bdd8a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ec42935d13b8b2df9335c752f0352ad7bbf8520603ef698a15d419bd8166844\""
Dec 13 01:56:31.962150 containerd[2029]: time="2024-12-13T01:56:31.960501927Z" level=info msg="StartContainer for \"1ec42935d13b8b2df9335c752f0352ad7bbf8520603ef698a15d419bd8166844\""
Dec 13 01:56:31.988576 containerd[2029]: time="2024-12-13T01:56:31.988482651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v69rd,Uid:1dd2fcc8-4ae0-4b4a-aaf7-929cfb26ea44,Namespace:kube-system,Attempt:0,} returns sandbox id \"4586b03e063addf3effcb8a979f2e5c96e156c435285775a97b12e12548c65c3\""
Dec 13 01:56:31.997955 containerd[2029]: time="2024-12-13T01:56:31.997799523Z" level=info msg="CreateContainer within sandbox \"4586b03e063addf3effcb8a979f2e5c96e156c435285775a97b12e12548c65c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:56:32.037604 systemd[1]: Started cri-containerd-1ec42935d13b8b2df9335c752f0352ad7bbf8520603ef698a15d419bd8166844.scope - libcontainer container 1ec42935d13b8b2df9335c752f0352ad7bbf8520603ef698a15d419bd8166844.
Dec 13 01:56:32.043139 containerd[2029]: time="2024-12-13T01:56:32.040580123Z" level=info msg="CreateContainer within sandbox \"4586b03e063addf3effcb8a979f2e5c96e156c435285775a97b12e12548c65c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"570eb82aea6ce0dd999869a48b114f3d5646b9a227e532614b9ef877070bed04\""
Dec 13 01:56:32.048188 containerd[2029]: time="2024-12-13T01:56:32.046655171Z" level=info msg="StartContainer for \"570eb82aea6ce0dd999869a48b114f3d5646b9a227e532614b9ef877070bed04\""
Dec 13 01:56:32.122617 systemd[1]: Started cri-containerd-570eb82aea6ce0dd999869a48b114f3d5646b9a227e532614b9ef877070bed04.scope - libcontainer container 570eb82aea6ce0dd999869a48b114f3d5646b9a227e532614b9ef877070bed04.
Dec 13 01:56:32.162166 containerd[2029]: time="2024-12-13T01:56:32.161970204Z" level=info msg="StartContainer for \"1ec42935d13b8b2df9335c752f0352ad7bbf8520603ef698a15d419bd8166844\" returns successfully"
Dec 13 01:56:32.217002 containerd[2029]: time="2024-12-13T01:56:32.216016452Z" level=info msg="StartContainer for \"570eb82aea6ce0dd999869a48b114f3d5646b9a227e532614b9ef877070bed04\" returns successfully"
Dec 13 01:56:32.764490 kubelet[3216]: I1213 01:56:32.764387 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v69rd" podStartSLOduration=38.764364843 podStartE2EDuration="38.764364843s" podCreationTimestamp="2024-12-13 01:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:32.761733471 +0000 UTC m=+44.553022375" watchObservedRunningTime="2024-12-13 01:56:32.764364843 +0000 UTC m=+44.555653711"
Dec 13 01:56:32.789183 kubelet[3216]: I1213 01:56:32.787609 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4bqsg" podStartSLOduration=38.787586439 podStartE2EDuration="38.787586439s" podCreationTimestamp="2024-12-13 01:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:32.784761819 +0000 UTC m=+44.576050723" watchObservedRunningTime="2024-12-13 01:56:32.787586439 +0000 UTC m=+44.578875307"
Dec 13 01:56:36.503796 systemd[1]: Started sshd@9-172.31.18.118:22-139.178.68.195:43192.service - OpenSSH per-connection server daemon (139.178.68.195:43192).
Dec 13 01:56:36.684046 sshd[4891]: Accepted publickey for core from 139.178.68.195 port 43192 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:36.688065 sshd[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:36.697062 systemd-logind[2008]: New session 10 of user core.
Dec 13 01:56:36.710622 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:56:36.971128 sshd[4891]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:36.976238 systemd[1]: sshd@9-172.31.18.118:22-139.178.68.195:43192.service: Deactivated successfully.
Dec 13 01:56:36.980632 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:56:36.984937 systemd-logind[2008]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:56:36.987191 systemd-logind[2008]: Removed session 10.
Dec 13 01:56:42.011833 systemd[1]: Started sshd@10-172.31.18.118:22-139.178.68.195:43194.service - OpenSSH per-connection server daemon (139.178.68.195:43194).
Dec 13 01:56:42.190829 sshd[4908]: Accepted publickey for core from 139.178.68.195 port 43194 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:42.193488 sshd[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:42.201053 systemd-logind[2008]: New session 11 of user core.
Dec 13 01:56:42.212547 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:56:42.450702 sshd[4908]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:42.456899 systemd[1]: sshd@10-172.31.18.118:22-139.178.68.195:43194.service: Deactivated successfully.
Dec 13 01:56:42.461748 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:56:42.463656 systemd-logind[2008]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:56:42.466324 systemd-logind[2008]: Removed session 11.
Dec 13 01:56:47.495834 systemd[1]: Started sshd@11-172.31.18.118:22-139.178.68.195:34696.service - OpenSSH per-connection server daemon (139.178.68.195:34696).
Dec 13 01:56:47.671220 sshd[4921]: Accepted publickey for core from 139.178.68.195 port 34696 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:47.673894 sshd[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:47.682402 systemd-logind[2008]: New session 12 of user core.
Dec 13 01:56:47.689601 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:56:47.939660 sshd[4921]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:47.944873 systemd-logind[2008]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:56:47.946088 systemd[1]: sshd@11-172.31.18.118:22-139.178.68.195:34696.service: Deactivated successfully.
Dec 13 01:56:47.950207 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:56:47.954219 systemd-logind[2008]: Removed session 12.
Dec 13 01:56:47.979816 systemd[1]: Started sshd@12-172.31.18.118:22-139.178.68.195:34712.service - OpenSSH per-connection server daemon (139.178.68.195:34712).
Dec 13 01:56:48.146035 sshd[4935]: Accepted publickey for core from 139.178.68.195 port 34712 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:48.148774 sshd[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:48.158047 systemd-logind[2008]: New session 13 of user core.
Dec 13 01:56:48.164525 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:56:48.487436 sshd[4935]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:48.497918 systemd[1]: sshd@12-172.31.18.118:22-139.178.68.195:34712.service: Deactivated successfully.
Dec 13 01:56:48.509049 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:56:48.511919 systemd-logind[2008]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:56:48.539015 systemd[1]: Started sshd@13-172.31.18.118:22-139.178.68.195:34716.service - OpenSSH per-connection server daemon (139.178.68.195:34716).
Dec 13 01:56:48.546388 systemd-logind[2008]: Removed session 13.
Dec 13 01:56:48.720051 sshd[4947]: Accepted publickey for core from 139.178.68.195 port 34716 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:48.723838 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:48.732134 systemd-logind[2008]: New session 14 of user core.
Dec 13 01:56:48.736588 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:56:48.979688 sshd[4947]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:48.993530 systemd[1]: sshd@13-172.31.18.118:22-139.178.68.195:34716.service: Deactivated successfully.
Dec 13 01:56:49.000332 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:56:49.008568 systemd-logind[2008]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:56:49.012162 systemd-logind[2008]: Removed session 14.
Dec 13 01:56:54.021828 systemd[1]: Started sshd@14-172.31.18.118:22-139.178.68.195:34724.service - OpenSSH per-connection server daemon (139.178.68.195:34724).
Dec 13 01:56:54.190086 sshd[4964]: Accepted publickey for core from 139.178.68.195 port 34724 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:54.192776 sshd[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:54.201533 systemd-logind[2008]: New session 15 of user core.
Dec 13 01:56:54.209534 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:56:54.452553 sshd[4964]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:54.459664 systemd[1]: sshd@14-172.31.18.118:22-139.178.68.195:34724.service: Deactivated successfully.
Dec 13 01:56:54.464063 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:56:54.466075 systemd-logind[2008]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:56:54.468000 systemd-logind[2008]: Removed session 15.
Dec 13 01:56:59.498634 systemd[1]: Started sshd@15-172.31.18.118:22-139.178.68.195:34848.service - OpenSSH per-connection server daemon (139.178.68.195:34848).
Dec 13 01:56:59.667973 sshd[4978]: Accepted publickey for core from 139.178.68.195 port 34848 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:59.670888 sshd[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:59.680647 systemd-logind[2008]: New session 16 of user core.
Dec 13 01:56:59.686568 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:56:59.941442 sshd[4978]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:59.947980 systemd[1]: sshd@15-172.31.18.118:22-139.178.68.195:34848.service: Deactivated successfully.
Dec 13 01:56:59.951810 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:56:59.953846 systemd-logind[2008]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:56:59.956539 systemd-logind[2008]: Removed session 16.
Dec 13 01:57:04.980807 systemd[1]: Started sshd@16-172.31.18.118:22-139.178.68.195:34860.service - OpenSSH per-connection server daemon (139.178.68.195:34860).
Dec 13 01:57:05.158360 sshd[4990]: Accepted publickey for core from 139.178.68.195 port 34860 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:05.161004 sshd[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:05.169969 systemd-logind[2008]: New session 17 of user core.
Dec 13 01:57:05.179608 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:57:05.425261 sshd[4990]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:05.431149 systemd[1]: sshd@16-172.31.18.118:22-139.178.68.195:34860.service: Deactivated successfully.
Dec 13 01:57:05.435421 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:57:05.437240 systemd-logind[2008]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:57:05.439805 systemd-logind[2008]: Removed session 17.
Dec 13 01:57:05.465806 systemd[1]: Started sshd@17-172.31.18.118:22-139.178.68.195:34868.service - OpenSSH per-connection server daemon (139.178.68.195:34868).
Dec 13 01:57:05.646799 sshd[5003]: Accepted publickey for core from 139.178.68.195 port 34868 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:05.649596 sshd[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:05.658410 systemd-logind[2008]: New session 18 of user core.
Dec 13 01:57:05.664731 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:57:05.974394 sshd[5003]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:05.979773 systemd-logind[2008]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:57:05.982245 systemd[1]: sshd@17-172.31.18.118:22-139.178.68.195:34868.service: Deactivated successfully.
Dec 13 01:57:05.987167 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:57:05.989343 systemd-logind[2008]: Removed session 18.
Dec 13 01:57:06.015813 systemd[1]: Started sshd@18-172.31.18.118:22-139.178.68.195:34870.service - OpenSSH per-connection server daemon (139.178.68.195:34870).
Dec 13 01:57:06.184457 sshd[5014]: Accepted publickey for core from 139.178.68.195 port 34870 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:06.187115 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:06.196156 systemd-logind[2008]: New session 19 of user core.
Dec 13 01:57:06.206517 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:57:08.755053 sshd[5014]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:08.766048 systemd[1]: sshd@18-172.31.18.118:22-139.178.68.195:34870.service: Deactivated successfully.
Dec 13 01:57:08.776754 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:57:08.782865 systemd-logind[2008]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:57:08.801956 systemd[1]: Started sshd@19-172.31.18.118:22-139.178.68.195:52356.service - OpenSSH per-connection server daemon (139.178.68.195:52356).
Dec 13 01:57:08.806685 systemd-logind[2008]: Removed session 19.
Dec 13 01:57:08.985425 sshd[5032]: Accepted publickey for core from 139.178.68.195 port 52356 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:08.988340 sshd[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:08.996398 systemd-logind[2008]: New session 20 of user core.
Dec 13 01:57:09.006542 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:57:09.486410 sshd[5032]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:09.491854 systemd-logind[2008]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:57:09.492616 systemd[1]: sshd@19-172.31.18.118:22-139.178.68.195:52356.service: Deactivated successfully.
Dec 13 01:57:09.496508 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:57:09.501241 systemd-logind[2008]: Removed session 20.
Dec 13 01:57:09.538820 systemd[1]: Started sshd@20-172.31.18.118:22-139.178.68.195:52366.service - OpenSSH per-connection server daemon (139.178.68.195:52366).
Dec 13 01:57:09.713188 sshd[5043]: Accepted publickey for core from 139.178.68.195 port 52366 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:09.716002 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:09.723990 systemd-logind[2008]: New session 21 of user core.
Dec 13 01:57:09.732572 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:57:09.964855 sshd[5043]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:09.972447 systemd[1]: sshd@20-172.31.18.118:22-139.178.68.195:52366.service: Deactivated successfully.
Dec 13 01:57:09.976014 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:57:09.977688 systemd-logind[2008]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:57:09.979968 systemd-logind[2008]: Removed session 21.
Dec 13 01:57:15.004865 systemd[1]: Started sshd@21-172.31.18.118:22-139.178.68.195:52380.service - OpenSSH per-connection server daemon (139.178.68.195:52380).
Dec 13 01:57:15.172571 sshd[5056]: Accepted publickey for core from 139.178.68.195 port 52380 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:15.175367 sshd[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:15.183247 systemd-logind[2008]: New session 22 of user core.
Dec 13 01:57:15.189545 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:57:15.427033 sshd[5056]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:15.432565 systemd-logind[2008]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:57:15.433220 systemd[1]: sshd@21-172.31.18.118:22-139.178.68.195:52380.service: Deactivated successfully.
Dec 13 01:57:15.438004 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:57:15.442459 systemd-logind[2008]: Removed session 22.
Dec 13 01:57:20.474798 systemd[1]: Started sshd@22-172.31.18.118:22-139.178.68.195:58560.service - OpenSSH per-connection server daemon (139.178.68.195:58560).
Dec 13 01:57:20.645513 sshd[5072]: Accepted publickey for core from 139.178.68.195 port 58560 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:20.648850 sshd[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:20.657194 systemd-logind[2008]: New session 23 of user core.
Dec 13 01:57:20.662552 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:57:20.945714 sshd[5072]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:20.950855 systemd-logind[2008]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:57:20.952006 systemd[1]: sshd@22-172.31.18.118:22-139.178.68.195:58560.service: Deactivated successfully.
Dec 13 01:57:20.956821 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:57:20.960983 systemd-logind[2008]: Removed session 23.
Dec 13 01:57:25.991874 systemd[1]: Started sshd@23-172.31.18.118:22-139.178.68.195:58572.service - OpenSSH per-connection server daemon (139.178.68.195:58572).
Dec 13 01:57:26.172018 sshd[5088]: Accepted publickey for core from 139.178.68.195 port 58572 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:26.174806 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:26.183759 systemd-logind[2008]: New session 24 of user core.
Dec 13 01:57:26.191554 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:57:26.427186 sshd[5088]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:26.433762 systemd[1]: sshd@23-172.31.18.118:22-139.178.68.195:58572.service: Deactivated successfully. Dec 13 01:57:26.437077 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:57:26.438739 systemd-logind[2008]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:57:26.440567 systemd-logind[2008]: Removed session 24. Dec 13 01:57:31.467838 systemd[1]: Started sshd@24-172.31.18.118:22-139.178.68.195:34980.service - OpenSSH per-connection server daemon (139.178.68.195:34980). Dec 13 01:57:31.653622 sshd[5100]: Accepted publickey for core from 139.178.68.195 port 34980 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:31.656239 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:31.664624 systemd-logind[2008]: New session 25 of user core. Dec 13 01:57:31.672570 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:57:31.910548 sshd[5100]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:31.917377 systemd-logind[2008]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:57:31.917801 systemd[1]: sshd@24-172.31.18.118:22-139.178.68.195:34980.service: Deactivated successfully. Dec 13 01:57:31.921203 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:57:31.924038 systemd-logind[2008]: Removed session 25. Dec 13 01:57:31.949839 systemd[1]: Started sshd@25-172.31.18.118:22-139.178.68.195:34984.service - OpenSSH per-connection server daemon (139.178.68.195:34984). 
Dec 13 01:57:32.133556 sshd[5112]: Accepted publickey for core from 139.178.68.195 port 34984 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:32.136178 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:32.145130 systemd-logind[2008]: New session 26 of user core. Dec 13 01:57:32.151565 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:57:35.230233 containerd[2029]: time="2024-12-13T01:57:35.230164105Z" level=info msg="StopContainer for \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\" with timeout 30 (s)" Dec 13 01:57:35.235419 containerd[2029]: time="2024-12-13T01:57:35.233504677Z" level=info msg="Stop container \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\" with signal terminated" Dec 13 01:57:35.271732 systemd[1]: cri-containerd-d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8.scope: Deactivated successfully. Dec 13 01:57:35.277902 containerd[2029]: time="2024-12-13T01:57:35.277804321Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:57:35.300659 containerd[2029]: time="2024-12-13T01:57:35.300423349Z" level=info msg="StopContainer for \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\" with timeout 2 (s)" Dec 13 01:57:35.301777 containerd[2029]: time="2024-12-13T01:57:35.301727761Z" level=info msg="Stop container \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\" with signal terminated" Dec 13 01:57:35.322834 systemd-networkd[1926]: lxc_health: Link DOWN Dec 13 01:57:35.322854 systemd-networkd[1926]: lxc_health: Lost carrier Dec 13 01:57:35.331064 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8-rootfs.mount: Deactivated successfully. Dec 13 01:57:35.356435 containerd[2029]: time="2024-12-13T01:57:35.356140826Z" level=info msg="shim disconnected" id=d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8 namespace=k8s.io Dec 13 01:57:35.356435 containerd[2029]: time="2024-12-13T01:57:35.356396174Z" level=warning msg="cleaning up after shim disconnected" id=d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8 namespace=k8s.io Dec 13 01:57:35.357222 containerd[2029]: time="2024-12-13T01:57:35.356519750Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:35.359161 systemd[1]: cri-containerd-8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331.scope: Deactivated successfully. Dec 13 01:57:35.359647 systemd[1]: cri-containerd-8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331.scope: Consumed 14.905s CPU time. 
Dec 13 01:57:35.391219 containerd[2029]: time="2024-12-13T01:57:35.390996794Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:57:35.400067 containerd[2029]: time="2024-12-13T01:57:35.400009634Z" level=info msg="StopContainer for \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\" returns successfully" Dec 13 01:57:35.402354 containerd[2029]: time="2024-12-13T01:57:35.402149246Z" level=info msg="StopPodSandbox for \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\"" Dec 13 01:57:35.402354 containerd[2029]: time="2024-12-13T01:57:35.402301262Z" level=info msg="Container to stop \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:35.406678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331-rootfs.mount: Deactivated successfully. Dec 13 01:57:35.415431 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80-shm.mount: Deactivated successfully. 
Dec 13 01:57:35.421850 containerd[2029]: time="2024-12-13T01:57:35.421643066Z" level=info msg="shim disconnected" id=8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331 namespace=k8s.io Dec 13 01:57:35.422070 containerd[2029]: time="2024-12-13T01:57:35.421834490Z" level=warning msg="cleaning up after shim disconnected" id=8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331 namespace=k8s.io Dec 13 01:57:35.422070 containerd[2029]: time="2024-12-13T01:57:35.421973438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:35.426131 systemd[1]: cri-containerd-708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80.scope: Deactivated successfully. Dec 13 01:57:35.471731 containerd[2029]: time="2024-12-13T01:57:35.471520142Z" level=info msg="StopContainer for \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\" returns successfully" Dec 13 01:57:35.473266 containerd[2029]: time="2024-12-13T01:57:35.472999130Z" level=info msg="StopPodSandbox for \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\"" Dec 13 01:57:35.473266 containerd[2029]: time="2024-12-13T01:57:35.473158430Z" level=info msg="Container to stop \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:35.473495 containerd[2029]: time="2024-12-13T01:57:35.473190494Z" level=info msg="Container to stop \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:35.473495 containerd[2029]: time="2024-12-13T01:57:35.473477318Z" level=info msg="Container to stop \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:35.473606 containerd[2029]: time="2024-12-13T01:57:35.473503790Z" level=info msg="Container to stop 
\"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:35.473606 containerd[2029]: time="2024-12-13T01:57:35.473554070Z" level=info msg="Container to stop \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:57:35.489436 systemd[1]: cri-containerd-5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec.scope: Deactivated successfully. Dec 13 01:57:35.495591 containerd[2029]: time="2024-12-13T01:57:35.495153038Z" level=info msg="shim disconnected" id=708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80 namespace=k8s.io Dec 13 01:57:35.495591 containerd[2029]: time="2024-12-13T01:57:35.495414662Z" level=warning msg="cleaning up after shim disconnected" id=708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80 namespace=k8s.io Dec 13 01:57:35.495591 containerd[2029]: time="2024-12-13T01:57:35.495437330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:35.522920 containerd[2029]: time="2024-12-13T01:57:35.522865682Z" level=info msg="TearDown network for sandbox \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\" successfully" Dec 13 01:57:35.523405 containerd[2029]: time="2024-12-13T01:57:35.523056302Z" level=info msg="StopPodSandbox for \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\" returns successfully" Dec 13 01:57:35.552064 containerd[2029]: time="2024-12-13T01:57:35.551770418Z" level=info msg="shim disconnected" id=5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec namespace=k8s.io Dec 13 01:57:35.552064 containerd[2029]: time="2024-12-13T01:57:35.551841866Z" level=warning msg="cleaning up after shim disconnected" id=5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec namespace=k8s.io Dec 13 01:57:35.552064 containerd[2029]: 
time="2024-12-13T01:57:35.551862098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:35.578691 containerd[2029]: time="2024-12-13T01:57:35.578489511Z" level=info msg="TearDown network for sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" successfully" Dec 13 01:57:35.578691 containerd[2029]: time="2024-12-13T01:57:35.578552943Z" level=info msg="StopPodSandbox for \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" returns successfully" Dec 13 01:57:35.674302 kubelet[3216]: I1213 01:57:35.672435 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-cgroup\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.674302 kubelet[3216]: I1213 01:57:35.672502 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-config-path\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.674302 kubelet[3216]: I1213 01:57:35.672536 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-run\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.674302 kubelet[3216]: I1213 01:57:35.672540 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.674302 kubelet[3216]: I1213 01:57:35.672569 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-host-proc-sys-kernel\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.674302 kubelet[3216]: I1213 01:57:35.672645 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-lib-modules\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675149 kubelet[3216]: I1213 01:57:35.672682 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cni-path\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675149 kubelet[3216]: I1213 01:57:35.672723 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-hubble-tls\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675149 kubelet[3216]: I1213 01:57:35.672760 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-clustermesh-secrets\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675149 kubelet[3216]: I1213 01:57:35.672798 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csn5z\" (UniqueName: 
\"kubernetes.io/projected/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-kube-api-access-csn5z\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675149 kubelet[3216]: I1213 01:57:35.672833 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-hostproc\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675149 kubelet[3216]: I1213 01:57:35.672865 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-host-proc-sys-net\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675552 kubelet[3216]: I1213 01:57:35.672902 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psd89\" (UniqueName: \"kubernetes.io/projected/cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c-kube-api-access-psd89\") pod \"cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c\" (UID: \"cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c\") " Dec 13 01:57:35.675552 kubelet[3216]: I1213 01:57:35.672937 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-xtables-lock\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675552 kubelet[3216]: I1213 01:57:35.672972 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c-cilium-config-path\") pod \"cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c\" (UID: \"cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c\") " Dec 13 01:57:35.675552 kubelet[3216]: I1213 
01:57:35.673009 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-bpf-maps\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675552 kubelet[3216]: I1213 01:57:35.673042 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-etc-cni-netd\") pod \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\" (UID: \"dcb7b7c1-d706-40f5-9868-d12d58dd3d63\") " Dec 13 01:57:35.675552 kubelet[3216]: I1213 01:57:35.673102 3216 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-cgroup\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.675864 kubelet[3216]: I1213 01:57:35.672599 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.675864 kubelet[3216]: I1213 01:57:35.673143 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.675864 kubelet[3216]: I1213 01:57:35.673194 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.675864 kubelet[3216]: I1213 01:57:35.673229 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cni-path" (OuterVolumeSpecName: "cni-path") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.675864 kubelet[3216]: I1213 01:57:35.675062 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.676134 kubelet[3216]: I1213 01:57:35.675162 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.680510 kubelet[3216]: I1213 01:57:35.680431 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.684861 kubelet[3216]: I1213 01:57:35.684787 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.685074 kubelet[3216]: I1213 01:57:35.685043 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-hostproc" (OuterVolumeSpecName: "hostproc") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:57:35.687396 kubelet[3216]: I1213 01:57:35.687231 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:57:35.687554 kubelet[3216]: I1213 01:57:35.687334 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c-kube-api-access-psd89" (OuterVolumeSpecName: "kube-api-access-psd89") pod "cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c" (UID: "cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c"). InnerVolumeSpecName "kube-api-access-psd89". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:57:35.689965 kubelet[3216]: I1213 01:57:35.689889 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-kube-api-access-csn5z" (OuterVolumeSpecName: "kube-api-access-csn5z") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "kube-api-access-csn5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:57:35.691560 kubelet[3216]: I1213 01:57:35.690643 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:57:35.691560 kubelet[3216]: I1213 01:57:35.691501 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dcb7b7c1-d706-40f5-9868-d12d58dd3d63" (UID: "dcb7b7c1-d706-40f5-9868-d12d58dd3d63"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:57:35.692772 kubelet[3216]: I1213 01:57:35.692697 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c" (UID: "cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:57:35.774330 kubelet[3216]: I1213 01:57:35.774158 3216 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-hubble-tls\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774330 kubelet[3216]: I1213 01:57:35.774214 3216 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cni-path\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774330 kubelet[3216]: I1213 01:57:35.774240 3216 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-xtables-lock\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774330 kubelet[3216]: I1213 01:57:35.774260 3216 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-clustermesh-secrets\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774330 kubelet[3216]: I1213 01:57:35.774308 3216 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-csn5z\" (UniqueName: \"kubernetes.io/projected/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-kube-api-access-csn5z\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774678 kubelet[3216]: I1213 01:57:35.774354 3216 reconciler_common.go:288] "Volume 
detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-hostproc\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774678 kubelet[3216]: I1213 01:57:35.774377 3216 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-host-proc-sys-net\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774678 kubelet[3216]: I1213 01:57:35.774397 3216 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-psd89\" (UniqueName: \"kubernetes.io/projected/cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c-kube-api-access-psd89\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774678 kubelet[3216]: I1213 01:57:35.774420 3216 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c-cilium-config-path\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774678 kubelet[3216]: I1213 01:57:35.774439 3216 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-bpf-maps\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774678 kubelet[3216]: I1213 01:57:35.774460 3216 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-etc-cni-netd\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774678 kubelet[3216]: I1213 01:57:35.774479 3216 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-config-path\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.774678 kubelet[3216]: I1213 01:57:35.774498 3216 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-cilium-run\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.775076 kubelet[3216]: I1213 01:57:35.774518 3216 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-host-proc-sys-kernel\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.775076 kubelet[3216]: I1213 01:57:35.774539 3216 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcb7b7c1-d706-40f5-9868-d12d58dd3d63-lib-modules\") on node \"ip-172-31-18-118\" DevicePath \"\"" Dec 13 01:57:35.915304 kubelet[3216]: I1213 01:57:35.915247 3216 scope.go:117] "RemoveContainer" containerID="d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8" Dec 13 01:57:35.922305 containerd[2029]: time="2024-12-13T01:57:35.920515876Z" level=info msg="RemoveContainer for \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\"" Dec 13 01:57:35.933951 containerd[2029]: time="2024-12-13T01:57:35.933863752Z" level=info msg="RemoveContainer for \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\" returns successfully" Dec 13 01:57:35.937339 systemd[1]: Removed slice kubepods-besteffort-podcfec0ce3_a62f_4e4d_9b65_b06acc6cbf7c.slice - libcontainer container kubepods-besteffort-podcfec0ce3_a62f_4e4d_9b65_b06acc6cbf7c.slice. Dec 13 01:57:35.944693 kubelet[3216]: I1213 01:57:35.944490 3216 scope.go:117] "RemoveContainer" containerID="d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8" Dec 13 01:57:35.945188 systemd[1]: Removed slice kubepods-burstable-poddcb7b7c1_d706_40f5_9868_d12d58dd3d63.slice - libcontainer container kubepods-burstable-poddcb7b7c1_d706_40f5_9868_d12d58dd3d63.slice. 
Dec 13 01:57:35.946120 containerd[2029]: time="2024-12-13T01:57:35.945201580Z" level=error msg="ContainerStatus for \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\": not found"
Dec 13 01:57:35.945466 systemd[1]: kubepods-burstable-poddcb7b7c1_d706_40f5_9868_d12d58dd3d63.slice: Consumed 15.056s CPU time.
Dec 13 01:57:35.946570 kubelet[3216]: E1213 01:57:35.946527 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\": not found" containerID="d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8"
Dec 13 01:57:35.948568 kubelet[3216]: I1213 01:57:35.948139 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8"} err="failed to get container status \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d458a1cbadca01e24a52aa34779bd0198b09fb17d38614e90df22de191b37de8\": not found"
Dec 13 01:57:35.948568 kubelet[3216]: I1213 01:57:35.948319 3216 scope.go:117] "RemoveContainer" containerID="8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331"
Dec 13 01:57:35.953373 containerd[2029]: time="2024-12-13T01:57:35.953213092Z" level=info msg="RemoveContainer for \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\""
Dec 13 01:57:35.961763 containerd[2029]: time="2024-12-13T01:57:35.961696085Z" level=info msg="RemoveContainer for \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\" returns successfully"
Dec 13 01:57:35.963261 kubelet[3216]: I1213 01:57:35.962042 3216 scope.go:117] "RemoveContainer" containerID="4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5"
Dec 13 01:57:35.965794 containerd[2029]: time="2024-12-13T01:57:35.965744357Z" level=info msg="RemoveContainer for \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\""
Dec 13 01:57:35.974587 containerd[2029]: time="2024-12-13T01:57:35.973558865Z" level=info msg="RemoveContainer for \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\" returns successfully"
Dec 13 01:57:35.974787 kubelet[3216]: I1213 01:57:35.973894 3216 scope.go:117] "RemoveContainer" containerID="b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982"
Dec 13 01:57:35.976643 containerd[2029]: time="2024-12-13T01:57:35.976130453Z" level=info msg="RemoveContainer for \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\""
Dec 13 01:57:35.982972 containerd[2029]: time="2024-12-13T01:57:35.982882757Z" level=info msg="RemoveContainer for \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\" returns successfully"
Dec 13 01:57:35.984328 kubelet[3216]: I1213 01:57:35.983656 3216 scope.go:117] "RemoveContainer" containerID="2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7"
Dec 13 01:57:35.987764 containerd[2029]: time="2024-12-13T01:57:35.987695369Z" level=info msg="RemoveContainer for \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\""
Dec 13 01:57:35.996744 containerd[2029]: time="2024-12-13T01:57:35.996392825Z" level=info msg="RemoveContainer for \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\" returns successfully"
Dec 13 01:57:35.997941 kubelet[3216]: I1213 01:57:35.997868 3216 scope.go:117] "RemoveContainer" containerID="28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f"
Dec 13 01:57:36.000446 containerd[2029]: time="2024-12-13T01:57:36.000378985Z" level=info msg="RemoveContainer for \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\""
Dec 13 01:57:36.006356 containerd[2029]: time="2024-12-13T01:57:36.006299953Z" level=info msg="RemoveContainer for \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\" returns successfully"
Dec 13 01:57:36.007057 kubelet[3216]: I1213 01:57:36.006952 3216 scope.go:117] "RemoveContainer" containerID="8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331"
Dec 13 01:57:36.007636 containerd[2029]: time="2024-12-13T01:57:36.007577197Z" level=error msg="ContainerStatus for \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\": not found"
Dec 13 01:57:36.008078 kubelet[3216]: E1213 01:57:36.007889 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\": not found" containerID="8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331"
Dec 13 01:57:36.008078 kubelet[3216]: I1213 01:57:36.007939 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331"} err="failed to get container status \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\": rpc error: code = NotFound desc = an error occurred when try to find container \"8066d62428d4120948e646b02925a6e3e6f4b3c4c1ebf3028905b9a4c2ee3331\": not found"
Dec 13 01:57:36.008078 kubelet[3216]: I1213 01:57:36.007974 3216 scope.go:117] "RemoveContainer" containerID="4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5"
Dec 13 01:57:36.008632 containerd[2029]: time="2024-12-13T01:57:36.008518705Z" level=error msg="ContainerStatus for \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\": not found"
Dec 13 01:57:36.008948 kubelet[3216]: E1213 01:57:36.008907 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\": not found" containerID="4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5"
Dec 13 01:57:36.009058 kubelet[3216]: I1213 01:57:36.008985 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5"} err="failed to get container status \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\": rpc error: code = NotFound desc = an error occurred when try to find container \"4235868a770c64e7f9adb1c1730f2f50fa88b6620ce39431cf0cbc0d44bb6ab5\": not found"
Dec 13 01:57:36.009058 kubelet[3216]: I1213 01:57:36.009021 3216 scope.go:117] "RemoveContainer" containerID="b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982"
Dec 13 01:57:36.009592 containerd[2029]: time="2024-12-13T01:57:36.009516001Z" level=error msg="ContainerStatus for \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\": not found"
Dec 13 01:57:36.009858 kubelet[3216]: E1213 01:57:36.009801 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\": not found" containerID="b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982"
Dec 13 01:57:36.009964 kubelet[3216]: I1213 01:57:36.009893 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982"} err="failed to get container status \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\": rpc error: code = NotFound desc = an error occurred when try to find container \"b84bc484b681f50e612cc1831dd262ea6809c3b4a9eee78ff9e60d3636ac9982\": not found"
Dec 13 01:57:36.010081 kubelet[3216]: I1213 01:57:36.010016 3216 scope.go:117] "RemoveContainer" containerID="2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7"
Dec 13 01:57:36.010491 containerd[2029]: time="2024-12-13T01:57:36.010378753Z" level=error msg="ContainerStatus for \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\": not found"
Dec 13 01:57:36.010624 kubelet[3216]: E1213 01:57:36.010592 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\": not found" containerID="2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7"
Dec 13 01:57:36.010688 kubelet[3216]: I1213 01:57:36.010648 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7"} err="failed to get container status \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f4e9f1e9c54cae50aec2295ac74bbef8942ddd61a0878ad8c1f027a57d704b7\": not found"
Dec 13 01:57:36.010751 kubelet[3216]: I1213 01:57:36.010687 3216 scope.go:117] "RemoveContainer" containerID="28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f"
Dec 13 01:57:36.011257 containerd[2029]: time="2024-12-13T01:57:36.011198797Z" level=error msg="ContainerStatus for \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\": not found"
Dec 13 01:57:36.011690 kubelet[3216]: E1213 01:57:36.011649 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\": not found" containerID="28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f"
Dec 13 01:57:36.011793 kubelet[3216]: I1213 01:57:36.011719 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f"} err="failed to get container status \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\": rpc error: code = NotFound desc = an error occurred when try to find container \"28b0295069fd84b69144b57bc55daa070dc5f90241af2c20ecaaff89a911c70f\": not found"
Dec 13 01:57:36.231730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec-rootfs.mount: Deactivated successfully.
Dec 13 01:57:36.231916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec-shm.mount: Deactivated successfully.
Dec 13 01:57:36.232055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80-rootfs.mount: Deactivated successfully.
Dec 13 01:57:36.232201 systemd[1]: var-lib-kubelet-pods-dcb7b7c1\x2dd706\x2d40f5\x2d9868\x2dd12d58dd3d63-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:57:36.232360 systemd[1]: var-lib-kubelet-pods-dcb7b7c1\x2dd706\x2d40f5\x2d9868\x2dd12d58dd3d63-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:57:36.232500 systemd[1]: var-lib-kubelet-pods-cfec0ce3\x2da62f\x2d4e4d\x2d9b65\x2db06acc6cbf7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsd89.mount: Deactivated successfully.
Dec 13 01:57:36.232640 systemd[1]: var-lib-kubelet-pods-dcb7b7c1\x2dd706\x2d40f5\x2d9868\x2dd12d58dd3d63-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcsn5z.mount: Deactivated successfully.
Dec 13 01:57:36.511796 kubelet[3216]: I1213 01:57:36.510644 3216 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c" path="/var/lib/kubelet/pods/cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c/volumes"
Dec 13 01:57:36.514410 kubelet[3216]: I1213 01:57:36.514339 3216 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb7b7c1-d706-40f5-9868-d12d58dd3d63" path="/var/lib/kubelet/pods/dcb7b7c1-d706-40f5-9868-d12d58dd3d63/volumes"
Dec 13 01:57:37.160657 sshd[5112]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:37.166901 systemd-logind[2008]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:57:37.167196 systemd[1]: sshd@25-172.31.18.118:22-139.178.68.195:34984.service: Deactivated successfully.
Dec 13 01:57:37.172014 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:57:37.172571 systemd[1]: session-26.scope: Consumed 2.323s CPU time.
Dec 13 01:57:37.176403 systemd-logind[2008]: Removed session 26.
Dec 13 01:57:37.198815 systemd[1]: Started sshd@26-172.31.18.118:22-139.178.68.195:39458.service - OpenSSH per-connection server daemon (139.178.68.195:39458).
Dec 13 01:57:37.376124 sshd[5273]: Accepted publickey for core from 139.178.68.195 port 39458 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:37.378817 sshd[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:37.387241 systemd-logind[2008]: New session 27 of user core.
Dec 13 01:57:37.395573 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:57:37.787072 ntpd[1998]: Deleting interface #11 lxc_health, fe80::9036:9fff:fe64:55f6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs
Dec 13 01:57:37.787617 ntpd[1998]: 13 Dec 01:57:37 ntpd[1998]: Deleting interface #11 lxc_health, fe80::9036:9fff:fe64:55f6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs
Dec 13 01:57:38.691534 kubelet[3216]: E1213 01:57:38.691481 3216 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:57:39.290167 sshd[5273]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:39.298998 systemd[1]: sshd@26-172.31.18.118:22-139.178.68.195:39458.service: Deactivated successfully.
Dec 13 01:57:39.304819 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:57:39.308505 systemd[1]: session-27.scope: Consumed 1.675s CPU time.
Dec 13 01:57:39.314311 systemd-logind[2008]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:57:39.324406 kubelet[3216]: E1213 01:57:39.324311 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c" containerName="cilium-operator"
Dec 13 01:57:39.324406 kubelet[3216]: E1213 01:57:39.324372 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcb7b7c1-d706-40f5-9868-d12d58dd3d63" containerName="mount-cgroup"
Dec 13 01:57:39.324406 kubelet[3216]: E1213 01:57:39.324390 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcb7b7c1-d706-40f5-9868-d12d58dd3d63" containerName="apply-sysctl-overwrites"
Dec 13 01:57:39.324406 kubelet[3216]: E1213 01:57:39.324406 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcb7b7c1-d706-40f5-9868-d12d58dd3d63" containerName="mount-bpf-fs"
Dec 13 01:57:39.324672 kubelet[3216]: E1213 01:57:39.324424 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcb7b7c1-d706-40f5-9868-d12d58dd3d63" containerName="clean-cilium-state"
Dec 13 01:57:39.324672 kubelet[3216]: E1213 01:57:39.324441 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcb7b7c1-d706-40f5-9868-d12d58dd3d63" containerName="cilium-agent"
Dec 13 01:57:39.324672 kubelet[3216]: I1213 01:57:39.324489 3216 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfec0ce3-a62f-4e4d-9b65-b06acc6cbf7c" containerName="cilium-operator"
Dec 13 01:57:39.324672 kubelet[3216]: I1213 01:57:39.324504 3216 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb7b7c1-d706-40f5-9868-d12d58dd3d63" containerName="cilium-agent"
Dec 13 01:57:39.349473 systemd[1]: Started sshd@27-172.31.18.118:22-139.178.68.195:39466.service - OpenSSH per-connection server daemon (139.178.68.195:39466).
Dec 13 01:57:39.352864 systemd-logind[2008]: Removed session 27.
Dec 13 01:57:39.366609 systemd[1]: Created slice kubepods-burstable-pod28df52f0_4f1c_46ce_b13d_31164c0da844.slice - libcontainer container kubepods-burstable-pod28df52f0_4f1c_46ce_b13d_31164c0da844.slice.
Dec 13 01:57:39.399227 kubelet[3216]: I1213 01:57:39.398475 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-cilium-run\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.399227 kubelet[3216]: I1213 01:57:39.398539 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-cni-path\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.399227 kubelet[3216]: I1213 01:57:39.398580 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28df52f0-4f1c-46ce-b13d-31164c0da844-cilium-config-path\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.399227 kubelet[3216]: I1213 01:57:39.398618 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-host-proc-sys-net\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.399227 kubelet[3216]: I1213 01:57:39.398654 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-host-proc-sys-kernel\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400061 kubelet[3216]: I1213 01:57:39.398692 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2dt6\" (UniqueName: \"kubernetes.io/projected/28df52f0-4f1c-46ce-b13d-31164c0da844-kube-api-access-l2dt6\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400061 kubelet[3216]: I1213 01:57:39.398734 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28df52f0-4f1c-46ce-b13d-31164c0da844-clustermesh-secrets\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400061 kubelet[3216]: I1213 01:57:39.398772 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-lib-modules\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400061 kubelet[3216]: I1213 01:57:39.398805 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-xtables-lock\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400061 kubelet[3216]: I1213 01:57:39.398849 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-hostproc\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400061 kubelet[3216]: I1213 01:57:39.398885 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28df52f0-4f1c-46ce-b13d-31164c0da844-hubble-tls\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400556 kubelet[3216]: I1213 01:57:39.398923 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/28df52f0-4f1c-46ce-b13d-31164c0da844-cilium-ipsec-secrets\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400556 kubelet[3216]: I1213 01:57:39.398959 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-bpf-maps\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400556 kubelet[3216]: I1213 01:57:39.398996 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-etc-cni-netd\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.400556 kubelet[3216]: I1213 01:57:39.399038 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28df52f0-4f1c-46ce-b13d-31164c0da844-cilium-cgroup\") pod \"cilium-q56fs\" (UID: \"28df52f0-4f1c-46ce-b13d-31164c0da844\") " pod="kube-system/cilium-q56fs"
Dec 13 01:57:39.554189 sshd[5284]: Accepted publickey for core from 139.178.68.195 port 39466 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:39.562310 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:39.575352 systemd-logind[2008]: New session 28 of user core.
Dec 13 01:57:39.585556 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:57:39.683782 containerd[2029]: time="2024-12-13T01:57:39.683706715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q56fs,Uid:28df52f0-4f1c-46ce-b13d-31164c0da844,Namespace:kube-system,Attempt:0,}"
Dec 13 01:57:39.716032 sshd[5284]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:39.728657 systemd[1]: sshd@27-172.31.18.118:22-139.178.68.195:39466.service: Deactivated successfully.
Dec 13 01:57:39.735770 containerd[2029]: time="2024-12-13T01:57:39.735568099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:39.735727 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:57:39.737459 containerd[2029]: time="2024-12-13T01:57:39.736102903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:39.737459 containerd[2029]: time="2024-12-13T01:57:39.736872703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:39.737459 containerd[2029]: time="2024-12-13T01:57:39.737051611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:39.739752 systemd-logind[2008]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:57:39.760839 systemd[1]: Started sshd@28-172.31.18.118:22-139.178.68.195:39482.service - OpenSSH per-connection server daemon (139.178.68.195:39482).
Dec 13 01:57:39.762861 systemd-logind[2008]: Removed session 28.
Dec 13 01:57:39.777640 systemd[1]: Started cri-containerd-703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3.scope - libcontainer container 703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3.
Dec 13 01:57:39.826839 containerd[2029]: time="2024-12-13T01:57:39.826676156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q56fs,Uid:28df52f0-4f1c-46ce-b13d-31164c0da844,Namespace:kube-system,Attempt:0,} returns sandbox id \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\""
Dec 13 01:57:39.833625 containerd[2029]: time="2024-12-13T01:57:39.833191388Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:57:39.859478 containerd[2029]: time="2024-12-13T01:57:39.859396700Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd40385d938d7ebdecf5a5c3cd60184bf39dd41fe8643e0c750e8717a586087a\""
Dec 13 01:57:39.862745 containerd[2029]: time="2024-12-13T01:57:39.861589472Z" level=info msg="StartContainer for \"cd40385d938d7ebdecf5a5c3cd60184bf39dd41fe8643e0c750e8717a586087a\""
Dec 13 01:57:39.905525 systemd[1]: Started cri-containerd-cd40385d938d7ebdecf5a5c3cd60184bf39dd41fe8643e0c750e8717a586087a.scope - libcontainer container cd40385d938d7ebdecf5a5c3cd60184bf39dd41fe8643e0c750e8717a586087a.
Dec 13 01:57:39.954207 sshd[5317]: Accepted publickey for core from 139.178.68.195 port 39482 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:57:39.957184 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:39.970935 containerd[2029]: time="2024-12-13T01:57:39.970851968Z" level=info msg="StartContainer for \"cd40385d938d7ebdecf5a5c3cd60184bf39dd41fe8643e0c750e8717a586087a\" returns successfully"
Dec 13 01:57:39.974002 systemd-logind[2008]: New session 29 of user core.
Dec 13 01:57:39.982152 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:57:39.993369 systemd[1]: cri-containerd-cd40385d938d7ebdecf5a5c3cd60184bf39dd41fe8643e0c750e8717a586087a.scope: Deactivated successfully.
Dec 13 01:57:40.044684 containerd[2029]: time="2024-12-13T01:57:40.044483501Z" level=info msg="shim disconnected" id=cd40385d938d7ebdecf5a5c3cd60184bf39dd41fe8643e0c750e8717a586087a namespace=k8s.io
Dec 13 01:57:40.044684 containerd[2029]: time="2024-12-13T01:57:40.044607989Z" level=warning msg="cleaning up after shim disconnected" id=cd40385d938d7ebdecf5a5c3cd60184bf39dd41fe8643e0c750e8717a586087a namespace=k8s.io
Dec 13 01:57:40.044684 containerd[2029]: time="2024-12-13T01:57:40.044629241Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:57:40.957882 containerd[2029]: time="2024-12-13T01:57:40.957806001Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:57:40.990467 containerd[2029]: time="2024-12-13T01:57:40.990400582Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e\""
Dec 13 01:57:40.992057 containerd[2029]: time="2024-12-13T01:57:40.991599130Z" level=info msg="StartContainer for \"290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e\""
Dec 13 01:57:41.051580 systemd[1]: Started cri-containerd-290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e.scope - libcontainer container 290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e.
Dec 13 01:57:41.098584 containerd[2029]: time="2024-12-13T01:57:41.098459370Z" level=info msg="StartContainer for \"290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e\" returns successfully"
Dec 13 01:57:41.111246 systemd[1]: cri-containerd-290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e.scope: Deactivated successfully.
Dec 13 01:57:41.155492 containerd[2029]: time="2024-12-13T01:57:41.155029950Z" level=info msg="shim disconnected" id=290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e namespace=k8s.io
Dec 13 01:57:41.155492 containerd[2029]: time="2024-12-13T01:57:41.155208906Z" level=warning msg="cleaning up after shim disconnected" id=290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e namespace=k8s.io
Dec 13 01:57:41.155492 containerd[2029]: time="2024-12-13T01:57:41.155230290Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:57:41.359501 kubelet[3216]: I1213 01:57:41.357007 3216 setters.go:600] "Node became not ready" node="ip-172-31-18-118" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:57:41Z","lastTransitionTime":"2024-12-13T01:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:57:41.507578 systemd[1]: run-containerd-runc-k8s.io-290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e-runc.LLTzzF.mount: Deactivated successfully.
Dec 13 01:57:41.507745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-290a4a08c33a32fec8aa5fb2455d1347ccf18a0716562750c89e2064a4ca146e-rootfs.mount: Deactivated successfully.
Dec 13 01:57:41.963747 containerd[2029]: time="2024-12-13T01:57:41.963684562Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:57:41.995784 containerd[2029]: time="2024-12-13T01:57:41.995606098Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54\""
Dec 13 01:57:42.002551 containerd[2029]: time="2024-12-13T01:57:41.998381783Z" level=info msg="StartContainer for \"7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54\""
Dec 13 01:57:42.082605 systemd[1]: Started cri-containerd-7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54.scope - libcontainer container 7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54.
Dec 13 01:57:42.134485 containerd[2029]: time="2024-12-13T01:57:42.134413447Z" level=info msg="StartContainer for \"7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54\" returns successfully"
Dec 13 01:57:42.140214 systemd[1]: cri-containerd-7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54.scope: Deactivated successfully.
Dec 13 01:57:42.196458 containerd[2029]: time="2024-12-13T01:57:42.196355071Z" level=info msg="shim disconnected" id=7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54 namespace=k8s.io
Dec 13 01:57:42.196458 containerd[2029]: time="2024-12-13T01:57:42.196435207Z" level=warning msg="cleaning up after shim disconnected" id=7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54 namespace=k8s.io
Dec 13 01:57:42.196458 containerd[2029]: time="2024-12-13T01:57:42.196457935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:57:42.507603 systemd[1]: run-containerd-runc-k8s.io-7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54-runc.ACWKSi.mount: Deactivated successfully.
Dec 13 01:57:42.507770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f34ae37eb672958224bcd665123892742058dd808c29b5cd5c91db4db586f54-rootfs.mount: Deactivated successfully.
Dec 13 01:57:42.969571 containerd[2029]: time="2024-12-13T01:57:42.969397007Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:57:42.997008 containerd[2029]: time="2024-12-13T01:57:42.996788627Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"18d1c4e7091e57a6f4b76f849843b58b46edb32193ed38dc123f379adda80826\""
Dec 13 01:57:42.997953 containerd[2029]: time="2024-12-13T01:57:42.997883255Z" level=info msg="StartContainer for \"18d1c4e7091e57a6f4b76f849843b58b46edb32193ed38dc123f379adda80826\""
Dec 13 01:57:43.059585 systemd[1]: Started cri-containerd-18d1c4e7091e57a6f4b76f849843b58b46edb32193ed38dc123f379adda80826.scope - libcontainer container 18d1c4e7091e57a6f4b76f849843b58b46edb32193ed38dc123f379adda80826.
Dec 13 01:57:43.104045 systemd[1]: cri-containerd-18d1c4e7091e57a6f4b76f849843b58b46edb32193ed38dc123f379adda80826.scope: Deactivated successfully.
Dec 13 01:57:43.108381 containerd[2029]: time="2024-12-13T01:57:43.107362964Z" level=info msg="StartContainer for \"18d1c4e7091e57a6f4b76f849843b58b46edb32193ed38dc123f379adda80826\" returns successfully"
Dec 13 01:57:43.147793 containerd[2029]: time="2024-12-13T01:57:43.147653144Z" level=info msg="shim disconnected" id=18d1c4e7091e57a6f4b76f849843b58b46edb32193ed38dc123f379adda80826 namespace=k8s.io
Dec 13 01:57:43.147793 containerd[2029]: time="2024-12-13T01:57:43.147754412Z" level=warning msg="cleaning up after shim disconnected" id=18d1c4e7091e57a6f4b76f849843b58b46edb32193ed38dc123f379adda80826 namespace=k8s.io
Dec 13 01:57:43.147793 containerd[2029]: time="2024-12-13T01:57:43.147775004Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:57:43.507683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18d1c4e7091e57a6f4b76f849843b58b46edb32193ed38dc123f379adda80826-rootfs.mount: Deactivated successfully.
Dec 13 01:57:43.694005 kubelet[3216]: E1213 01:57:43.693880 3216 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:57:43.979619 containerd[2029]: time="2024-12-13T01:57:43.979534824Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:57:44.020865 containerd[2029]: time="2024-12-13T01:57:44.020646177Z" level=info msg="CreateContainer within sandbox \"703529484d10e6e96d00bf85815e13688ad9a9d997e9cb338cbeafc579c17df3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c283aad598b33a0d25dd838bec34e6ff9cd9768dc06ac0ba64702480602be5a5\""
Dec 13 01:57:44.022199 containerd[2029]: time="2024-12-13T01:57:44.022127469Z" level=info msg="StartContainer for \"c283aad598b33a0d25dd838bec34e6ff9cd9768dc06ac0ba64702480602be5a5\""
Dec 13 01:57:44.081605 systemd[1]: Started cri-containerd-c283aad598b33a0d25dd838bec34e6ff9cd9768dc06ac0ba64702480602be5a5.scope - libcontainer container c283aad598b33a0d25dd838bec34e6ff9cd9768dc06ac0ba64702480602be5a5.
Dec 13 01:57:44.135485 containerd[2029]: time="2024-12-13T01:57:44.135410229Z" level=info msg="StartContainer for \"c283aad598b33a0d25dd838bec34e6ff9cd9768dc06ac0ba64702480602be5a5\" returns successfully"
Dec 13 01:57:45.076326 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 13 01:57:48.465696 containerd[2029]: time="2024-12-13T01:57:48.465597159Z" level=info msg="StopPodSandbox for \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\""
Dec 13 01:57:48.466516 containerd[2029]: time="2024-12-13T01:57:48.465780039Z" level=info msg="TearDown network for sandbox \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\" successfully"
Dec 13 01:57:48.466516 containerd[2029]: time="2024-12-13T01:57:48.465806127Z" level=info msg="StopPodSandbox for \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\" returns successfully"
Dec 13 01:57:48.468051 containerd[2029]: time="2024-12-13T01:57:48.467950335Z" level=info msg="RemovePodSandbox for \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\""
Dec 13 01:57:48.468051 containerd[2029]: time="2024-12-13T01:57:48.468019971Z" level=info msg="Forcibly stopping sandbox \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\""
Dec 13 01:57:48.468329 containerd[2029]: time="2024-12-13T01:57:48.468128163Z" level=info msg="TearDown network for sandbox \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\" successfully"
Dec 13 01:57:48.476321 containerd[2029]: time="2024-12-13T01:57:48.476161659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:57:48.476477 containerd[2029]: time="2024-12-13T01:57:48.476348067Z" level=info msg="RemovePodSandbox \"708515f2ef5387e4375b4b41d3c424c8d3c209518cab16b4586a837a3cd48e80\" returns successfully"
Dec 13 01:57:48.478450 containerd[2029]: time="2024-12-13T01:57:48.478375575Z" level=info msg="StopPodSandbox for \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\""
Dec 13 01:57:48.478617 containerd[2029]: time="2024-12-13T01:57:48.478538499Z" level=info msg="TearDown network for sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" successfully"
Dec 13 01:57:48.478617 containerd[2029]: time="2024-12-13T01:57:48.478564455Z" level=info msg="StopPodSandbox for \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" returns successfully"
Dec 13 01:57:48.480438 containerd[2029]: time="2024-12-13T01:57:48.480375675Z" level=info msg="RemovePodSandbox for \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\""
Dec 13 01:57:48.480438 containerd[2029]: time="2024-12-13T01:57:48.480437043Z" level=info msg="Forcibly stopping sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\""
Dec 13 01:57:48.480438 containerd[2029]: time="2024-12-13T01:57:48.480543855Z" level=info msg="TearDown network for sandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" successfully"
Dec 13 01:57:48.487817 containerd[2029]: time="2024-12-13T01:57:48.487558887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:57:48.487817 containerd[2029]: time="2024-12-13T01:57:48.487661667Z" level=info msg="RemovePodSandbox \"5f52e465b5fbe1760ba5a1d2ede3c7ab3d77acb80d940d403a4f078d55a013ec\" returns successfully"
Dec 13 01:57:49.309814 systemd-networkd[1926]: lxc_health: Link UP
Dec 13 01:57:49.328082 systemd-networkd[1926]: lxc_health: Gained carrier
Dec 13 01:57:49.331639 (udev-worker)[6128]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:57:49.729770 kubelet[3216]: I1213 01:57:49.729652 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q56fs" podStartSLOduration=10.729606257 podStartE2EDuration="10.729606257s" podCreationTimestamp="2024-12-13 01:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:45.035370418 +0000 UTC m=+116.826659310" watchObservedRunningTime="2024-12-13 01:57:49.729606257 +0000 UTC m=+121.520895125"
Dec 13 01:57:50.553564 systemd-networkd[1926]: lxc_health: Gained IPv6LL
Dec 13 01:57:51.260756 systemd[1]: run-containerd-runc-k8s.io-c283aad598b33a0d25dd838bec34e6ff9cd9768dc06ac0ba64702480602be5a5-runc.wPLsOY.mount: Deactivated successfully.
Dec 13 01:57:52.787107 ntpd[1998]: Listen normally on 14 lxc_health [fe80::a80f:a2ff:fe3e:5fa%14]:123
Dec 13 01:57:52.788136 ntpd[1998]: 13 Dec 01:57:52 ntpd[1998]: Listen normally on 14 lxc_health [fe80::a80f:a2ff:fe3e:5fa%14]:123
Dec 13 01:57:55.921620 kubelet[3216]: E1213 01:57:55.921387 3216 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:39142->127.0.0.1:42943: write tcp 172.31.18.118:10250->172.31.18.118:53816: write: broken pipe
Dec 13 01:57:55.980443 sshd[5317]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:55.988890 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:57:55.990901 systemd[1]: sshd@28-172.31.18.118:22-139.178.68.195:39482.service: Deactivated successfully.
Dec 13 01:57:55.991464 systemd-logind[2008]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:57:56.004064 systemd-logind[2008]: Removed session 29.
Dec 13 01:58:10.961886 systemd[1]: cri-containerd-c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6.scope: Deactivated successfully.
Dec 13 01:58:10.962399 systemd[1]: cri-containerd-c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6.scope: Consumed 4.703s CPU time, 20.1M memory peak, 0B memory swap peak.
Dec 13 01:58:11.003610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6-rootfs.mount: Deactivated successfully.
Dec 13 01:58:11.028526 containerd[2029]: time="2024-12-13T01:58:11.028417667Z" level=info msg="shim disconnected" id=c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6 namespace=k8s.io
Dec 13 01:58:11.028526 containerd[2029]: time="2024-12-13T01:58:11.028522283Z" level=warning msg="cleaning up after shim disconnected" id=c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6 namespace=k8s.io
Dec 13 01:58:11.029396 containerd[2029]: time="2024-12-13T01:58:11.028565363Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:58:11.063153 kubelet[3216]: I1213 01:58:11.063080 3216 scope.go:117] "RemoveContainer" containerID="c1cde3ccdce839c83d8b7be741506836054335e8848d882ad24ae7aa4c8b14b6"
Dec 13 01:58:11.066782 containerd[2029]: time="2024-12-13T01:58:11.066699071Z" level=info msg="CreateContainer within sandbox \"c85aef07698d7a2e4845f7f33d8c8309683e305d85edeb351f27fbdd28029ed4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:58:11.093247 containerd[2029]: time="2024-12-13T01:58:11.093164447Z" level=info msg="CreateContainer within sandbox \"c85aef07698d7a2e4845f7f33d8c8309683e305d85edeb351f27fbdd28029ed4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a0ce9d9f42cb98b0c4f4532eca94e1c43e899d1a55938b069c0b9e312627bda4\""
Dec 13 01:58:11.094005 containerd[2029]: time="2024-12-13T01:58:11.093942443Z" level=info msg="StartContainer for \"a0ce9d9f42cb98b0c4f4532eca94e1c43e899d1a55938b069c0b9e312627bda4\""
Dec 13 01:58:11.143646 systemd[1]: Started cri-containerd-a0ce9d9f42cb98b0c4f4532eca94e1c43e899d1a55938b069c0b9e312627bda4.scope - libcontainer container a0ce9d9f42cb98b0c4f4532eca94e1c43e899d1a55938b069c0b9e312627bda4.
Dec 13 01:58:11.211123 containerd[2029]: time="2024-12-13T01:58:11.210935004Z" level=info msg="StartContainer for \"a0ce9d9f42cb98b0c4f4532eca94e1c43e899d1a55938b069c0b9e312627bda4\" returns successfully"
Dec 13 01:58:11.514549 kubelet[3216]: E1213 01:58:11.514464 3216 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-118?timeout=10s\": context deadline exceeded"
Dec 13 01:58:14.440583 systemd[1]: cri-containerd-236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6.scope: Deactivated successfully.
Dec 13 01:58:14.441060 systemd[1]: cri-containerd-236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6.scope: Consumed 2.880s CPU time, 16.0M memory peak, 0B memory swap peak.
Dec 13 01:58:14.477433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6-rootfs.mount: Deactivated successfully.
Dec 13 01:58:14.492416 containerd[2029]: time="2024-12-13T01:58:14.492303532Z" level=info msg="shim disconnected" id=236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6 namespace=k8s.io
Dec 13 01:58:14.492416 containerd[2029]: time="2024-12-13T01:58:14.492381040Z" level=warning msg="cleaning up after shim disconnected" id=236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6 namespace=k8s.io
Dec 13 01:58:14.492416 containerd[2029]: time="2024-12-13T01:58:14.492402184Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:58:15.081030 kubelet[3216]: I1213 01:58:15.080707 3216 scope.go:117] "RemoveContainer" containerID="236922bb391e0b8a7dd5a91bae8cd351a5782620f2b058f017103948b4fbd1f6"
Dec 13 01:58:15.084259 containerd[2029]: time="2024-12-13T01:58:15.084182523Z" level=info msg="CreateContainer within sandbox \"65758025b4c4e1da781a92949abbcca2261a62656b011187aa8aa5e8c271a586\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:58:15.116180 containerd[2029]: time="2024-12-13T01:58:15.116044767Z" level=info msg="CreateContainer within sandbox \"65758025b4c4e1da781a92949abbcca2261a62656b011187aa8aa5e8c271a586\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b12b5ae44c24d6172a0682a786e2f40eb7bab25757f8ea8e41b7ec08f4dc8a8c\""
Dec 13 01:58:15.117003 containerd[2029]: time="2024-12-13T01:58:15.116953935Z" level=info msg="StartContainer for \"b12b5ae44c24d6172a0682a786e2f40eb7bab25757f8ea8e41b7ec08f4dc8a8c\""
Dec 13 01:58:15.174574 systemd[1]: Started cri-containerd-b12b5ae44c24d6172a0682a786e2f40eb7bab25757f8ea8e41b7ec08f4dc8a8c.scope - libcontainer container b12b5ae44c24d6172a0682a786e2f40eb7bab25757f8ea8e41b7ec08f4dc8a8c.
Dec 13 01:58:15.237546 containerd[2029]: time="2024-12-13T01:58:15.237469732Z" level=info msg="StartContainer for \"b12b5ae44c24d6172a0682a786e2f40eb7bab25757f8ea8e41b7ec08f4dc8a8c\" returns successfully"