Jan 13 20:06:52.273683 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 13 20:06:52.273731 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025 Jan 13 20:06:52.273757 kernel: KASLR disabled due to lack of seed Jan 13 20:06:52.273775 kernel: efi: EFI v2.7 by EDK II Jan 13 20:06:52.273792 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 Jan 13 20:06:52.273808 kernel: secureboot: Secure boot disabled Jan 13 20:06:52.273826 kernel: ACPI: Early table checksum verification disabled Jan 13 20:06:52.273842 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 13 20:06:52.273859 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 13 20:06:52.273875 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 13 20:06:52.273899 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jan 13 20:06:52.273919 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 13 20:06:52.273935 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 13 20:06:52.273952 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 13 20:06:52.273974 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 13 20:06:52.273996 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 13 20:06:52.274016 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 13 20:06:52.274034 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 13 20:06:52.274053 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 13 20:06:52.274071 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 13 20:06:52.274089 kernel: printk: bootconsole [uart0] enabled Jan 13 20:06:52.274106 kernel: NUMA: Failed to initialise from firmware Jan 13 20:06:52.274126 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 20:06:52.274144 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 13 20:06:52.274161 kernel: Zone ranges: Jan 13 20:06:52.274178 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 20:06:52.274203 kernel: DMA32 empty Jan 13 20:06:52.274221 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 13 20:06:52.274237 kernel: Movable zone start for each node Jan 13 20:06:52.274255 kernel: Early memory node ranges Jan 13 20:06:52.274271 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 13 20:06:52.274288 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 13 20:06:52.274306 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 13 20:06:52.274323 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 13 20:06:52.274340 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 13 20:06:52.274357 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 13 20:06:52.274374 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 13 20:06:52.274391 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 13 20:06:52.274417 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Jan 13 20:06:52.274436 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 13 20:06:52.276280 kernel: psci: probing for conduit method from ACPI. Jan 13 20:06:52.276314 kernel: psci: PSCIv1.0 detected in firmware. Jan 13 20:06:52.276334 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 20:06:52.276358 kernel: psci: Trusted OS migration not required Jan 13 20:06:52.276377 kernel: psci: SMC Calling Convention v1.1 Jan 13 20:06:52.276396 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 20:06:52.276414 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 20:06:52.276434 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 20:06:52.276493 kernel: Detected PIPT I-cache on CPU0 Jan 13 20:06:52.276517 kernel: CPU features: detected: GIC system register CPU interface Jan 13 20:06:52.276535 kernel: CPU features: detected: Spectre-v2 Jan 13 20:06:52.276553 kernel: CPU features: detected: Spectre-v3a Jan 13 20:06:52.276594 kernel: CPU features: detected: Spectre-BHB Jan 13 20:06:52.276614 kernel: CPU features: detected: ARM erratum 1742098 Jan 13 20:06:52.276632 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 13 20:06:52.276661 kernel: alternatives: applying boot alternatives Jan 13 20:06:52.276681 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:06:52.276700 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:06:52.276718 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:06:52.276736 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:06:52.276754 kernel: Fallback order for Node 0: 0 Jan 13 20:06:52.276772 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 13 20:06:52.276790 kernel: Policy zone: Normal Jan 13 20:06:52.276808 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:06:52.276826 kernel: software IO TLB: area num 2. Jan 13 20:06:52.276850 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 13 20:06:52.276869 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved) Jan 13 20:06:52.276887 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 20:06:52.276904 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:06:52.276924 kernel: rcu: RCU event tracing is enabled. Jan 13 20:06:52.276942 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 20:06:52.276961 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:06:52.276979 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:06:52.276996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 20:06:52.277015 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 20:06:52.277032 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 20:06:52.277056 kernel: GICv3: 96 SPIs implemented Jan 13 20:06:52.277075 kernel: GICv3: 0 Extended SPIs implemented Jan 13 20:06:52.277093 kernel: Root IRQ handler: gic_handle_irq Jan 13 20:06:52.277111 kernel: GICv3: GICv3 features: 16 PPIs Jan 13 20:06:52.277129 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 13 20:06:52.277147 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 13 20:06:52.277164 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 20:06:52.277183 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 13 20:06:52.277201 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 13 20:06:52.277219 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 13 20:06:52.277236 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 13 20:06:52.277254 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:06:52.277276 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 13 20:06:52.277294 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 13 20:06:52.277313 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 13 20:06:52.277334 kernel: Console: colour dummy device 80x25 Jan 13 20:06:52.277354 kernel: printk: console [tty1] enabled Jan 13 20:06:52.277375 kernel: ACPI: Core revision 20230628 Jan 13 20:06:52.277397 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 13 20:06:52.277415 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:06:52.277434 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:06:52.277497 kernel: landlock: Up and running. Jan 13 20:06:52.277525 kernel: SELinux: Initializing. Jan 13 20:06:52.277548 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:06:52.277567 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:06:52.277585 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:06:52.277603 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:06:52.277623 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:06:52.277646 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:06:52.277669 kernel: Platform MSI: ITS@0x10080000 domain created Jan 13 20:06:52.277698 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 13 20:06:52.277721 kernel: Remapping and enabling EFI services. Jan 13 20:06:52.277742 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:06:52.277761 kernel: Detected PIPT I-cache on CPU1 Jan 13 20:06:52.277784 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 13 20:06:52.277803 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 13 20:06:52.277822 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 13 20:06:52.277840 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:06:52.277858 kernel: SMP: Total of 2 processors activated. 
Jan 13 20:06:52.277885 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 20:06:52.277905 kernel: CPU features: detected: 32-bit EL1 Support Jan 13 20:06:52.277923 kernel: CPU features: detected: CRC32 instructions Jan 13 20:06:52.277954 kernel: CPU: All CPU(s) started at EL1 Jan 13 20:06:52.277978 kernel: alternatives: applying system-wide alternatives Jan 13 20:06:52.277996 kernel: devtmpfs: initialized Jan 13 20:06:52.278019 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:06:52.278038 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 20:06:52.278057 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:06:52.278077 kernel: SMBIOS 3.0.0 present. Jan 13 20:06:52.278101 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 13 20:06:52.278120 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:06:52.278139 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 20:06:52.278158 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 20:06:52.278179 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 20:06:52.278198 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:06:52.278218 kernel: audit: type=2000 audit(0.241:1): state=initialized audit_enabled=0 res=1 Jan 13 20:06:52.278242 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:06:52.278262 kernel: cpuidle: using governor menu Jan 13 20:06:52.278281 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 13 20:06:52.278300 kernel: ASID allocator initialised with 65536 entries Jan 13 20:06:52.278319 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:06:52.278338 kernel: Serial: AMBA PL011 UART driver Jan 13 20:06:52.278358 kernel: Modules: 17440 pages in range for non-PLT usage Jan 13 20:06:52.278377 kernel: Modules: 508960 pages in range for PLT usage Jan 13 20:06:52.278396 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:06:52.278419 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:06:52.278439 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 20:06:52.278513 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 20:06:52.278538 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:06:52.278557 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:06:52.278576 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 20:06:52.278595 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 20:06:52.278614 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:06:52.278633 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:06:52.278662 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:06:52.278681 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:06:52.278700 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:06:52.278721 kernel: ACPI: Interpreter enabled Jan 13 20:06:52.278745 kernel: ACPI: Using GIC for interrupt routing Jan 13 20:06:52.278789 kernel: ACPI: MCFG table detected, 1 entries Jan 13 20:06:52.278851 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jan 13 20:06:52.279231 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:06:52.279538 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 20:06:52.279761 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 20:06:52.279977 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jan 13 20:06:52.280189 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jan 13 20:06:52.280216 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 13 20:06:52.280236 kernel: acpiphp: Slot [1] registered Jan 13 20:06:52.280255 kernel: acpiphp: Slot [2] registered Jan 13 20:06:52.280274 kernel: acpiphp: Slot [3] registered Jan 13 20:06:52.280303 kernel: acpiphp: Slot [4] registered Jan 13 20:06:52.280322 kernel: acpiphp: Slot [5] registered Jan 13 20:06:52.280340 kernel: acpiphp: Slot [6] registered Jan 13 20:06:52.280358 kernel: acpiphp: Slot [7] registered Jan 13 20:06:52.280377 kernel: acpiphp: Slot [8] registered Jan 13 20:06:52.280395 kernel: acpiphp: Slot [9] registered Jan 13 20:06:52.280414 kernel: acpiphp: Slot [10] registered Jan 13 20:06:52.280432 kernel: acpiphp: Slot [11] registered Jan 13 20:06:52.282515 kernel: acpiphp: Slot [12] registered Jan 13 20:06:52.282567 kernel: acpiphp: Slot [13] registered Jan 13 20:06:52.282605 kernel: acpiphp: Slot [14] registered Jan 13 20:06:52.282625 kernel: acpiphp: Slot [15] registered Jan 13 20:06:52.282644 kernel: acpiphp: Slot [16] registered Jan 13 20:06:52.282663 kernel: acpiphp: Slot [17] registered Jan 13 20:06:52.282682 kernel: acpiphp: Slot [18] registered Jan 13 20:06:52.282701 kernel: acpiphp: Slot [19] registered Jan 13 20:06:52.282721 kernel: acpiphp: Slot [20] registered Jan 13 20:06:52.282741 kernel: acpiphp: Slot [21] registered Jan 13 20:06:52.282761 kernel: acpiphp: Slot [22] registered Jan 13 20:06:52.282786 kernel: acpiphp: Slot [23] registered Jan 13 20:06:52.282805 kernel: acpiphp: Slot [24] registered Jan 13 20:06:52.282823 kernel: acpiphp: Slot [25] registered Jan 13 20:06:52.282842 kernel: acpiphp: Slot [26] registered Jan 13 20:06:52.282861 kernel: acpiphp: Slot [27] registered Jan 13 20:06:52.282882 kernel: acpiphp: Slot [28] registered Jan 13 20:06:52.282901 kernel: acpiphp: Slot [29] registered Jan 13 20:06:52.282920 kernel: acpiphp: Slot [30] registered Jan 13 20:06:52.282939 kernel: acpiphp: Slot [31] registered Jan 13 20:06:52.282958 kernel: PCI host bridge to bus 0000:00 Jan 13 20:06:52.283733 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 13 20:06:52.283953 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 20:06:52.284149 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 13 20:06:52.284341 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jan 13 20:06:52.287804 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 13 20:06:52.288165 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 13 20:06:52.288409 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 13 20:06:52.288825 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 13 20:06:52.289043 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 13 20:06:52.289251 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:06:52.290704 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 13 20:06:52.290991 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 13 20:06:52.291210 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jan 13 20:06:52.291407 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 13 20:06:52.291650 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 20:06:52.294150 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jan 13 20:06:52.294432 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jan 13 20:06:52.294756 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jan 13 20:06:52.295023 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jan 13 20:06:52.295275 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jan 13 20:06:52.295642 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 13 20:06:52.295868 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 20:06:52.296088 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 13 20:06:52.296118 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 20:06:52.296139 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 20:06:52.296159 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 20:06:52.296178 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 20:06:52.296198 kernel: iommu: Default domain type: Translated Jan 13 20:06:52.296232 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 20:06:52.296252 kernel: efivars: Registered efivars operations Jan 13 20:06:52.296271 kernel: vgaarb: loaded Jan 13 20:06:52.296291 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 20:06:52.296310 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:06:52.296330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:06:52.296349 kernel: pnp: PnP ACPI init Jan 13 20:06:52.296662 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 13 20:06:52.296710 kernel: pnp: PnP ACPI: found 1 devices Jan 13 20:06:52.296731 kernel: NET: Registered PF_INET protocol family Jan 13 20:06:52.296751 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:06:52.296770 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:06:52.296789 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:06:52.296808 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:06:52.296827 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:06:52.296848 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:06:52.296867 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:06:52.296893 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:06:52.296912 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:06:52.296931 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:06:52.296950 kernel: kvm [1]: HYP mode not available Jan 13 20:06:52.296968 kernel: Initialise system trusted keyrings Jan 13 20:06:52.296988 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:06:52.297007 kernel: Key type asymmetric registered Jan 13 20:06:52.297027 kernel: Asymmetric key parser 'x509' registered Jan 13 20:06:52.297084 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 20:06:52.297120 kernel: io scheduler mq-deadline registered Jan 13 
20:06:52.297140 kernel: io scheduler kyber registered Jan 13 20:06:52.297159 kernel: io scheduler bfq registered Jan 13 20:06:52.297523 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 13 20:06:52.297566 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 20:06:52.297586 kernel: ACPI: button: Power Button [PWRB] Jan 13 20:06:52.297606 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 13 20:06:52.297625 kernel: ACPI: button: Sleep Button [SLPB] Jan 13 20:06:52.297655 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:06:52.297676 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 20:06:52.297932 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 13 20:06:52.297962 kernel: printk: console [ttyS0] disabled Jan 13 20:06:52.297981 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 13 20:06:52.298001 kernel: printk: console [ttyS0] enabled Jan 13 20:06:52.298020 kernel: printk: bootconsole [uart0] disabled Jan 13 20:06:52.298039 kernel: thunder_xcv, ver 1.0 Jan 13 20:06:52.298059 kernel: thunder_bgx, ver 1.0 Jan 13 20:06:52.298085 kernel: nicpf, ver 1.0 Jan 13 20:06:52.298105 kernel: nicvf, ver 1.0 Jan 13 20:06:52.298336 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 20:06:52.298606 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:06:51 UTC (1736798811) Jan 13 20:06:52.298636 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:06:52.298656 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 13 20:06:52.298675 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 20:06:52.298693 kernel: watchdog: Hard watchdog permanently disabled Jan 13 20:06:52.298721 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:06:52.298740 kernel: Segment Routing with IPv6 Jan 13 20:06:52.298759 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:06:52.298778 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:06:52.298796 kernel: Key type dns_resolver registered Jan 13 20:06:52.298817 kernel: registered taskstats version 1 Jan 13 20:06:52.298836 kernel: Loading compiled-in X.509 certificates Jan 13 20:06:52.298855 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb' Jan 13 20:06:52.298873 kernel: Key type .fscrypt registered Jan 13 20:06:52.298896 kernel: Key type fscrypt-provisioning registered Jan 13 20:06:52.298915 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 20:06:52.298934 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:06:52.298953 kernel: ima: No architecture policies found Jan 13 20:06:52.298972 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 20:06:52.298991 kernel: clk: Disabling unused clocks Jan 13 20:06:52.299009 kernel: Freeing unused kernel memory: 39680K Jan 13 20:06:52.299028 kernel: Run /init as init process Jan 13 20:06:52.299046 kernel: with arguments: Jan 13 20:06:52.299064 kernel: /init Jan 13 20:06:52.299087 kernel: with environment: Jan 13 20:06:52.299105 kernel: HOME=/ Jan 13 20:06:52.299123 kernel: TERM=linux Jan 13 20:06:52.299141 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:06:52.299165 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:06:52.299189 systemd[1]: Detected virtualization amazon. Jan 13 20:06:52.299209 systemd[1]: Detected architecture arm64. Jan 13 20:06:52.299234 systemd[1]: Running in initrd. Jan 13 20:06:52.299253 systemd[1]: No hostname configured, using default hostname. Jan 13 20:06:52.299273 systemd[1]: Hostname set to . Jan 13 20:06:52.299294 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:06:52.299314 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:06:52.299334 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:06:52.299355 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:06:52.299376 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:06:52.299402 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:06:52.299423 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:06:52.299445 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:06:52.299580 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:06:52.299604 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:06:52.299625 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:06:52.299647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:06:52.299679 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:06:52.299700 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:06:52.299720 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:06:52.299741 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:06:52.299761 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:06:52.299782 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:06:52.299802 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:06:52.299830 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:06:52.299859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 20:06:52.299885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:06:52.299906 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:06:52.299926 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:06:52.299947 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:06:52.299967 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:06:52.299988 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:06:52.300009 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:06:52.300029 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:06:52.300057 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:06:52.300078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:06:52.300102 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:06:52.300122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:06:52.300143 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:06:52.300165 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:06:52.300244 systemd-journald[252]: Collecting audit messages is disabled. Jan 13 20:06:52.300291 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:06:52.300317 kernel: Bridge firewalling registered Jan 13 20:06:52.300338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:06:52.300359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:06:52.300379 systemd-journald[252]: Journal started Jan 13 20:06:52.300420 systemd-journald[252]: Runtime Journal (/run/log/journal/ec27b6b8a9476381a19b3bceb79b88dc) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:06:52.253131 systemd-modules-load[253]: Inserted module 'overlay' Jan 13 20:06:52.286572 systemd-modules-load[253]: Inserted module 'br_netfilter' Jan 13 20:06:52.309210 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:06:52.314584 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:06:52.330894 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:06:52.336825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:06:52.354792 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:06:52.368829 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:06:52.409985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:06:52.429673 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:06:52.430614 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:06:52.438910 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:06:52.461516 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:06:52.486825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 13 20:06:52.513704 systemd-resolved[282]: Positive Trust Anchors: Jan 13 20:06:52.513768 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:06:52.513831 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:06:52.532531 dracut-cmdline[288]: dracut-dracut-053 Jan 13 20:06:52.539968 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:06:52.691499 kernel: SCSI subsystem initialized Jan 13 20:06:52.700485 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:06:52.712491 kernel: iscsi: registered transport (tcp) Jan 13 20:06:52.735495 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:06:52.735571 kernel: QLogic iSCSI HBA Driver Jan 13 20:06:52.770522 kernel: random: crng init done Jan 13 20:06:52.771048 systemd-resolved[282]: Defaulting to hostname 'linux'. Jan 13 20:06:52.774651 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:06:52.778731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:06:52.823520 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:06:52.832778 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:06:52.870602 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:06:52.870703 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:06:52.870735 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:06:52.940513 kernel: raid6: neonx8 gen() 6604 MB/s Jan 13 20:06:52.957516 kernel: raid6: neonx4 gen() 6480 MB/s Jan 13 20:06:52.974510 kernel: raid6: neonx2 gen() 5380 MB/s Jan 13 20:06:52.991516 kernel: raid6: neonx1 gen() 3880 MB/s Jan 13 20:06:53.008523 kernel: raid6: int64x8 gen() 3710 MB/s Jan 13 20:06:53.025520 kernel: raid6: int64x4 gen() 3594 MB/s Jan 13 20:06:53.042514 kernel: raid6: int64x2 gen() 3508 MB/s Jan 13 20:06:53.060328 kernel: raid6: int64x1 gen() 2664 MB/s Jan 13 20:06:53.060396 kernel: raid6: using algorithm neonx8 gen() 6604 MB/s Jan 13 20:06:53.078312 kernel: raid6: .... 
xor() 4784 MB/s, rmw enabled Jan 13 20:06:53.078399 kernel: raid6: using neon recovery algorithm Jan 13 20:06:53.087270 kernel: xor: measuring software checksum speed Jan 13 20:06:53.087361 kernel: 8regs : 10982 MB/sec Jan 13 20:06:53.088494 kernel: 32regs : 11166 MB/sec Jan 13 20:06:53.090646 kernel: arm64_neon : 8533 MB/sec Jan 13 20:06:53.090691 kernel: xor: using function: 32regs (11166 MB/sec) Jan 13 20:06:53.176503 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:06:53.197782 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:06:53.207814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:06:53.253113 systemd-udevd[470]: Using default interface naming scheme 'v255'. Jan 13 20:06:53.262552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:06:53.282803 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:06:53.314883 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Jan 13 20:06:53.377590 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:06:53.391813 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:06:53.508155 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:06:53.522869 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:06:53.578362 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:06:53.581820 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:06:53.586664 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:06:53.593040 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:06:53.606838 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:06:53.641346 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:06:53.730231 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 20:06:53.730312 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 13 20:06:53.748323 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 20:06:53.750587 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 20:06:53.750838 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:8a:68:61:23:0f Jan 13 20:06:53.742867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:06:53.743344 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:06:53.748100 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:06:53.751289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:06:53.751643 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:06:53.753998 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:06:53.776889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:06:53.780425 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:06:53.810523 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 20:06:53.812504 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 20:06:53.815549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:06:53.824513 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 20:06:53.828889 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:06:53.836790 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:06:53.836833 kernel: GPT:9289727 != 16777215 Jan 13 20:06:53.836859 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:06:53.836884 kernel: GPT:9289727 != 16777215 Jan 13 20:06:53.836908 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:06:53.836932 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:06:53.876954 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:06:53.916654 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (521) Jan 13 20:06:53.957567 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (515) Jan 13 20:06:53.966828 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 20:06:54.040235 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 20:06:54.097531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:06:54.112353 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 20:06:54.117969 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 13 20:06:54.128865 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:06:54.147264 disk-uuid[660]: Primary Header is updated. Jan 13 20:06:54.147264 disk-uuid[660]: Secondary Entries is updated. Jan 13 20:06:54.147264 disk-uuid[660]: Secondary Header is updated. Jan 13 20:06:54.157486 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:06:55.173488 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 20:06:55.175106 disk-uuid[661]: The operation has completed successfully. Jan 13 20:06:55.346010 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:06:55.346222 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:06:55.405884 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:06:55.417265 sh[922]: Success Jan 13 20:06:55.442601 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:06:55.549216 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:06:55.566681 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:06:55.571352 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 20:06:55.608598 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78 Jan 13 20:06:55.608667 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:06:55.610464 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:06:55.611763 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:06:55.612853 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:06:55.634476 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:06:55.651534 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:06:55.654608 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:06:55.665813 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:06:55.672896 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:06:55.701680 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:55.701745 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:06:55.703504 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:06:55.710496 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:06:55.727963 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:06:55.731861 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:55.741748 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:06:55.753916 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:06:55.881254 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:06:55.899883 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:06:55.953670 ignition[1021]: Ignition 2.20.0 Jan 13 20:06:55.953697 ignition[1021]: Stage: fetch-offline Jan 13 20:06:55.954581 ignition[1021]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:55.961025 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:06:55.954613 ignition[1021]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:55.955636 ignition[1021]: Ignition finished successfully Jan 13 20:06:55.969722 systemd-networkd[1116]: lo: Link UP Jan 13 20:06:55.969744 systemd-networkd[1116]: lo: Gained carrier Jan 13 20:06:55.978091 systemd-networkd[1116]: Enumeration completed Jan 13 20:06:55.978845 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:06:55.978851 systemd-networkd[1116]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:06:55.978995 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:06:55.981240 systemd[1]: Reached target network.target - Network. Jan 13 20:06:55.996690 systemd-networkd[1116]: eth0: Link UP Jan 13 20:06:55.996711 systemd-networkd[1116]: eth0: Gained carrier Jan 13 20:06:55.996731 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:06:56.002714 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:06:56.017557 systemd-networkd[1116]: eth0: DHCPv4 address 172.31.29.220/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:06:56.039291 ignition[1125]: Ignition 2.20.0 Jan 13 20:06:56.039321 ignition[1125]: Stage: fetch Jan 13 20:06:56.040276 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:56.040303 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:56.040784 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:56.050488 ignition[1125]: PUT result: OK Jan 13 20:06:56.053348 ignition[1125]: parsed url from cmdline: "" Jan 13 20:06:56.053365 ignition[1125]: no config URL provided Jan 13 20:06:56.053381 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:06:56.053417 ignition[1125]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:06:56.053471 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:56.056423 ignition[1125]: PUT result: OK Jan 13 20:06:56.057002 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 20:06:56.064222 ignition[1125]: GET result: OK Jan 13 20:06:56.064365 ignition[1125]: parsing config with SHA512: 73d6b2751dd88da377a32a16cb65f384094c966969189b864f9cf4494afdf237b92519bfce2e4633e9c1f59c7a14421110b74176df19000bd247af1dc14034ee Jan 13 20:06:56.072628 unknown[1125]: fetched base config from "system" Jan 13 20:06:56.072869 unknown[1125]: fetched base config from "system" Jan 13 20:06:56.073537 ignition[1125]: fetch: fetch complete Jan 13 20:06:56.072884 unknown[1125]: fetched user config from "aws" Jan 13 20:06:56.073549 ignition[1125]: fetch: fetch passed Jan 13 20:06:56.073825 ignition[1125]: Ignition finished successfully Jan 13 20:06:56.085239 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:06:56.095764 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:06:56.131747 ignition[1132]: Ignition 2.20.0 Jan 13 20:06:56.131776 ignition[1132]: Stage: kargs Jan 13 20:06:56.133082 ignition[1132]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:56.133109 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:56.133270 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:56.141026 ignition[1132]: PUT result: OK Jan 13 20:06:56.145402 ignition[1132]: kargs: kargs passed Jan 13 20:06:56.145555 ignition[1132]: Ignition finished successfully Jan 13 20:06:56.149824 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:06:56.161827 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:06:56.186399 ignition[1138]: Ignition 2.20.0 Jan 13 20:06:56.186968 ignition[1138]: Stage: disks Jan 13 20:06:56.187634 ignition[1138]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:56.187661 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:56.187877 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:56.190972 ignition[1138]: PUT result: OK Jan 13 20:06:56.200047 ignition[1138]: disks: disks passed Jan 13 20:06:56.201330 ignition[1138]: Ignition finished successfully Jan 13 20:06:56.205020 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:06:56.209073 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 13 20:06:56.211324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:06:56.213710 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:06:56.215703 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:06:56.217595 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:06:56.235757 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:06:56.286182 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:06:56.290199 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:06:56.300806 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:06:56.383616 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none. Jan 13 20:06:56.385037 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:06:56.387705 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:06:56.403626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:06:56.418048 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:06:56.423101 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:06:56.423198 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:06:56.423259 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:06:56.436400 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:06:56.447585 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165) Jan 13 20:06:56.447912 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:06:56.457217 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:56.457259 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:06:56.457285 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:06:56.468481 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:06:56.472684 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:06:56.562968 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:06:56.574078 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:06:56.583020 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:06:56.590851 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:06:56.733589 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:06:56.748801 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:06:56.754755 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:06:56.771758 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 13 20:06:56.773860 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:56.818218 ignition[1277]: INFO : Ignition 2.20.0 Jan 13 20:06:56.818218 ignition[1277]: INFO : Stage: mount Jan 13 20:06:56.821918 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:56.821918 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:56.821918 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:56.832142 ignition[1277]: INFO : PUT result: OK Jan 13 20:06:56.828500 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:06:56.838487 ignition[1277]: INFO : mount: mount passed Jan 13 20:06:56.839975 ignition[1277]: INFO : Ignition finished successfully Jan 13 20:06:56.843654 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:06:56.855775 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:06:56.879773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:06:56.901491 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1290) Jan 13 20:06:56.905345 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:06:56.905388 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:06:56.905413 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 20:06:56.912502 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 20:06:56.914566 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:06:56.946816 ignition[1307]: INFO : Ignition 2.20.0 Jan 13 20:06:56.946816 ignition[1307]: INFO : Stage: files Jan 13 20:06:56.951187 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:56.951187 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:56.951187 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:56.951187 ignition[1307]: INFO : PUT result: OK Jan 13 20:06:56.961744 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:06:56.964149 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:06:56.964149 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:06:56.971985 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:06:56.974821 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:06:56.977699 unknown[1307]: wrote ssh authorized keys file for user: core Jan 13 20:06:56.979922 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:06:56.983441 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:06:56.987023 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 20:06:57.095756 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:06:57.269875 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 
20:06:57.269875 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:06:57.277038 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 13 20:06:57.792795 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:06:57.879614 systemd-networkd[1116]: eth0: Gained IPv6LL Jan 13 20:06:58.019516 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:06:58.019516 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:06:58.026521 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:06:58.026521 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:06:58.026521 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:06:58.026521 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:06:58.026521 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:06:58.026521 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:06:58.044931 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:06:58.048081 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:06:58.052117 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:06:58.052117 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:06:58.052117 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:06:58.052117 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:06:58.052117 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 13 20:06:58.467201 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:06:58.800925 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:06:58.800925 ignition[1307]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:06:58.807158 ignition[1307]: INFO : files: op(c): op(d): [started] writing 
unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:06:58.807158 ignition[1307]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:06:58.807158 ignition[1307]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:06:58.807158 ignition[1307]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:06:58.807158 ignition[1307]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:06:58.807158 ignition[1307]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:06:58.807158 ignition[1307]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:06:58.807158 ignition[1307]: INFO : files: files passed Jan 13 20:06:58.807158 ignition[1307]: INFO : Ignition finished successfully Jan 13 20:06:58.831078 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:06:58.845740 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:06:58.856824 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:06:58.859864 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:06:58.860080 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:06:58.885346 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:06:58.885346 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:06:58.893540 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:06:58.900534 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:06:58.904369 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:06:58.921806 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:06:58.974391 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:06:58.974799 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:06:58.979124 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:06:58.981276 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:06:58.985174 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:06:59.002830 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:06:59.029857 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:06:59.045884 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:06:59.071104 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:06:59.074035 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:06:59.079031 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:06:59.084140 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 13 20:06:59.084854 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:06:59.090661 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:06:59.093215 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:06:59.099012 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:06:59.102413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:06:59.108311 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:06:59.111363 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:06:59.116938 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:06:59.119937 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:06:59.126901 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:06:59.130260 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:06:59.135635 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:06:59.135915 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:06:59.139318 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:06:59.142240 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:06:59.153025 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:06:59.156568 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:06:59.159983 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:06:59.162035 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:06:59.166521 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:06:59.166849 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:06:59.175828 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:06:59.176212 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:06:59.191779 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:06:59.200884 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:06:59.202844 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:06:59.203146 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:06:59.207421 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:06:59.210163 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:06:59.231171 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:06:59.232023 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 13 20:06:59.252910 ignition[1359]: INFO : Ignition 2.20.0 Jan 13 20:06:59.255402 ignition[1359]: INFO : Stage: umount Jan 13 20:06:59.255402 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:06:59.255402 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:06:59.255402 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:06:59.264577 ignition[1359]: INFO : PUT result: OK Jan 13 20:06:59.268799 ignition[1359]: INFO : umount: umount passed Jan 13 20:06:59.271240 ignition[1359]: INFO : Ignition finished successfully Jan 13 20:06:59.274664 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:06:59.279091 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:06:59.279294 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:06:59.287998 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:06:59.288969 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:06:59.295398 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:06:59.295558 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:06:59.299345 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:06:59.299446 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:06:59.306418 systemd[1]: Stopped target network.target - Network. Jan 13 20:06:59.309631 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:06:59.309733 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:06:59.311914 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:06:59.313500 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:06:59.324651 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:06:59.327234 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:06:59.333182 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:06:59.334984 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:06:59.335063 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:06:59.336909 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:06:59.336982 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:06:59.338899 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:06:59.338990 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:06:59.340831 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:06:59.340913 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:06:59.343109 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:06:59.345170 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:06:59.363861 systemd-networkd[1116]: eth0: DHCPv6 lease lost Jan 13 20:06:59.367236 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:06:59.367835 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:06:59.370928 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:06:59.371110 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:06:59.374568 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 13 20:06:59.374862 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:06:59.382210 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:06:59.383564 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:06:59.389134 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:06:59.389252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:06:59.401222 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:06:59.414740 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:06:59.414850 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:06:59.417298 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:06:59.417389 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:06:59.420362 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:06:59.420481 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:06:59.434163 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:06:59.434269 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:06:59.437044 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:06:59.464288 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:06:59.466574 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:06:59.470273 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:06:59.470409 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:06:59.475210 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:06:59.475292 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:06:59.478043 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:06:59.478141 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:06:59.490441 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:06:59.490552 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:06:59.495969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:06:59.496065 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:06:59.521847 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:06:59.524181 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:06:59.524289 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:06:59.533181 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:06:59.533294 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:06:59.535716 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:06:59.535808 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:06:59.538815 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:06:59.538894 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 20:06:59.546137 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:06:59.547538 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:06:59.556792 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:06:59.556998 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:06:59.559248 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:06:59.588818 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:06:59.603663 systemd[1]: Switching root. Jan 13 20:06:59.650737 systemd-journald[252]: Journal stopped Jan 13 20:07:01.456822 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Jan 13 20:07:01.456969 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:07:01.457014 kernel: SELinux: policy capability open_perms=1 Jan 13 20:07:01.457047 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:07:01.457079 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:07:01.457116 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:07:01.457147 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:07:01.457178 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:07:01.457206 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:07:01.457235 kernel: audit: type=1403 audit(1736798819.935:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:07:01.457268 systemd[1]: Successfully loaded SELinux policy in 48.282ms. Jan 13 20:07:01.457321 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.799ms. Jan 13 20:07:01.457355 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:07:01.457388 systemd[1]: Detected virtualization amazon. Jan 13 20:07:01.457425 systemd[1]: Detected architecture arm64. Jan 13 20:07:01.459087 systemd[1]: Detected first boot. Jan 13 20:07:01.459152 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:07:01.459201 zram_generator::config[1403]: No configuration found. Jan 13 20:07:01.459241 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:07:01.459278 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:07:01.459312 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:07:01.459349 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:07:01.461563 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:07:01.461609 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:07:01.461642 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:07:01.461675 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:07:01.461707 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:07:01.461740 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:07:01.461771 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Jan 13 20:07:01.461803 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:07:01.461832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:07:01.461880 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:07:01.461910 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:07:01.461939 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:07:01.461971 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:07:01.462000 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:07:01.462030 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:07:01.462059 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:07:01.462088 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:07:01.462121 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:07:01.462157 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:07:01.462190 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:07:01.462220 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:07:01.462252 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:07:01.462284 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:07:01.462315 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:07:01.462346 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:07:01.462379 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:07:01.462409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:07:01.462440 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:07:01.462497 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:07:01.462531 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:07:01.462562 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:07:01.462595 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:07:01.462627 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:07:01.462659 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:07:01.462692 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:07:01.462728 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:07:01.462759 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:07:01.462800 systemd[1]: Reached target machines.target - Containers. Jan 13 20:07:01.462829 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:07:01.462859 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 13 20:07:01.462889 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:07:01.462918 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:07:01.462948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:01.462981 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:07:01.463011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:01.463042 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:07:01.463074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:07:01.463103 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:07:01.463133 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:07:01.463162 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:07:01.463192 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:07:01.463226 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:07:01.463254 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:07:01.463283 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:07:01.463311 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:07:01.463339 kernel: loop: module loaded Jan 13 20:07:01.463367 kernel: ACPI: bus type drm_connector registered Jan 13 20:07:01.463395 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:07:01.463423 kernel: fuse: init (API version 7.39) Jan 13 20:07:01.465207 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:07:01.465258 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:07:01.468634 systemd[1]: Stopped verity-setup.service. Jan 13 20:07:01.468712 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:07:01.468743 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:07:01.468775 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:07:01.468807 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:07:01.468838 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:07:01.468877 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:07:01.468908 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:07:01.468941 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:07:01.468973 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:07:01.469005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:01.469036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:01.469065 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:07:01.469100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:07:01.469132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:07:01.469162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 13 20:07:01.469192 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:07:01.469278 systemd-journald[1488]: Collecting audit messages is disabled. Jan 13 20:07:01.469350 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:07:01.469389 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:07:01.469423 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:07:01.469483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:07:01.469517 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:07:01.469550 systemd-journald[1488]: Journal started Jan 13 20:07:01.469598 systemd-journald[1488]: Runtime Journal (/run/log/journal/ec27b6b8a9476381a19b3bceb79b88dc) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:07:00.876986 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:07:01.472620 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:07:00.904888 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:07:00.905731 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:07:01.477173 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:07:01.481905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:07:01.511801 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:07:01.520753 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:07:01.533142 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:07:01.536731 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:07:01.536806 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:07:01.541265 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:07:01.554229 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:07:01.562777 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:07:01.565802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:01.577717 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:07:01.582416 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:07:01.584709 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:07:01.588338 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:07:01.590752 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:07:01.593798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:07:01.602797 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:07:01.619758 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 13 20:07:01.626396 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:07:01.630328 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:07:01.633135 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:07:01.710539 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:07:01.715056 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:07:01.729675 systemd-journald[1488]: Time spent on flushing to /var/log/journal/ec27b6b8a9476381a19b3bceb79b88dc is 78.315ms for 913 entries. Jan 13 20:07:01.729675 systemd-journald[1488]: System Journal (/var/log/journal/ec27b6b8a9476381a19b3bceb79b88dc) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:07:01.832776 systemd-journald[1488]: Received client request to flush runtime journal. Jan 13 20:07:01.833113 kernel: loop0: detected capacity change from 0 to 53784 Jan 13 20:07:01.733880 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:07:01.738550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:07:01.773036 systemd-tmpfiles[1533]: ACLs are not supported, ignoring. Jan 13 20:07:01.773062 systemd-tmpfiles[1533]: ACLs are not supported, ignoring. Jan 13 20:07:01.796611 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:07:01.814791 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:07:01.826035 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:07:01.842417 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:07:01.851554 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:07:01.873844 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:07:01.882655 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:07:01.902494 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:07:01.903011 udevadm[1544]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:07:01.927491 kernel: loop1: detected capacity change from 0 to 194512 Jan 13 20:07:01.956680 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:07:01.971212 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:07:02.003818 kernel: loop2: detected capacity change from 0 to 113536 Jan 13 20:07:02.051038 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Jan 13 20:07:02.051079 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Jan 13 20:07:02.063557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:07:02.084526 kernel: loop3: detected capacity change from 0 to 116808 Jan 13 20:07:02.149538 kernel: loop4: detected capacity change from 0 to 53784 Jan 13 20:07:02.178771 kernel: loop5: detected capacity change from 0 to 194512 Jan 13 20:07:02.227487 kernel: loop6: detected capacity change from 0 to 113536 Jan 13 20:07:02.258781 kernel: loop7: detected capacity change from 0 to 116808 Jan 13 20:07:02.297727 (sd-merge)[1561]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. 
Jan 13 20:07:02.300092 (sd-merge)[1561]: Merged extensions into '/usr'. Jan 13 20:07:02.314956 systemd[1]: Reloading requested from client PID 1532 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:07:02.314982 systemd[1]: Reloading... Jan 13 20:07:02.486488 zram_generator::config[1583]: No configuration found. Jan 13 20:07:02.599874 ldconfig[1527]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:07:02.774355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:02.887551 systemd[1]: Reloading finished in 570 ms. Jan 13 20:07:02.941105 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:07:02.945190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:07:02.954874 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:07:02.964837 systemd[1]: Starting ensure-sysext.service... Jan 13 20:07:02.970800 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:07:02.977880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:07:02.999778 systemd[1]: Reloading requested from client PID 1640 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:07:02.999807 systemd[1]: Reloading... Jan 13 20:07:03.058266 systemd-udevd[1642]: Using default interface naming scheme 'v255'. Jan 13 20:07:03.064232 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:07:03.065534 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:07:03.067505 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:07:03.070787 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Jan 13 20:07:03.071138 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Jan 13 20:07:03.079176 systemd-tmpfiles[1641]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:07:03.080541 systemd-tmpfiles[1641]: Skipping /boot Jan 13 20:07:03.126650 systemd-tmpfiles[1641]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:07:03.126849 systemd-tmpfiles[1641]: Skipping /boot Jan 13 20:07:03.235484 zram_generator::config[1684]: No configuration found. Jan 13 20:07:03.416910 (udev-worker)[1682]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:07:03.459936 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1673) Jan 13 20:07:03.638147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:03.790100 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:07:03.790799 systemd[1]: Reloading finished in 790 ms. Jan 13 20:07:03.821833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:07:03.834589 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 13 20:07:03.875402 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:07:03.885561 systemd[1]: Finished ensure-sysext.service. Jan 13 20:07:03.920845 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:07:03.938859 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:07:03.945774 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:07:03.948854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:07:03.951314 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:07:03.958759 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:03.967778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:07:03.989678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:03.999090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:07:04.001336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:04.005173 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:07:04.018729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:07:04.032861 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:07:04.040756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:07:04.044657 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:07:04.048618 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:07:04.053508 lvm[1840]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:07:04.068092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:07:04.072716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:04.073488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:04.078945 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:07:04.079333 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:07:04.082252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:07:04.082682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:07:04.085600 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:07:04.085886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:07:04.113018 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:07:04.113164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:07:04.148017 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:07:04.169208 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 13 20:07:04.204976 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:07:04.210779 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:07:04.227682 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:07:04.230245 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:07:04.233874 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:07:04.249756 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:07:04.259155 augenrules[1882]: No rules Jan 13 20:07:04.263955 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:07:04.265581 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:07:04.280011 lvm[1883]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:07:04.301291 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:07:04.310054 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:07:04.316527 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:07:04.328105 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:07:04.346139 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:07:04.387550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:07:04.451233 systemd-networkd[1853]: lo: Link UP Jan 13 20:07:04.451764 systemd-networkd[1853]: lo: Gained carrier Jan 13 20:07:04.454703 systemd-networkd[1853]: Enumeration completed Jan 13 20:07:04.455127 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:07:04.457695 systemd-networkd[1853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:07:04.457840 systemd-networkd[1853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:07:04.460078 systemd-networkd[1853]: eth0: Link UP Jan 13 20:07:04.460625 systemd-networkd[1853]: eth0: Gained carrier Jan 13 20:07:04.460665 systemd-networkd[1853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:07:04.471910 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:07:04.479620 systemd-networkd[1853]: eth0: DHCPv4 address 172.31.29.220/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:07:04.480770 systemd-resolved[1854]: Positive Trust Anchors: Jan 13 20:07:04.480813 systemd-resolved[1854]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:07:04.480877 systemd-resolved[1854]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:07:04.490098 systemd-resolved[1854]: Defaulting to hostname 'linux'. Jan 13 20:07:04.493193 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:07:04.495600 systemd[1]: Reached target network.target - Network. Jan 13 20:07:04.497407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:07:04.499632 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:07:04.501801 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:07:04.504153 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:07:04.506770 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:07:04.509203 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:07:04.511559 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:07:04.513831 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:07:04.513885 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:07:04.515566 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:07:04.518694 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:07:04.523227 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:07:04.531686 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:07:04.534884 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:07:04.537301 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:07:04.539510 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:07:04.541701 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:07:04.541755 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:07:04.561649 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:07:04.566422 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:07:04.577955 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:07:04.582118 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:07:04.589894 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:07:04.594483 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 13 20:07:04.611748 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:07:04.621392 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:07:04.628729 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:07:04.637694 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:07:04.644763 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:07:04.649121 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:07:04.672767 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:07:04.676695 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:07:04.677613 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:07:04.680776 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:07:04.693862 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:07:04.719782 jq[1908]: false Jan 13 20:07:04.723286 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:07:04.723651 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:07:04.762437 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:07:04.763982 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:07:04.786040 jq[1920]: true Jan 13 20:07:04.801187 dbus-daemon[1907]: [system] SELinux support is enabled Jan 13 20:07:04.801739 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:07:04.812074 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:07:04.812138 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:07:04.814695 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:07:04.814748 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 13 20:07:04.836480 extend-filesystems[1909]: Found loop4 Jan 13 20:07:04.836480 extend-filesystems[1909]: Found loop5 Jan 13 20:07:04.836480 extend-filesystems[1909]: Found loop6 Jan 13 20:07:04.836480 extend-filesystems[1909]: Found loop7 Jan 13 20:07:04.836480 extend-filesystems[1909]: Found nvme0n1 Jan 13 20:07:04.836480 extend-filesystems[1909]: Found nvme0n1p1 Jan 13 20:07:04.836480 extend-filesystems[1909]: Found nvme0n1p2 Jan 13 20:07:04.869015 extend-filesystems[1909]: Found nvme0n1p3 Jan 13 20:07:04.869015 extend-filesystems[1909]: Found usr Jan 13 20:07:04.869015 extend-filesystems[1909]: Found nvme0n1p4 Jan 13 20:07:04.869015 extend-filesystems[1909]: Found nvme0n1p6 Jan 13 20:07:04.869015 extend-filesystems[1909]: Found nvme0n1p7 Jan 13 20:07:04.869015 extend-filesystems[1909]: Found nvme0n1p9 Jan 13 20:07:04.869015 extend-filesystems[1909]: Checking size of /dev/nvme0n1p9 Jan 13 20:07:04.908112 update_engine[1919]: I20250113 20:07:04.900908 1919 main.cc:92] Flatcar Update Engine starting Jan 13 20:07:04.889284 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:07:04.881019 dbus-daemon[1907]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1853 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:07:04.891293 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:07:04.913597 update_engine[1919]: I20250113 20:07:04.910729 1919 update_check_scheduler.cc:74] Next update check in 9m17s Jan 13 20:07:04.913709 jq[1942]: true Jan 13 20:07:04.948056 tar[1926]: linux-arm64/helm Jan 13 20:07:04.919389 (ntainerd)[1936]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:07:04.932161 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:07:04.955934 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:07:04.961163 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:07:04.968065 extend-filesystems[1909]: Resized partition /dev/nvme0n1p9 Jan 13 20:07:04.988748 ntpd[1911]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting Jan 13 20:07:04.991944 ntpd[1911]: 13 Jan 20:07:04 ntpd[1911]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting Jan 13 20:07:04.991944 ntpd[1911]: 13 Jan 20:07:04 ntpd[1911]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:07:04.991944 ntpd[1911]: 13 Jan 20:07:04 ntpd[1911]: ---------------------------------------------------- Jan 13 20:07:04.991944 ntpd[1911]: 13 Jan 20:07:04 ntpd[1911]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:07:04.991944 ntpd[1911]: 13 Jan 20:07:04 ntpd[1911]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:07:04.991944 ntpd[1911]: 13 Jan 20:07:04 ntpd[1911]: corporation. Support and training for ntp-4 are Jan 13 20:07:04.991944 ntpd[1911]: 13 Jan 20:07:04 ntpd[1911]: available at https://www.nwtime.org/support Jan 13 20:07:04.991944 ntpd[1911]: 13 Jan 20:07:04 ntpd[1911]: ---------------------------------------------------- Jan 13 20:07:04.988804 ntpd[1911]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:07:04.988824 ntpd[1911]: ---------------------------------------------------- Jan 13 20:07:04.988843 ntpd[1911]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:07:04.988861 ntpd[1911]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:07:04.988880 ntpd[1911]: corporation. Support and training for ntp-4 are Jan 13 20:07:04.988898 ntpd[1911]: available at https://www.nwtime.org/support Jan 13 20:07:04.988918 ntpd[1911]: ---------------------------------------------------- Jan 13 20:07:04.998520 extend-filesystems[1960]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:07:05.012382 ntpd[1911]: proto: precision = 0.096 usec (-23) Jan 13 20:07:05.014907 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: proto: precision = 0.096 usec (-23) Jan 13 20:07:05.015592 ntpd[1911]: basedate set to 2025-01-01 Jan 13 20:07:05.017726 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: basedate set to 2025-01-01 Jan 13 20:07:05.017726 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: gps base set to 2025-01-05 (week 2348) Jan 13 20:07:05.015628 ntpd[1911]: gps base set to 2025-01-05 (week 2348) Jan 13 20:07:05.041972 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:07:05.042054 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:07:05.042054 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:07:05.040765 ntpd[1911]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:07:05.040889 ntpd[1911]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:07:05.044591 ntpd[1911]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:07:05.046534 ntpd[1911]: Listen normally on 3 eth0 172.31.29.220:123 Jan 13 20:07:05.047749 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:07:05.047749 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: Listen normally on 3 eth0 172.31.29.220:123 Jan 13 20:07:05.047749 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: Listen normally on 4 lo [::1]:123 Jan 13 20:07:05.047749 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: bind(21) AF_INET6 fe80::48a:68ff:fe61:230f%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:07:05.047749 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: unable to create socket on eth0 (5) for fe80::48a:68ff:fe61:230f%2#123 Jan 13 20:07:05.047749 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: failed to init interface for address fe80::48a:68ff:fe61:230f%2 Jan 13 20:07:05.047749 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: Listening on routing socket on fd #21 for interface updates Jan 13 20:07:05.046608 ntpd[1911]: Listen normally on 4 lo [::1]:123 Jan 13 20:07:05.046694 ntpd[1911]: bind(21) AF_INET6 fe80::48a:68ff:fe61:230f%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:07:05.046734 ntpd[1911]: unable to create socket on eth0 (5) for fe80::48a:68ff:fe61:230f%2#123 Jan 13 20:07:05.046763 ntpd[1911]: failed to init interface for address fe80::48a:68ff:fe61:230f%2 Jan 13 20:07:05.046840 ntpd[1911]: Listening on routing socket on fd #21 for interface updates Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.049 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.067 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.068 INFO Fetch successful Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.068 INFO Fetch successful Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.068 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.068 INFO Fetch successful Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.068 INFO Fetch successful Jan 13 20:07:05.068268 coreos-metadata[1906]: Jan 13 20:07:05.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:07:05.051516 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:07:05.069357 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:05.069357 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:05.069055 ntpd[1911]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:05.055164 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:07:05.080117 coreos-metadata[1906]: Jan 13 20:07:05.071 INFO Fetch failed with 404: resource not found Jan 13 20:07:05.080117 coreos-metadata[1906]: Jan 13 20:07:05.071 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:07:05.080117 coreos-metadata[1906]: Jan 13 20:07:05.073 INFO Fetch successful Jan 13 20:07:05.080117 coreos-metadata[1906]: Jan 13 20:07:05.073 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:07:05.080117 coreos-metadata[1906]: Jan 13 20:07:05.079 INFO Fetch successful Jan 13 20:07:05.080117 coreos-metadata[1906]: Jan 13 20:07:05.079 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:07:05.069105 ntpd[1911]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:05.058912 systemd-logind[1918]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:07:05.058946 systemd-logind[1918]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 20:07:05.059858 systemd-logind[1918]: New seat seat0. Jan 13 20:07:05.062183 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:07:05.099400 coreos-metadata[1906]: Jan 13 20:07:05.086 INFO Fetch successful Jan 13 20:07:05.099400 coreos-metadata[1906]: Jan 13 20:07:05.086 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:07:05.099400 coreos-metadata[1906]: Jan 13 20:07:05.089 INFO Fetch successful Jan 13 20:07:05.099400 coreos-metadata[1906]: Jan 13 20:07:05.089 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:07:05.099400 coreos-metadata[1906]: Jan 13 20:07:05.093 INFO Fetch successful Jan 13 20:07:05.171623 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:07:05.175371 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:07:05.203371 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:07:05.223019 bash[1980]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:07:05.234219 extend-filesystems[1960]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:07:05.234219 extend-filesystems[1960]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:07:05.234219 extend-filesystems[1960]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Jan 13 20:07:05.229751 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:07:05.248882 extend-filesystems[1909]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:07:05.254128 systemd[1]: Starting sshkeys.service... Jan 13 20:07:05.257566 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:07:05.259525 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:07:05.315000 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:07:05.315537 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1674) Jan 13 20:07:05.352253 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:07:05.400675 locksmithd[1957]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:07:05.560965 coreos-metadata[2001]: Jan 13 20:07:05.555 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:07:05.560965 coreos-metadata[2001]: Jan 13 20:07:05.558 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:07:05.560965 coreos-metadata[2001]: Jan 13 20:07:05.559 INFO Fetch successful Jan 13 20:07:05.560965 coreos-metadata[2001]: Jan 13 20:07:05.559 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:07:05.564542 coreos-metadata[2001]: Jan 13 20:07:05.564 INFO Fetch successful Jan 13 20:07:05.569853 unknown[2001]: wrote ssh authorized keys file for user: core Jan 13 20:07:05.647517 update-ssh-keys[2067]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:07:05.649608 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:07:05.663135 systemd[1]: Finished sshkeys.service. Jan 13 20:07:05.674974 containerd[1936]: time="2025-01-13T20:07:05.674433790Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:07:05.683226 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:07:05.683553 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:07:05.692345 dbus-daemon[1907]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1955 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:07:05.708438 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:07:05.793204 polkitd[2083]: Started polkitd version 121 Jan 13 20:07:05.803090 containerd[1936]: time="2025-01-13T20:07:05.798525431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:05.803090 containerd[1936]: time="2025-01-13T20:07:05.801815315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:05.803090 containerd[1936]: time="2025-01-13T20:07:05.801873551Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 13 20:07:05.803090 containerd[1936]: time="2025-01-13T20:07:05.801909527Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:07:05.803304 containerd[1936]: time="2025-01-13T20:07:05.803215283Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:07:05.803304 containerd[1936]: time="2025-01-13T20:07:05.803273315Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.803391155Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.803435927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.803788439Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.803822267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.803853023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.803878823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.804051383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.804429119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.804795851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:05.804880 containerd[1936]: time="2025-01-13T20:07:05.804830711Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:07:05.807845 containerd[1936]: time="2025-01-13T20:07:05.805019771Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:07:05.807845 containerd[1936]: time="2025-01-13T20:07:05.805116083Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.814049663Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.814171055Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.814285367Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.814336631Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.814371839Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.814679591Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.815379959Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.815647979Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.815684279Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:07:05.815725 containerd[1936]: time="2025-01-13T20:07:05.815719703Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.815752931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.815783435Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.815812115Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.815844071Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.815875379Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.815904023Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.815931215Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.815958467Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.815997899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.816035519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.816070883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.816103391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.816132131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.816182 containerd[1936]: time="2025-01-13T20:07:05.816160631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.816190331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.816219455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.816249167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.816282407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.816309947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.816337907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.816365123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.816396371Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.816443567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.817357799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.817393079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.817570775Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.818594867Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:07:05.820335 containerd[1936]: time="2025-01-13T20:07:05.818652131Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:07:05.820971 containerd[1936]: time="2025-01-13T20:07:05.818687975Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:07:05.820971 containerd[1936]: time="2025-01-13T20:07:05.818716475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 13 20:07:05.820971 containerd[1936]: time="2025-01-13T20:07:05.818748899Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:07:05.820971 containerd[1936]: time="2025-01-13T20:07:05.818772455Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:07:05.820971 containerd[1936]: time="2025-01-13T20:07:05.818797007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:07:05.821181 containerd[1936]: time="2025-01-13T20:07:05.819327611Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:07:05.821181 containerd[1936]: time="2025-01-13T20:07:05.819413675Z" level=info msg="Connect containerd service" Jan 13 20:07:05.821181 containerd[1936]: time="2025-01-13T20:07:05.819500351Z" level=info msg="using legacy CRI server" Jan 13 20:07:05.821181 containerd[1936]: time="2025-01-13T20:07:05.819520775Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:07:05.821181 containerd[1936]: 
time="2025-01-13T20:07:05.819746627Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:07:05.828431 containerd[1936]: time="2025-01-13T20:07:05.822140447Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:07:05.828431 containerd[1936]: time="2025-01-13T20:07:05.823967843Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:07:05.828431 containerd[1936]: time="2025-01-13T20:07:05.824073527Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:07:05.828431 containerd[1936]: time="2025-01-13T20:07:05.825830243Z" level=info msg="Start subscribing containerd event" Jan 13 20:07:05.828431 containerd[1936]: time="2025-01-13T20:07:05.825917375Z" level=info msg="Start recovering state" Jan 13 20:07:05.828431 containerd[1936]: time="2025-01-13T20:07:05.826043555Z" level=info msg="Start event monitor" Jan 13 20:07:05.828431 containerd[1936]: time="2025-01-13T20:07:05.826081187Z" level=info msg="Start snapshots syncer" Jan 13 20:07:05.828431 containerd[1936]: time="2025-01-13T20:07:05.826109051Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:07:05.828431 containerd[1936]: time="2025-01-13T20:07:05.826127543Z" level=info msg="Start streaming server" Jan 13 20:07:05.831680 containerd[1936]: time="2025-01-13T20:07:05.828643439Z" level=info msg="containerd successfully booted in 0.156789s" Jan 13 20:07:05.828759 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:07:05.861594 polkitd[2083]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:07:05.861711 polkitd[2083]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:07:05.867682 polkitd[2083]: Finished loading, compiling and executing 2 rules Jan 13 20:07:05.876195 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:07:05.877667 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:07:05.880025 polkitd[2083]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:07:05.916007 systemd-resolved[1854]: System hostname changed to 'ip-172-31-29-220'. Jan 13 20:07:05.916010 systemd-hostnamed[1955]: Hostname set to (transient) Jan 13 20:07:05.989506 ntpd[1911]: bind(24) AF_INET6 fe80::48a:68ff:fe61:230f%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:07:05.990148 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: bind(24) AF_INET6 fe80::48a:68ff:fe61:230f%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:07:05.990148 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: unable to create socket on eth0 (6) for fe80::48a:68ff:fe61:230f%2#123 Jan 13 20:07:05.990148 ntpd[1911]: 13 Jan 20:07:05 ntpd[1911]: failed to init interface for address fe80::48a:68ff:fe61:230f%2 Jan 13 20:07:05.989570 ntpd[1911]: unable to create socket on eth0 (6) for fe80::48a:68ff:fe61:230f%2#123 Jan 13 20:07:05.989598 ntpd[1911]: failed to init interface for address fe80::48a:68ff:fe61:230f%2 Jan 13 20:07:06.354941 tar[1926]: linux-arm64/LICENSE Jan 13 20:07:06.357512 tar[1926]: linux-arm64/README.md Jan 13 20:07:06.381524 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
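The CRI plugin's "no network config found in /etc/cni/net.d" error above simply means no CNI conflist has been installed yet; on a node like this it is normally provided later by a pod-network add-on rather than written by hand. Purely to illustrate what the plugin scans for, here is a sketch that drops a minimal bridge conflist into the directory named in the error; the network name, bridge name, and subnet below are made-up values, not anything from this host:

```python
import json
from pathlib import Path

CNI_DIR = Path("/etc/cni/net.d")

# Illustrative only: a minimal bridge + host-local IPAM conflist of the kind
# the CRI plugin looks for. All values here are assumptions.
conflist = {
    "cniVersion": "0.3.1",
    "name": "examplenet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

if __name__ == "__main__":
    CNI_DIR.mkdir(parents=True, exist_ok=True)
    path = CNI_DIR / "10-examplenet.conflist"
    path.write_text(json.dumps(conflist, indent=2))
    print("wrote", path)
```

Note that the CRI config dumped earlier in the log sets NetworkPluginConfDir to /etc/cni/net.d and NetworkPluginMaxConfNum to 1, so only the lexically first conflist in that directory would be used.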
Jan 13 20:07:06.455672 systemd-networkd[1853]: eth0: Gained IPv6LL Jan 13 20:07:06.459732 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:07:06.467657 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:07:06.477888 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 20:07:06.492918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:06.504885 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:07:06.589623 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:07:06.594525 amazon-ssm-agent[2115]: Initializing new seelog logger Jan 13 20:07:06.594525 amazon-ssm-agent[2115]: New Seelog Logger Creation Complete Jan 13 20:07:06.594525 amazon-ssm-agent[2115]: 2025/01/13 20:07:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:06.594525 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:06.594525 amazon-ssm-agent[2115]: 2025/01/13 20:07:06 processing appconfig overrides Jan 13 20:07:06.595793 amazon-ssm-agent[2115]: 2025/01/13 20:07:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:06.595793 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:06.595793 amazon-ssm-agent[2115]: 2025/01/13 20:07:06 processing appconfig overrides Jan 13 20:07:06.595793 amazon-ssm-agent[2115]: 2025/01/13 20:07:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:06.595793 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:06.595793 amazon-ssm-agent[2115]: 2025/01/13 20:07:06 processing appconfig overrides Jan 13 20:07:06.596507 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO Proxy environment variables: Jan 13 20:07:06.599914 amazon-ssm-agent[2115]: 2025/01/13 20:07:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:06.599914 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:06.599914 amazon-ssm-agent[2115]: 2025/01/13 20:07:06 processing appconfig overrides Jan 13 20:07:06.698532 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO https_proxy: Jan 13 20:07:06.794976 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO http_proxy: Jan 13 20:07:06.893805 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO no_proxy: Jan 13 20:07:06.992158 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:07:07.010469 sshd_keygen[1927]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:07:07.066410 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:07:07.086136 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:07:07.096009 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:07:07.093701 systemd[1]: Started sshd@0-172.31.29.220:22-139.178.68.195:52526.service - OpenSSH per-connection server daemon (139.178.68.195:52526). Jan 13 20:07:07.124898 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:07:07.129100 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:07:07.142945 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
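The earlier ntpd failures ("bind(21/24) AF_INET6 fe80::48a:68ff:fe61:230f%2#123 ... Cannot assign requested address") occur because the link-local address is not usable until eth0 finishes IPv6 setup; once systemd-networkd reports "eth0: Gained IPv6LL" here, a later bind succeeds (see the "Listen normally on 7 eth0" line further down). A small sketch, not ntpd's code, of what such a scoped bind looks like and the errno seen while the address is still missing; the address and interface name are taken from the log, and an unprivileged port is used instead of 123:

```python
import errno
import socket

ADDR = "fe80::48a:68ff:fe61:230f"   # link-local address from the log
IFACE = "eth0"
PORT = 0  # ntpd binds 123, which additionally needs CAP_NET_BIND_SERVICE

def try_bind() -> None:
    scope = socket.if_nametoindex(IFACE)      # the "%2" in the log is this scope id
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        # Link-local binds need the 4-tuple form with an explicit scope id.
        sock.bind((ADDR, PORT, 0, scope))
        print("bound", sock.getsockname())
    except OSError as err:
        if err.errno == errno.EADDRNOTAVAIL:
            # The "Cannot assign requested address" ntpd logged before the
            # interface gained its IPv6 link-local address.
            print("address not ready yet:", err)
        else:
            raise
    finally:
        sock.close()

if __name__ == "__main__":
    try_bind()
```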
Jan 13 20:07:07.184767 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:07:07.190601 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO Agent will take identity from EC2 Jan 13 20:07:07.196202 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:07:07.209010 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:07:07.213003 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:07:07.288670 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:07:07.370209 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:07:07.370209 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:07:07.370362 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:07:07.370362 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 20:07:07.370362 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:07:07.370362 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:07:07.370362 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO [Registrar] Starting registrar module Jan 13 20:07:07.370362 amazon-ssm-agent[2115]: 2025-01-13 20:07:06 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:07:07.370362 amazon-ssm-agent[2115]: 2025-01-13 20:07:07 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:07:07.370767 amazon-ssm-agent[2115]: 2025-01-13 20:07:07 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:07:07.370767 amazon-ssm-agent[2115]: 2025-01-13 20:07:07 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:07:07.370767 amazon-ssm-agent[2115]: 2025-01-13 20:07:07 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:07:07.383230 sshd[2142]: Accepted publickey for core from 139.178.68.195 port 52526 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:07.385300 sshd-session[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:07.388261 amazon-ssm-agent[2115]: 2025-01-13 20:07:07 INFO [CredentialRefresher] Next credential rotation will be in 30.108323015833335 minutes Jan 13 20:07:07.407552 systemd-logind[1918]: New session 1 of user core. Jan 13 20:07:07.412073 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:07:07.426041 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:07:07.455518 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:07:07.468077 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:07:07.488942 (systemd)[2153]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:07:07.713321 systemd[2153]: Queued start job for default target default.target. Jan 13 20:07:07.720295 systemd[2153]: Created slice app.slice - User Application Slice. Jan 13 20:07:07.720365 systemd[2153]: Reached target paths.target - Paths. Jan 13 20:07:07.720397 systemd[2153]: Reached target timers.target - Timers. 
Jan 13 20:07:07.722967 systemd[2153]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:07:07.768590 systemd[2153]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:07:07.768702 systemd[2153]: Reached target sockets.target - Sockets. Jan 13 20:07:07.768733 systemd[2153]: Reached target basic.target - Basic System. Jan 13 20:07:07.768833 systemd[2153]: Reached target default.target - Main User Target. Jan 13 20:07:07.768898 systemd[2153]: Startup finished in 266ms. Jan 13 20:07:07.768921 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:07:07.777735 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:07:07.938166 systemd[1]: Started sshd@1-172.31.29.220:22-139.178.68.195:40336.service - OpenSSH per-connection server daemon (139.178.68.195:40336). Jan 13 20:07:07.994735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:07.997965 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:07:08.000272 systemd[1]: Startup finished in 1.211s (kernel) + 8.125s (initrd) + 8.111s (userspace) = 17.447s. Jan 13 20:07:08.015998 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:08.143998 sshd[2164]: Accepted publickey for core from 139.178.68.195 port 40336 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:08.147028 sshd-session[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:08.155877 systemd-logind[1918]: New session 2 of user core. Jan 13 20:07:08.164740 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:07:08.292592 sshd[2176]: Connection closed by 139.178.68.195 port 40336 Jan 13 20:07:08.292845 sshd-session[2164]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:08.301069 systemd[1]: sshd@1-172.31.29.220:22-139.178.68.195:40336.service: Deactivated successfully. Jan 13 20:07:08.304968 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:07:08.309890 systemd-logind[1918]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:07:08.312202 systemd-logind[1918]: Removed session 2. Jan 13 20:07:08.332014 systemd[1]: Started sshd@2-172.31.29.220:22-139.178.68.195:40340.service - OpenSSH per-connection server daemon (139.178.68.195:40340). Jan 13 20:07:08.399369 amazon-ssm-agent[2115]: 2025-01-13 20:07:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:07:08.500323 amazon-ssm-agent[2115]: 2025-01-13 20:07:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2188) started Jan 13 20:07:08.531649 sshd[2185]: Accepted publickey for core from 139.178.68.195 port 40340 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:08.534827 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:08.551563 systemd-logind[1918]: New session 3 of user core. Jan 13 20:07:08.557892 systemd[1]: Started session-3.scope - Session 3 of User core. 
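The "Startup finished" line above breaks the boot time into kernel, initrd, and userspace phases whose sum should match the reported total. A quick check of that arithmetic using the figures from the line itself:

```python
import re

line = ("Startup finished in 1.211s (kernel) + 8.125s (initrd) "
        "+ 8.111s (userspace) = 17.447s")

# Pull out every "<seconds>s" figure; the last one is the reported total.
times = [float(t) for t in re.findall(r"([\d.]+)s", line)]
*parts, total = times
print(f"sum of parts = {sum(parts):.3f}s, reported total = {total:.3f}s")
# -> sum of parts = 17.447s, reported total = 17.447s
```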
Jan 13 20:07:08.601177 amazon-ssm-agent[2115]: 2025-01-13 20:07:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:07:08.679359 sshd[2195]: Connection closed by 139.178.68.195 port 40340 Jan 13 20:07:08.680381 sshd-session[2185]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:08.690178 systemd[1]: sshd@2-172.31.29.220:22-139.178.68.195:40340.service: Deactivated successfully. Jan 13 20:07:08.695144 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:07:08.696736 systemd-logind[1918]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:07:08.699363 systemd-logind[1918]: Removed session 3. Jan 13 20:07:08.719167 systemd[1]: Started sshd@3-172.31.29.220:22-139.178.68.195:40356.service - OpenSSH per-connection server daemon (139.178.68.195:40356). Jan 13 20:07:08.901071 sshd[2204]: Accepted publickey for core from 139.178.68.195 port 40356 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:08.903678 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:08.912927 systemd-logind[1918]: New session 4 of user core. Jan 13 20:07:08.919729 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:07:08.989520 ntpd[1911]: Listen normally on 7 eth0 [fe80::48a:68ff:fe61:230f%2]:123 Jan 13 20:07:08.990114 ntpd[1911]: 13 Jan 20:07:08 ntpd[1911]: Listen normally on 7 eth0 [fe80::48a:68ff:fe61:230f%2]:123 Jan 13 20:07:09.058345 sshd[2208]: Connection closed by 139.178.68.195 port 40356 Jan 13 20:07:09.058222 sshd-session[2204]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:09.064856 systemd[1]: sshd@3-172.31.29.220:22-139.178.68.195:40356.service: Deactivated successfully. Jan 13 20:07:09.069956 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:07:09.074985 systemd-logind[1918]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:07:09.077125 systemd-logind[1918]: Removed session 4. Jan 13 20:07:09.101060 systemd[1]: Started sshd@4-172.31.29.220:22-139.178.68.195:40362.service - OpenSSH per-connection server daemon (139.178.68.195:40362). Jan 13 20:07:09.131606 kubelet[2171]: E0113 20:07:09.130343 2171 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:09.137199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:09.137730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:07:09.138186 systemd[1]: kubelet.service: Consumed 1.329s CPU time. Jan 13 20:07:09.283177 sshd[2214]: Accepted publickey for core from 139.178.68.195 port 40362 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:09.285567 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:09.293704 systemd-logind[1918]: New session 5 of user core. Jan 13 20:07:09.303728 systemd[1]: Started session-5.scope - Session 5 of User core. 
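The kubelet above exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so these failures (and the scheduled restarts that follow) are expected until that happens. Purely as an illustration of the file the error refers to, a sketch that writes a minimal KubeletConfiguration; every field value below is an assumption, not taken from this host, and this is not a substitute for kubeadm's generated config:

```python
from pathlib import Path
from textwrap import dedent

# Illustrative KubeletConfiguration only; kubeadm normally generates this file,
# and the cgroup driver, DNS address, and domain below are assumed values.
KUBELET_CONFIG = dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
""")

if __name__ == "__main__":
    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)
    print("wrote", path)
```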
Jan 13 20:07:09.419785 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:07:09.420420 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:09.436055 sudo[2218]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:09.459853 sshd[2217]: Connection closed by 139.178.68.195 port 40362 Jan 13 20:07:09.458736 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:09.465058 systemd[1]: sshd@4-172.31.29.220:22-139.178.68.195:40362.service: Deactivated successfully. Jan 13 20:07:09.469228 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:07:09.470996 systemd-logind[1918]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:07:09.474066 systemd-logind[1918]: Removed session 5. Jan 13 20:07:09.502898 systemd[1]: Started sshd@5-172.31.29.220:22-139.178.68.195:40366.service - OpenSSH per-connection server daemon (139.178.68.195:40366). Jan 13 20:07:09.685412 sshd[2223]: Accepted publickey for core from 139.178.68.195 port 40366 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:09.687938 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:09.695988 systemd-logind[1918]: New session 6 of user core. Jan 13 20:07:09.702717 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:07:09.808134 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:07:09.809287 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:09.815272 sudo[2227]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:09.825076 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:07:09.825735 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:09.850034 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:07:09.898160 augenrules[2249]: No rules Jan 13 20:07:09.900293 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:07:09.900806 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:07:09.902928 sudo[2226]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:09.926339 sshd[2225]: Connection closed by 139.178.68.195 port 40366 Jan 13 20:07:09.927297 sshd-session[2223]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:09.932117 systemd-logind[1918]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:07:09.932832 systemd[1]: sshd@5-172.31.29.220:22-139.178.68.195:40366.service: Deactivated successfully. Jan 13 20:07:09.935924 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:07:09.939634 systemd-logind[1918]: Removed session 6. Jan 13 20:07:09.971931 systemd[1]: Started sshd@6-172.31.29.220:22-139.178.68.195:40382.service - OpenSSH per-connection server daemon (139.178.68.195:40382). Jan 13 20:07:10.149471 sshd[2257]: Accepted publickey for core from 139.178.68.195 port 40382 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:07:10.151860 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:10.159331 systemd-logind[1918]: New session 7 of user core. Jan 13 20:07:10.169709 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 13 20:07:10.271797 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:07:10.272432 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:10.790891 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:07:10.791177 (dockerd)[2278]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:07:11.126652 dockerd[2278]: time="2025-01-13T20:07:11.126558361Z" level=info msg="Starting up" Jan 13 20:07:11.243109 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2002283045-merged.mount: Deactivated successfully. Jan 13 20:07:11.378763 systemd[1]: var-lib-docker-metacopy\x2dcheck2894458660-merged.mount: Deactivated successfully. Jan 13 20:07:11.390413 dockerd[2278]: time="2025-01-13T20:07:11.390078026Z" level=info msg="Loading containers: start." Jan 13 20:07:11.629533 kernel: Initializing XFRM netlink socket Jan 13 20:07:11.661825 (udev-worker)[2302]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:07:11.756094 systemd-networkd[1853]: docker0: Link UP Jan 13 20:07:11.796524 dockerd[2278]: time="2025-01-13T20:07:11.795806584Z" level=info msg="Loading containers: done." Jan 13 20:07:11.822373 dockerd[2278]: time="2025-01-13T20:07:11.822303196Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:07:11.822614 dockerd[2278]: time="2025-01-13T20:07:11.822533128Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:07:11.822771 dockerd[2278]: time="2025-01-13T20:07:11.822724348Z" level=info msg="Daemon has completed initialization" Jan 13 20:07:11.877781 dockerd[2278]: time="2025-01-13T20:07:11.877605929Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:07:11.878589 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:07:12.479992 systemd-resolved[1854]: Clock change detected. Flushing caches. Jan 13 20:07:13.545796 containerd[1936]: time="2025-01-13T20:07:13.545725469Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:07:14.239972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount51283427.mount: Deactivated successfully. 
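dockerd reports "API listen on /run/docker.sock" above; the Engine API is plain HTTP served over that unix socket. A small sketch that pings it with a raw HTTP request and no third-party client; /_ping is a standard Engine API endpoint, and access to the socket (root or the docker group) is assumed:

```python
import socket

DOCKER_SOCK = "/run/docker.sock"

def docker_ping() -> str:
    # The Engine API is HTTP over a unix socket; GET /_ping returns "OK".
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(DOCKER_SOCK)
        sock.sendall(b"GET /_ping HTTP/1.1\r\n"
                     b"Host: docker\r\nConnection: close\r\n\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    print(docker_ping().splitlines()[0])   # e.g. "HTTP/1.1 200 OK"
```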
Jan 13 20:07:17.308925 containerd[1936]: time="2025-01-13T20:07:17.308864899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:17.310755 containerd[1936]: time="2025-01-13T20:07:17.310606447Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Jan 13 20:07:17.311652 containerd[1936]: time="2025-01-13T20:07:17.311610031Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:17.317932 containerd[1936]: time="2025-01-13T20:07:17.317874368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:17.322698 containerd[1936]: time="2025-01-13T20:07:17.322327928Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 3.776512087s" Jan 13 20:07:17.322698 containerd[1936]: time="2025-01-13T20:07:17.322413260Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 20:07:17.362546 containerd[1936]: time="2025-01-13T20:07:17.362493272Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:07:19.820132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:07:19.833370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:20.135158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:20.140013 (kubelet)[2538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:20.239755 kubelet[2538]: E0113 20:07:20.239405 2538 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:20.249175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:20.249521 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
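The pull record above reports both the resolved image size and the wall-clock time ("size 32198050 in 3.776512087s" for kube-apiserver v1.29.12), which is enough for a rough throughput estimate. A back-of-the-envelope from those two numbers, treating the reported size as the bytes transferred and ignoring decompression and parallel layer fetches:

```python
size_bytes = 32_198_050          # from the kube-apiserver pull record above
duration_s = 3.776512087

mib_per_s = size_bytes / duration_s / (1024 * 1024)
print(f"effective pull rate ~ {mib_per_s:.1f} MiB/s")   # ~ 8.1 MiB/s
```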
Jan 13 20:07:21.040702 containerd[1936]: time="2025-01-13T20:07:21.038889562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:21.042118 containerd[1936]: time="2025-01-13T20:07:21.042046822Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Jan 13 20:07:21.044492 containerd[1936]: time="2025-01-13T20:07:21.044452270Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:21.052066 containerd[1936]: time="2025-01-13T20:07:21.052000018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:21.054295 containerd[1936]: time="2025-01-13T20:07:21.054249322Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 3.69169839s" Jan 13 20:07:21.054456 containerd[1936]: time="2025-01-13T20:07:21.054428494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 20:07:21.095314 containerd[1936]: time="2025-01-13T20:07:21.095261638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:07:23.145725 containerd[1936]: time="2025-01-13T20:07:23.145052472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:23.147251 containerd[1936]: time="2025-01-13T20:07:23.147161820Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Jan 13 20:07:23.148142 containerd[1936]: time="2025-01-13T20:07:23.148067148Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:23.154365 containerd[1936]: time="2025-01-13T20:07:23.154299085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:23.157861 containerd[1936]: time="2025-01-13T20:07:23.157037569Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 2.061716711s" Jan 13 20:07:23.157861 containerd[1936]: time="2025-01-13T20:07:23.157093309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 20:07:23.196635 
containerd[1936]: time="2025-01-13T20:07:23.196582093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:07:24.715731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267353453.mount: Deactivated successfully. Jan 13 20:07:25.197037 containerd[1936]: time="2025-01-13T20:07:25.196953063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:25.198406 containerd[1936]: time="2025-01-13T20:07:25.198330555Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Jan 13 20:07:25.200385 containerd[1936]: time="2025-01-13T20:07:25.200280159Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:25.203826 containerd[1936]: time="2025-01-13T20:07:25.203728647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:25.205817 containerd[1936]: time="2025-01-13T20:07:25.205299999Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 2.00865805s" Jan 13 20:07:25.205817 containerd[1936]: time="2025-01-13T20:07:25.205353255Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:07:25.244476 containerd[1936]: time="2025-01-13T20:07:25.244422519Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:07:25.925082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151357446.mount: Deactivated successfully. 
Jan 13 20:07:27.422192 containerd[1936]: time="2025-01-13T20:07:27.422126202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:27.423556 containerd[1936]: time="2025-01-13T20:07:27.423441162Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 13 20:07:27.426538 containerd[1936]: time="2025-01-13T20:07:27.426451134Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:27.438695 containerd[1936]: time="2025-01-13T20:07:27.438266502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:27.444544 containerd[1936]: time="2025-01-13T20:07:27.444481578Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.199967931s" Jan 13 20:07:27.444848 containerd[1936]: time="2025-01-13T20:07:27.444804498Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:07:27.487377 containerd[1936]: time="2025-01-13T20:07:27.487244742Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:07:27.995798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2061975349.mount: Deactivated successfully. 
Jan 13 20:07:28.007338 containerd[1936]: time="2025-01-13T20:07:28.006530057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:28.008318 containerd[1936]: time="2025-01-13T20:07:28.008241665Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 13 20:07:28.011590 containerd[1936]: time="2025-01-13T20:07:28.011516285Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:28.016694 containerd[1936]: time="2025-01-13T20:07:28.016610465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:28.018775 containerd[1936]: time="2025-01-13T20:07:28.018460145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 531.157359ms" Jan 13 20:07:28.018775 containerd[1936]: time="2025-01-13T20:07:28.018547409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:07:28.058203 containerd[1936]: time="2025-01-13T20:07:28.058138769Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:07:28.680625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991369997.mount: Deactivated successfully. Jan 13 20:07:30.320182 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:07:30.335258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:30.626444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:30.644195 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:30.729520 kubelet[2682]: E0113 20:07:30.729419 2682 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:30.734305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:30.734720 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:07:33.218709 containerd[1936]: time="2025-01-13T20:07:33.218616143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:33.220929 containerd[1936]: time="2025-01-13T20:07:33.220842527Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 13 20:07:33.223945 containerd[1936]: time="2025-01-13T20:07:33.223869251Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:33.230141 containerd[1936]: time="2025-01-13T20:07:33.230061851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:07:33.233145 containerd[1936]: time="2025-01-13T20:07:33.232544747Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 5.174347898s" Jan 13 20:07:33.233145 containerd[1936]: time="2025-01-13T20:07:33.232603295Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 20:07:36.442727 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 20:07:40.820203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:07:40.829125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:41.133922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:41.149119 (kubelet)[2767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:41.242689 kubelet[2767]: E0113 20:07:41.241008 2767 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:41.245498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:41.246755 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:07:43.108317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:43.119190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:43.158284 systemd[1]: Reloading requested from client PID 2783 ('systemctl') (unit session-7.scope)... Jan 13 20:07:43.160757 systemd[1]: Reloading... Jan 13 20:07:43.438771 zram_generator::config[2832]: No configuration found. Jan 13 20:07:43.657150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:43.819889 systemd[1]: Reloading finished in 658 ms. 
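During the daemon reload above, systemd also warns that docker.socket still uses the legacy /var/run/docker.sock path and asks for the unit file to be updated. A sketch of one way to do that with a drop-in rather than editing the shipped unit; the drop-in path and the need to clear the list-valued ListenStream= before re-setting it follow standard systemd override conventions, and this is an illustration, not a change the log shows being made:

```python
from pathlib import Path
from textwrap import dedent

# Illustrative drop-in only: ListenStream= is list-valued, so it is cleared
# first and then set to the non-legacy path the warning suggests.
dropin_dir = Path("/etc/systemd/system/docker.socket.d")
dropin = dedent("""\
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
""")

if __name__ == "__main__":
    dropin_dir.mkdir(parents=True, exist_ok=True)
    (dropin_dir / "10-socket-path.conf").write_text(dropin)
    print("wrote drop-in; run `systemctl daemon-reload` to apply")
```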
Jan 13 20:07:43.899805 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:07:43.900001 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:07:43.900622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:43.908496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:44.183984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:44.195204 (kubelet)[2886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:07:44.280722 kubelet[2886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:07:44.280722 kubelet[2886]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:07:44.280722 kubelet[2886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:07:44.281382 kubelet[2886]: I0113 20:07:44.280852 2886 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:07:45.058873 kubelet[2886]: I0113 20:07:45.058818 2886 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:07:45.058873 kubelet[2886]: I0113 20:07:45.058874 2886 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:07:45.059265 kubelet[2886]: I0113 20:07:45.059227 2886 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:07:45.096812 kubelet[2886]: I0113 20:07:45.096288 2886 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:07:45.097247 kubelet[2886]: E0113 20:07:45.097208 2886 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.112528 kubelet[2886]: I0113 20:07:45.112488 2886 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:07:45.113170 kubelet[2886]: I0113 20:07:45.113147 2886 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:07:45.113597 kubelet[2886]: I0113 20:07:45.113558 2886 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:07:45.113880 kubelet[2886]: I0113 20:07:45.113857 2886 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:07:45.114370 kubelet[2886]: I0113 20:07:45.113966 2886 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:07:45.114370 kubelet[2886]: I0113 20:07:45.114145 2886 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:45.118569 kubelet[2886]: I0113 20:07:45.118535 2886 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:07:45.118815 kubelet[2886]: I0113 20:07:45.118792 2886 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:07:45.118954 kubelet[2886]: I0113 20:07:45.118935 2886 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:07:45.119086 kubelet[2886]: I0113 20:07:45.119066 2886 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:07:45.123339 kubelet[2886]: W0113 20:07:45.123243 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-220&limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.124731 kubelet[2886]: E0113 20:07:45.123571 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-220&limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.125968 kubelet[2886]: W0113 20:07:45.125872 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.220:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.125968 kubelet[2886]: E0113 20:07:45.125974 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.220:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.128030 kubelet[2886]: I0113 20:07:45.127958 2886 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:07:45.128558 kubelet[2886]: I0113 20:07:45.128512 2886 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:07:45.128703 kubelet[2886]: W0113 20:07:45.128644 2886 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:07:45.130771 kubelet[2886]: I0113 20:07:45.130711 2886 server.go:1256] "Started kubelet" Jan 13 20:07:45.143363 kubelet[2886]: E0113 20:07:45.143319 2886 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.220:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-220.181a595daa69ce9e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-220,UID:ip-172-31-29-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-220,},FirstTimestamp:2025-01-13 20:07:45.130639006 +0000 UTC m=+0.928450290,LastTimestamp:2025-01-13 20:07:45.130639006 +0000 UTC m=+0.928450290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-220,}" Jan 13 20:07:45.144531 kubelet[2886]: I0113 20:07:45.144485 2886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:07:45.145703 kubelet[2886]: I0113 20:07:45.145622 2886 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:07:45.147333 kubelet[2886]: I0113 20:07:45.147291 2886 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:07:45.149365 kubelet[2886]: I0113 20:07:45.149316 2886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:07:45.149991 kubelet[2886]: I0113 20:07:45.149953 2886 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:07:45.152274 kubelet[2886]: I0113 20:07:45.151173 2886 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:07:45.154346 kubelet[2886]: I0113 20:07:45.154307 2886 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:07:45.154611 kubelet[2886]: I0113 20:07:45.154588 2886 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:07:45.155365 kubelet[2886]: W0113 20:07:45.155295 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.155586 kubelet[2886]: E0113 20:07:45.155562 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get "https://172.31.29.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.156440 kubelet[2886]: E0113 20:07:45.156407 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-29-220\" not found" Jan 13 20:07:45.157198 kubelet[2886]: E0113 20:07:45.157162 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-220?timeout=10s\": dial tcp 172.31.29.220:6443: connect: connection refused" interval="200ms" Jan 13 20:07:45.158631 kubelet[2886]: I0113 20:07:45.158594 2886 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:07:45.159001 kubelet[2886]: I0113 20:07:45.158971 2886 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:07:45.161499 kubelet[2886]: I0113 20:07:45.161459 2886 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:07:45.173879 kubelet[2886]: E0113 20:07:45.173842 2886 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:07:45.178518 kubelet[2886]: I0113 20:07:45.178449 2886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:07:45.182112 kubelet[2886]: I0113 20:07:45.182053 2886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:07:45.182112 kubelet[2886]: I0113 20:07:45.182102 2886 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:07:45.182302 kubelet[2886]: I0113 20:07:45.182146 2886 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:07:45.182302 kubelet[2886]: E0113 20:07:45.182234 2886 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:07:45.193770 kubelet[2886]: W0113 20:07:45.193632 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.194114 kubelet[2886]: E0113 20:07:45.193967 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.206842 kubelet[2886]: I0113 20:07:45.206654 2886 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:07:45.207013 kubelet[2886]: I0113 20:07:45.206879 2886 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:07:45.207013 kubelet[2886]: I0113 20:07:45.206914 2886 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:45.211390 kubelet[2886]: I0113 20:07:45.211332 2886 policy_none.go:49] "None policy: Start" Jan 13 20:07:45.213239 kubelet[2886]: I0113 20:07:45.212728 2886 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:07:45.213239 kubelet[2886]: I0113 20:07:45.212798 2886 state_mem.go:35] 
"Initializing new in-memory state store" Jan 13 20:07:45.229282 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:07:45.247583 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:07:45.257518 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:07:45.261590 kubelet[2886]: I0113 20:07:45.261546 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-220" Jan 13 20:07:45.262496 kubelet[2886]: E0113 20:07:45.262432 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.220:6443/api/v1/nodes\": dial tcp 172.31.29.220:6443: connect: connection refused" node="ip-172-31-29-220" Jan 13 20:07:45.265590 kubelet[2886]: I0113 20:07:45.265535 2886 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:07:45.266178 kubelet[2886]: I0113 20:07:45.266108 2886 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:07:45.271223 kubelet[2886]: E0113 20:07:45.271059 2886 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-220\" not found" Jan 13 20:07:45.282705 kubelet[2886]: I0113 20:07:45.282611 2886 topology_manager.go:215] "Topology Admit Handler" podUID="2902c0cd443a592f2f2a6f5fb1125e16" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:45.285019 kubelet[2886]: I0113 20:07:45.284644 2886 topology_manager.go:215] "Topology Admit Handler" podUID="501da319af8babe4a84db8cc33e022ff" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-220" Jan 13 20:07:45.288183 kubelet[2886]: I0113 20:07:45.287839 2886 topology_manager.go:215] "Topology Admit Handler" podUID="08d432e52f08a684c46e679a6eb26905" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-220" Jan 13 20:07:45.300319 systemd[1]: Created slice kubepods-burstable-pod2902c0cd443a592f2f2a6f5fb1125e16.slice - libcontainer container kubepods-burstable-pod2902c0cd443a592f2f2a6f5fb1125e16.slice. Jan 13 20:07:45.328453 systemd[1]: Created slice kubepods-burstable-pod501da319af8babe4a84db8cc33e022ff.slice - libcontainer container kubepods-burstable-pod501da319af8babe4a84db8cc33e022ff.slice. Jan 13 20:07:45.338077 systemd[1]: Created slice kubepods-burstable-pod08d432e52f08a684c46e679a6eb26905.slice - libcontainer container kubepods-burstable-pod08d432e52f08a684c46e679a6eb26905.slice. 
Jan 13 20:07:45.358325 kubelet[2886]: E0113 20:07:45.358257 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-220?timeout=10s\": dial tcp 172.31.29.220:6443: connect: connection refused" interval="400ms" Jan 13 20:07:45.455938 kubelet[2886]: I0113 20:07:45.455870 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-220\" (UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:45.456097 kubelet[2886]: I0113 20:07:45.455959 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-220\" (UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:45.456097 kubelet[2886]: I0113 20:07:45.456008 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/501da319af8babe4a84db8cc33e022ff-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-220\" (UID: \"501da319af8babe4a84db8cc33e022ff\") " pod="kube-system/kube-scheduler-ip-172-31-29-220" Jan 13 20:07:45.456097 kubelet[2886]: I0113 20:07:45.456054 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08d432e52f08a684c46e679a6eb26905-ca-certs\") pod \"kube-apiserver-ip-172-31-29-220\" (UID: \"08d432e52f08a684c46e679a6eb26905\") " pod="kube-system/kube-apiserver-ip-172-31-29-220" Jan 13 20:07:45.456097 kubelet[2886]: I0113 20:07:45.456100 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-220\" (UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:45.456342 kubelet[2886]: I0113 20:07:45.456143 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-220\" (UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:45.456342 kubelet[2886]: I0113 20:07:45.456189 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08d432e52f08a684c46e679a6eb26905-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-220\" (UID: \"08d432e52f08a684c46e679a6eb26905\") " pod="kube-system/kube-apiserver-ip-172-31-29-220" Jan 13 20:07:45.456342 kubelet[2886]: I0113 20:07:45.456244 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-220\" 
(UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:45.456342 kubelet[2886]: I0113 20:07:45.456287 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08d432e52f08a684c46e679a6eb26905-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-220\" (UID: \"08d432e52f08a684c46e679a6eb26905\") " pod="kube-system/kube-apiserver-ip-172-31-29-220" Jan 13 20:07:45.466727 kubelet[2886]: I0113 20:07:45.466268 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-220" Jan 13 20:07:45.467096 kubelet[2886]: E0113 20:07:45.467069 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.220:6443/api/v1/nodes\": dial tcp 172.31.29.220:6443: connect: connection refused" node="ip-172-31-29-220" Jan 13 20:07:45.619907 containerd[1936]: time="2025-01-13T20:07:45.619745544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-220,Uid:2902c0cd443a592f2f2a6f5fb1125e16,Namespace:kube-system,Attempt:0,}" Jan 13 20:07:45.636414 containerd[1936]: time="2025-01-13T20:07:45.635995656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-220,Uid:501da319af8babe4a84db8cc33e022ff,Namespace:kube-system,Attempt:0,}" Jan 13 20:07:45.643200 containerd[1936]: time="2025-01-13T20:07:45.643139400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-220,Uid:08d432e52f08a684c46e679a6eb26905,Namespace:kube-system,Attempt:0,}" Jan 13 20:07:45.759902 kubelet[2886]: E0113 20:07:45.759769 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-220?timeout=10s\": dial tcp 172.31.29.220:6443: connect: connection refused" interval="800ms" Jan 13 20:07:45.870223 kubelet[2886]: I0113 20:07:45.869556 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-220" Jan 13 20:07:45.870223 kubelet[2886]: E0113 20:07:45.870043 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.220:6443/api/v1/nodes\": dial tcp 172.31.29.220:6443: connect: connection refused" node="ip-172-31-29-220" Jan 13 20:07:45.965451 kubelet[2886]: W0113 20:07:45.965399 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:45.965607 kubelet[2886]: E0113 20:07:45.965465 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:46.147628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530842218.mount: Deactivated successfully. 
Jan 13 20:07:46.166761 containerd[1936]: time="2025-01-13T20:07:46.165838031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:46.174614 containerd[1936]: time="2025-01-13T20:07:46.174322871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 20:07:46.177723 containerd[1936]: time="2025-01-13T20:07:46.176783255Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:46.180097 containerd[1936]: time="2025-01-13T20:07:46.179979155Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:46.184072 containerd[1936]: time="2025-01-13T20:07:46.183968747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:07:46.187599 containerd[1936]: time="2025-01-13T20:07:46.187488383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:46.188398 containerd[1936]: time="2025-01-13T20:07:46.188271635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:07:46.190323 containerd[1936]: time="2025-01-13T20:07:46.190221071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:46.192944 containerd[1936]: time="2025-01-13T20:07:46.192125495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.252091ms" Jan 13 20:07:46.198173 containerd[1936]: time="2025-01-13T20:07:46.198090251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.973563ms" Jan 13 20:07:46.202297 kubelet[2886]: W0113 20:07:46.202150 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.220:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:46.202297 kubelet[2886]: E0113 20:07:46.202217 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.220:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:46.218635 containerd[1936]: time="2025-01-13T20:07:46.218277035Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.016039ms" Jan 13 20:07:46.361416 kubelet[2886]: W0113 20:07:46.361307 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:46.361416 kubelet[2886]: E0113 20:07:46.361379 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:46.392754 containerd[1936]: time="2025-01-13T20:07:46.392154780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:07:46.392754 containerd[1936]: time="2025-01-13T20:07:46.392317368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:07:46.392754 containerd[1936]: time="2025-01-13T20:07:46.392356428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:46.392754 containerd[1936]: time="2025-01-13T20:07:46.392546832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:46.400692 containerd[1936]: time="2025-01-13T20:07:46.400369692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:07:46.400692 containerd[1936]: time="2025-01-13T20:07:46.400495416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:07:46.400916 containerd[1936]: time="2025-01-13T20:07:46.400533600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:46.401819 containerd[1936]: time="2025-01-13T20:07:46.401452284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:46.406790 containerd[1936]: time="2025-01-13T20:07:46.406176864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:07:46.406790 containerd[1936]: time="2025-01-13T20:07:46.406275576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:07:46.406790 containerd[1936]: time="2025-01-13T20:07:46.406312848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:46.406790 containerd[1936]: time="2025-01-13T20:07:46.406501776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:46.452003 systemd[1]: Started cri-containerd-fde4cc6258d455f5895c036024c0345c64fc99d5c0f9bd71204d437aaa6de83f.scope - libcontainer container fde4cc6258d455f5895c036024c0345c64fc99d5c0f9bd71204d437aaa6de83f. Jan 13 20:07:46.474965 systemd[1]: Started cri-containerd-d05182d92a43c58d60f9e8c196c284aa0584bb58fc9b62f2830fac8f05a20887.scope - libcontainer container d05182d92a43c58d60f9e8c196c284aa0584bb58fc9b62f2830fac8f05a20887. Jan 13 20:07:46.489270 systemd[1]: Started cri-containerd-7d7dc90a0eb421fce962fba66fa35e45d01c5d498e758224f0ff84b28845a583.scope - libcontainer container 7d7dc90a0eb421fce962fba66fa35e45d01c5d498e758224f0ff84b28845a583. Jan 13 20:07:46.502617 kubelet[2886]: W0113 20:07:46.501553 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-220&limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:46.502617 kubelet[2886]: E0113 20:07:46.501695 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-220&limit=500&resourceVersion=0": dial tcp 172.31.29.220:6443: connect: connection refused Jan 13 20:07:46.562457 kubelet[2886]: E0113 20:07:46.560519 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-220?timeout=10s\": dial tcp 172.31.29.220:6443: connect: connection refused" interval="1.6s" Jan 13 20:07:46.584230 containerd[1936]: time="2025-01-13T20:07:46.584174341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-220,Uid:501da319af8babe4a84db8cc33e022ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"fde4cc6258d455f5895c036024c0345c64fc99d5c0f9bd71204d437aaa6de83f\"" Jan 13 20:07:46.601605 containerd[1936]: time="2025-01-13T20:07:46.601544845Z" level=info msg="CreateContainer within sandbox \"fde4cc6258d455f5895c036024c0345c64fc99d5c0f9bd71204d437aaa6de83f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:07:46.622642 containerd[1936]: time="2025-01-13T20:07:46.622583377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-220,Uid:2902c0cd443a592f2f2a6f5fb1125e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"d05182d92a43c58d60f9e8c196c284aa0584bb58fc9b62f2830fac8f05a20887\"" Jan 13 20:07:46.631342 containerd[1936]: time="2025-01-13T20:07:46.630095173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-220,Uid:08d432e52f08a684c46e679a6eb26905,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d7dc90a0eb421fce962fba66fa35e45d01c5d498e758224f0ff84b28845a583\"" Jan 13 20:07:46.633475 containerd[1936]: time="2025-01-13T20:07:46.633410041Z" level=info msg="CreateContainer within sandbox \"d05182d92a43c58d60f9e8c196c284aa0584bb58fc9b62f2830fac8f05a20887\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:07:46.635976 containerd[1936]: time="2025-01-13T20:07:46.635904097Z" level=info msg="CreateContainer within sandbox \"7d7dc90a0eb421fce962fba66fa35e45d01c5d498e758224f0ff84b28845a583\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 
20:07:46.650094 containerd[1936]: time="2025-01-13T20:07:46.650036161Z" level=info msg="CreateContainer within sandbox \"fde4cc6258d455f5895c036024c0345c64fc99d5c0f9bd71204d437aaa6de83f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7cc6e033035bf2db434a20369a0c5852de49e6f7c614ae2b4809dc351dfa2043\"" Jan 13 20:07:46.652718 containerd[1936]: time="2025-01-13T20:07:46.651238417Z" level=info msg="StartContainer for \"7cc6e033035bf2db434a20369a0c5852de49e6f7c614ae2b4809dc351dfa2043\"" Jan 13 20:07:46.674173 kubelet[2886]: I0113 20:07:46.674129 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-220" Jan 13 20:07:46.674654 kubelet[2886]: E0113 20:07:46.674624 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.220:6443/api/v1/nodes\": dial tcp 172.31.29.220:6443: connect: connection refused" node="ip-172-31-29-220" Jan 13 20:07:46.683339 containerd[1936]: time="2025-01-13T20:07:46.683107333Z" level=info msg="CreateContainer within sandbox \"7d7dc90a0eb421fce962fba66fa35e45d01c5d498e758224f0ff84b28845a583\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7c4e2e973cafe3426618686d4f13dc8474635b4f08032180e1d7bc3b04588bc8\"" Jan 13 20:07:46.684571 containerd[1936]: time="2025-01-13T20:07:46.684371401Z" level=info msg="StartContainer for \"7c4e2e973cafe3426618686d4f13dc8474635b4f08032180e1d7bc3b04588bc8\"" Jan 13 20:07:46.688248 containerd[1936]: time="2025-01-13T20:07:46.688086949Z" level=info msg="CreateContainer within sandbox \"d05182d92a43c58d60f9e8c196c284aa0584bb58fc9b62f2830fac8f05a20887\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c72b08bbd99864b2c6978e99a4b8a377124f6f428f4490f54ccadefac1518164\"" Jan 13 20:07:46.689520 containerd[1936]: time="2025-01-13T20:07:46.689350417Z" level=info msg="StartContainer for \"c72b08bbd99864b2c6978e99a4b8a377124f6f428f4490f54ccadefac1518164\"" Jan 13 20:07:46.721002 systemd[1]: Started cri-containerd-7cc6e033035bf2db434a20369a0c5852de49e6f7c614ae2b4809dc351dfa2043.scope - libcontainer container 7cc6e033035bf2db434a20369a0c5852de49e6f7c614ae2b4809dc351dfa2043. Jan 13 20:07:46.777100 systemd[1]: Started cri-containerd-c72b08bbd99864b2c6978e99a4b8a377124f6f428f4490f54ccadefac1518164.scope - libcontainer container c72b08bbd99864b2c6978e99a4b8a377124f6f428f4490f54ccadefac1518164. Jan 13 20:07:46.793380 systemd[1]: Started cri-containerd-7c4e2e973cafe3426618686d4f13dc8474635b4f08032180e1d7bc3b04588bc8.scope - libcontainer container 7c4e2e973cafe3426618686d4f13dc8474635b4f08032180e1d7bc3b04588bc8. 
Jan 13 20:07:46.848761 containerd[1936]: time="2025-01-13T20:07:46.848511326Z" level=info msg="StartContainer for \"7cc6e033035bf2db434a20369a0c5852de49e6f7c614ae2b4809dc351dfa2043\" returns successfully" Jan 13 20:07:46.947811 containerd[1936]: time="2025-01-13T20:07:46.946762359Z" level=info msg="StartContainer for \"7c4e2e973cafe3426618686d4f13dc8474635b4f08032180e1d7bc3b04588bc8\" returns successfully" Jan 13 20:07:46.947940 containerd[1936]: time="2025-01-13T20:07:46.946775907Z" level=info msg="StartContainer for \"c72b08bbd99864b2c6978e99a4b8a377124f6f428f4490f54ccadefac1518164\" returns successfully" Jan 13 20:07:48.277357 kubelet[2886]: I0113 20:07:48.277309 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-220" Jan 13 20:07:50.444370 kubelet[2886]: E0113 20:07:50.444309 2886 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-220\" not found" node="ip-172-31-29-220" Jan 13 20:07:50.523046 kubelet[2886]: I0113 20:07:50.522940 2886 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-220" Jan 13 20:07:50.561430 kubelet[2886]: E0113 20:07:50.561380 2886 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-220.181a595daa69ce9e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-220,UID:ip-172-31-29-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-220,},FirstTimestamp:2025-01-13 20:07:45.130639006 +0000 UTC m=+0.928450290,LastTimestamp:2025-01-13 20:07:45.130639006 +0000 UTC m=+0.928450290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-220,}" Jan 13 20:07:51.047306 update_engine[1919]: I20250113 20:07:51.046712 1919 update_attempter.cc:509] Updating boot flags... Jan 13 20:07:51.127257 kubelet[2886]: I0113 20:07:51.127208 2886 apiserver.go:52] "Watching apiserver" Jan 13 20:07:51.154917 kubelet[2886]: I0113 20:07:51.154876 2886 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:07:51.183755 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3173) Jan 13 20:07:51.718723 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3179) Jan 13 20:07:52.160763 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3179) Jan 13 20:07:53.909206 systemd[1]: Reloading requested from client PID 3427 ('systemctl') (unit session-7.scope)... Jan 13 20:07:53.909239 systemd[1]: Reloading... Jan 13 20:07:54.080705 zram_generator::config[3470]: No configuration found. Jan 13 20:07:54.314333 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:54.511394 systemd[1]: Reloading finished in 601 ms. Jan 13 20:07:54.585999 kubelet[2886]: I0113 20:07:54.585947 2886 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:07:54.586299 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:07:54.599134 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:07:54.599567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:54.599657 systemd[1]: kubelet.service: Consumed 1.723s CPU time, 115.2M memory peak, 0B memory swap peak. Jan 13 20:07:54.607251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:54.914064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:54.923533 (kubelet)[3527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:07:55.036711 kubelet[3527]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:07:55.036711 kubelet[3527]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:07:55.036711 kubelet[3527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:07:55.037243 kubelet[3527]: I0113 20:07:55.036806 3527 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:07:55.053688 kubelet[3527]: I0113 20:07:55.053436 3527 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:07:55.053688 kubelet[3527]: I0113 20:07:55.053488 3527 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:07:55.054877 kubelet[3527]: I0113 20:07:55.054163 3527 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:07:55.059701 kubelet[3527]: I0113 20:07:55.058778 3527 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:07:55.063280 kubelet[3527]: I0113 20:07:55.063219 3527 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:07:55.078765 kubelet[3527]: I0113 20:07:55.078713 3527 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:07:55.079765 kubelet[3527]: I0113 20:07:55.079308 3527 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:07:55.079765 kubelet[3527]: I0113 20:07:55.079599 3527 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:07:55.079765 kubelet[3527]: I0113 20:07:55.079640 3527 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:07:55.079765 kubelet[3527]: I0113 20:07:55.079701 3527 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:07:55.079765 kubelet[3527]: I0113 20:07:55.079760 3527 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:55.080169 kubelet[3527]: I0113 20:07:55.079964 3527 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:07:55.081692 kubelet[3527]: I0113 20:07:55.080751 3527 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:07:55.081692 kubelet[3527]: I0113 20:07:55.080842 3527 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:07:55.081692 kubelet[3527]: I0113 20:07:55.080872 3527 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:07:55.081339 sudo[3540]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:07:55.083186 sudo[3540]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:07:55.085548 kubelet[3527]: I0113 20:07:55.084205 3527 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:07:55.085548 kubelet[3527]: I0113 20:07:55.084619 3527 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:07:55.086439 kubelet[3527]: I0113 20:07:55.086382 3527 server.go:1256] "Started kubelet" Jan 13 20:07:55.097255 kubelet[3527]: I0113 20:07:55.097196 3527 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:07:55.111707 kubelet[3527]: I0113 20:07:55.111553 3527 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 
20:07:55.113419 kubelet[3527]: I0113 20:07:55.112972 3527 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:07:55.129691 kubelet[3527]: I0113 20:07:55.127765 3527 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:07:55.129691 kubelet[3527]: I0113 20:07:55.128128 3527 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:07:55.146715 kubelet[3527]: I0113 20:07:55.145054 3527 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:07:55.146715 kubelet[3527]: I0113 20:07:55.146169 3527 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:07:55.146715 kubelet[3527]: I0113 20:07:55.146479 3527 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:07:55.167733 kubelet[3527]: I0113 20:07:55.167363 3527 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:07:55.173038 kubelet[3527]: I0113 20:07:55.173000 3527 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:07:55.173236 kubelet[3527]: I0113 20:07:55.173216 3527 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:07:55.173364 kubelet[3527]: I0113 20:07:55.173328 3527 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:07:55.173565 kubelet[3527]: E0113 20:07:55.173545 3527 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:07:55.211177 kubelet[3527]: I0113 20:07:55.211121 3527 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:07:55.211177 kubelet[3527]: I0113 20:07:55.211161 3527 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:07:55.211383 kubelet[3527]: I0113 20:07:55.211335 3527 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:07:55.245759 kubelet[3527]: E0113 20:07:55.245722 3527 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:07:55.268421 kubelet[3527]: I0113 20:07:55.268369 3527 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-220" Jan 13 20:07:55.275896 kubelet[3527]: E0113 20:07:55.275781 3527 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:07:55.292867 kubelet[3527]: I0113 20:07:55.292696 3527 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-220" Jan 13 20:07:55.292867 kubelet[3527]: I0113 20:07:55.292817 3527 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-220" Jan 13 20:07:55.394989 kubelet[3527]: I0113 20:07:55.394695 3527 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:07:55.394989 kubelet[3527]: I0113 20:07:55.394738 3527 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:07:55.395939 kubelet[3527]: I0113 20:07:55.395496 3527 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:55.395939 kubelet[3527]: I0113 20:07:55.395769 3527 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:07:55.395939 kubelet[3527]: I0113 20:07:55.395808 3527 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:07:55.395939 kubelet[3527]: I0113 20:07:55.395825 3527 policy_none.go:49] "None policy: Start" Jan 13 20:07:55.398995 kubelet[3527]: I0113 20:07:55.398200 3527 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:07:55.398995 kubelet[3527]: I0113 20:07:55.398465 3527 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:07:55.398995 kubelet[3527]: I0113 20:07:55.398791 3527 state_mem.go:75] "Updated machine memory state" Jan 13 20:07:55.411080 kubelet[3527]: I0113 20:07:55.410446 3527 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:07:55.411629 kubelet[3527]: I0113 20:07:55.411594 3527 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:07:55.477808 kubelet[3527]: I0113 20:07:55.476153 3527 topology_manager.go:215] "Topology Admit Handler" podUID="08d432e52f08a684c46e679a6eb26905" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-220" Jan 13 20:07:55.477808 kubelet[3527]: I0113 20:07:55.476280 3527 topology_manager.go:215] "Topology Admit Handler" podUID="2902c0cd443a592f2f2a6f5fb1125e16" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:55.477808 kubelet[3527]: I0113 20:07:55.476394 3527 topology_manager.go:215] "Topology Admit Handler" podUID="501da319af8babe4a84db8cc33e022ff" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-220" Jan 13 20:07:55.494391 kubelet[3527]: E0113 20:07:55.494333 3527 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-220\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-220" Jan 13 20:07:55.551577 kubelet[3527]: I0113 20:07:55.551521 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-220\" (UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:55.551729 kubelet[3527]: I0113 20:07:55.551614 3527 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-220\" (UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:55.551729 kubelet[3527]: I0113 20:07:55.551698 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-220\" (UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:55.551874 kubelet[3527]: I0113 20:07:55.551752 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/501da319af8babe4a84db8cc33e022ff-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-220\" (UID: \"501da319af8babe4a84db8cc33e022ff\") " pod="kube-system/kube-scheduler-ip-172-31-29-220" Jan 13 20:07:55.551874 kubelet[3527]: I0113 20:07:55.551803 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08d432e52f08a684c46e679a6eb26905-ca-certs\") pod \"kube-apiserver-ip-172-31-29-220\" (UID: \"08d432e52f08a684c46e679a6eb26905\") " pod="kube-system/kube-apiserver-ip-172-31-29-220" Jan 13 20:07:55.551874 kubelet[3527]: I0113 20:07:55.551848 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-220\" (UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:55.552022 kubelet[3527]: I0113 20:07:55.551892 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2902c0cd443a592f2f2a6f5fb1125e16-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-220\" (UID: \"2902c0cd443a592f2f2a6f5fb1125e16\") " pod="kube-system/kube-controller-manager-ip-172-31-29-220" Jan 13 20:07:55.552022 kubelet[3527]: I0113 20:07:55.551936 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08d432e52f08a684c46e679a6eb26905-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-220\" (UID: \"08d432e52f08a684c46e679a6eb26905\") " pod="kube-system/kube-apiserver-ip-172-31-29-220" Jan 13 20:07:55.552022 kubelet[3527]: I0113 20:07:55.551985 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08d432e52f08a684c46e679a6eb26905-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-220\" (UID: \"08d432e52f08a684c46e679a6eb26905\") " pod="kube-system/kube-apiserver-ip-172-31-29-220" Jan 13 20:07:56.006580 sudo[3540]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:56.084436 kubelet[3527]: I0113 20:07:56.084020 3527 apiserver.go:52] "Watching apiserver" Jan 13 20:07:56.146880 kubelet[3527]: I0113 20:07:56.146752 3527 desired_state_of_world_populator.go:159] "Finished populating initial desired state 
of world" Jan 13 20:07:56.305860 kubelet[3527]: E0113 20:07:56.305699 3527 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-220\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-220" Jan 13 20:07:56.337113 kubelet[3527]: I0113 20:07:56.336632 3527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-220" podStartSLOduration=1.336550689 podStartE2EDuration="1.336550689s" podCreationTimestamp="2025-01-13 20:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:07:56.335721921 +0000 UTC m=+1.399247864" watchObservedRunningTime="2025-01-13 20:07:56.336550689 +0000 UTC m=+1.400076608" Jan 13 20:07:56.353431 kubelet[3527]: I0113 20:07:56.353051 3527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-220" podStartSLOduration=3.3529927170000002 podStartE2EDuration="3.352992717s" podCreationTimestamp="2025-01-13 20:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:07:56.352308849 +0000 UTC m=+1.415834816" watchObservedRunningTime="2025-01-13 20:07:56.352992717 +0000 UTC m=+1.416518672" Jan 13 20:07:56.644767 kubelet[3527]: I0113 20:07:56.644107 3527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-220" podStartSLOduration=1.644052923 podStartE2EDuration="1.644052923s" podCreationTimestamp="2025-01-13 20:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:07:56.371307598 +0000 UTC m=+1.434833541" watchObservedRunningTime="2025-01-13 20:07:56.644052923 +0000 UTC m=+1.707578842" Jan 13 20:07:58.015913 sudo[2260]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:58.039602 sshd[2259]: Connection closed by 139.178.68.195 port 40382 Jan 13 20:07:58.039419 sshd-session[2257]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:58.047869 systemd[1]: sshd@6-172.31.29.220:22-139.178.68.195:40382.service: Deactivated successfully. Jan 13 20:07:58.054716 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:07:58.055624 systemd[1]: session-7.scope: Consumed 13.170s CPU time, 187.3M memory peak, 0B memory swap peak. Jan 13 20:07:58.060409 systemd-logind[1918]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:07:58.064534 systemd-logind[1918]: Removed session 7. Jan 13 20:08:06.153243 kubelet[3527]: I0113 20:08:06.153183 3527 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:08:06.154229 containerd[1936]: time="2025-01-13T20:08:06.154079910Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:08:06.154866 kubelet[3527]: I0113 20:08:06.154806 3527 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:08:06.898118 kubelet[3527]: I0113 20:08:06.898046 3527 topology_manager.go:215] "Topology Admit Handler" podUID="f0dc82b0-aec1-4bae-8a4b-6821280c849d" podNamespace="kube-system" podName="kube-proxy-4v7ql" Jan 13 20:08:06.920601 systemd[1]: Created slice kubepods-besteffort-podf0dc82b0_aec1_4bae_8a4b_6821280c849d.slice - libcontainer container kubepods-besteffort-podf0dc82b0_aec1_4bae_8a4b_6821280c849d.slice. Jan 13 20:08:06.926098 kubelet[3527]: I0113 20:08:06.926035 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0dc82b0-aec1-4bae-8a4b-6821280c849d-xtables-lock\") pod \"kube-proxy-4v7ql\" (UID: \"f0dc82b0-aec1-4bae-8a4b-6821280c849d\") " pod="kube-system/kube-proxy-4v7ql" Jan 13 20:08:06.926280 kubelet[3527]: I0113 20:08:06.926120 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0dc82b0-aec1-4bae-8a4b-6821280c849d-kube-proxy\") pod \"kube-proxy-4v7ql\" (UID: \"f0dc82b0-aec1-4bae-8a4b-6821280c849d\") " pod="kube-system/kube-proxy-4v7ql" Jan 13 20:08:06.926280 kubelet[3527]: I0113 20:08:06.926168 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0dc82b0-aec1-4bae-8a4b-6821280c849d-lib-modules\") pod \"kube-proxy-4v7ql\" (UID: \"f0dc82b0-aec1-4bae-8a4b-6821280c849d\") " pod="kube-system/kube-proxy-4v7ql" Jan 13 20:08:06.926280 kubelet[3527]: I0113 20:08:06.926212 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72x5l\" (UniqueName: \"kubernetes.io/projected/f0dc82b0-aec1-4bae-8a4b-6821280c849d-kube-api-access-72x5l\") pod \"kube-proxy-4v7ql\" (UID: \"f0dc82b0-aec1-4bae-8a4b-6821280c849d\") " pod="kube-system/kube-proxy-4v7ql" Jan 13 20:08:06.936923 kubelet[3527]: I0113 20:08:06.936189 3527 topology_manager.go:215] "Topology Admit Handler" podUID="f9469757-9c55-49e2-90d4-99afad8be6e2" podNamespace="kube-system" podName="cilium-qbfwf" Jan 13 20:08:06.957286 systemd[1]: Created slice kubepods-burstable-podf9469757_9c55_49e2_90d4_99afad8be6e2.slice - libcontainer container kubepods-burstable-podf9469757_9c55_49e2_90d4_99afad8be6e2.slice. 
Jan 13 20:08:07.127838 kubelet[3527]: I0113 20:08:07.127775 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cni-path\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128002 kubelet[3527]: I0113 20:08:07.127856 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-hostproc\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128002 kubelet[3527]: I0113 20:08:07.127913 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-host-proc-sys-net\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128002 kubelet[3527]: I0113 20:08:07.127970 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-host-proc-sys-kernel\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128165 kubelet[3527]: I0113 20:08:07.128017 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-cgroup\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128165 kubelet[3527]: I0113 20:08:07.128063 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-lib-modules\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128165 kubelet[3527]: I0113 20:08:07.128107 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9469757-9c55-49e2-90d4-99afad8be6e2-clustermesh-secrets\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128165 kubelet[3527]: I0113 20:08:07.128155 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xss89\" (UniqueName: \"kubernetes.io/projected/f9469757-9c55-49e2-90d4-99afad8be6e2-kube-api-access-xss89\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128377 kubelet[3527]: I0113 20:08:07.128196 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-etc-cni-netd\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128377 kubelet[3527]: I0113 20:08:07.128239 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-config-path\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128377 kubelet[3527]: I0113 20:08:07.128286 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-run\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128377 kubelet[3527]: I0113 20:08:07.128330 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-bpf-maps\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128377 kubelet[3527]: I0113 20:08:07.128372 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-xtables-lock\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.128629 kubelet[3527]: I0113 20:08:07.128417 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9469757-9c55-49e2-90d4-99afad8be6e2-hubble-tls\") pod \"cilium-qbfwf\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " pod="kube-system/cilium-qbfwf" Jan 13 20:08:07.186081 kubelet[3527]: I0113 20:08:07.185904 3527 topology_manager.go:215] "Topology Admit Handler" podUID="81daafb6-d970-4ba4-8ba2-749a05666538" podNamespace="kube-system" podName="cilium-operator-5cc964979-7zjdt" Jan 13 20:08:07.207300 systemd[1]: Created slice kubepods-besteffort-pod81daafb6_d970_4ba4_8ba2_749a05666538.slice - libcontainer container kubepods-besteffort-pod81daafb6_d970_4ba4_8ba2_749a05666538.slice. Jan 13 20:08:07.241883 containerd[1936]: time="2025-01-13T20:08:07.239924155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4v7ql,Uid:f0dc82b0-aec1-4bae-8a4b-6821280c849d,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:07.333393 kubelet[3527]: I0113 20:08:07.333170 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81daafb6-d970-4ba4-8ba2-749a05666538-cilium-config-path\") pod \"cilium-operator-5cc964979-7zjdt\" (UID: \"81daafb6-d970-4ba4-8ba2-749a05666538\") " pod="kube-system/cilium-operator-5cc964979-7zjdt" Jan 13 20:08:07.333393 kubelet[3527]: I0113 20:08:07.333316 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94jv9\" (UniqueName: \"kubernetes.io/projected/81daafb6-d970-4ba4-8ba2-749a05666538-kube-api-access-94jv9\") pod \"cilium-operator-5cc964979-7zjdt\" (UID: \"81daafb6-d970-4ba4-8ba2-749a05666538\") " pod="kube-system/cilium-operator-5cc964979-7zjdt" Jan 13 20:08:07.381832 containerd[1936]: time="2025-01-13T20:08:07.381090344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:07.381832 containerd[1936]: time="2025-01-13T20:08:07.381179324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:07.381832 containerd[1936]: time="2025-01-13T20:08:07.381205148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:07.381832 containerd[1936]: time="2025-01-13T20:08:07.381335660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:07.450018 systemd[1]: Started cri-containerd-43a6326af23d2a0a9b8248a7318987af25af078002437ce704cc766bf755ebad.scope - libcontainer container 43a6326af23d2a0a9b8248a7318987af25af078002437ce704cc766bf755ebad. Jan 13 20:08:07.519444 containerd[1936]: time="2025-01-13T20:08:07.519371277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4v7ql,Uid:f0dc82b0-aec1-4bae-8a4b-6821280c849d,Namespace:kube-system,Attempt:0,} returns sandbox id \"43a6326af23d2a0a9b8248a7318987af25af078002437ce704cc766bf755ebad\"" Jan 13 20:08:07.520871 containerd[1936]: time="2025-01-13T20:08:07.520643301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-7zjdt,Uid:81daafb6-d970-4ba4-8ba2-749a05666538,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:07.527502 containerd[1936]: time="2025-01-13T20:08:07.527386557Z" level=info msg="CreateContainer within sandbox \"43a6326af23d2a0a9b8248a7318987af25af078002437ce704cc766bf755ebad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:08:07.569993 containerd[1936]: time="2025-01-13T20:08:07.569873073Z" level=info msg="CreateContainer within sandbox \"43a6326af23d2a0a9b8248a7318987af25af078002437ce704cc766bf755ebad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c00698f4f1f4eaa3be4ab08f1ed2c8829f7c9f40ee203695d952c92f24984338\"" Jan 13 20:08:07.570188 containerd[1936]: time="2025-01-13T20:08:07.570082389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbfwf,Uid:f9469757-9c55-49e2-90d4-99afad8be6e2,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:07.571783 containerd[1936]: time="2025-01-13T20:08:07.571343565Z" level=info msg="StartContainer for \"c00698f4f1f4eaa3be4ab08f1ed2c8829f7c9f40ee203695d952c92f24984338\"" Jan 13 20:08:07.585533 containerd[1936]: time="2025-01-13T20:08:07.585416817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:07.585931 containerd[1936]: time="2025-01-13T20:08:07.585851925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:07.586560 containerd[1936]: time="2025-01-13T20:08:07.585897597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:07.588232 containerd[1936]: time="2025-01-13T20:08:07.588045645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:07.631043 systemd[1]: Started cri-containerd-a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a.scope - libcontainer container a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a. 
Jan 13 20:08:07.661690 containerd[1936]: time="2025-01-13T20:08:07.658568386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:07.661961 containerd[1936]: time="2025-01-13T20:08:07.661015630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:07.661961 containerd[1936]: time="2025-01-13T20:08:07.661057018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:07.662432 containerd[1936]: time="2025-01-13T20:08:07.662268382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:07.665032 systemd[1]: Started cri-containerd-c00698f4f1f4eaa3be4ab08f1ed2c8829f7c9f40ee203695d952c92f24984338.scope - libcontainer container c00698f4f1f4eaa3be4ab08f1ed2c8829f7c9f40ee203695d952c92f24984338. Jan 13 20:08:07.717840 systemd[1]: Started cri-containerd-e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca.scope - libcontainer container e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca. Jan 13 20:08:07.799234 containerd[1936]: time="2025-01-13T20:08:07.798993466Z" level=info msg="StartContainer for \"c00698f4f1f4eaa3be4ab08f1ed2c8829f7c9f40ee203695d952c92f24984338\" returns successfully" Jan 13 20:08:07.816523 containerd[1936]: time="2025-01-13T20:08:07.816423202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-7zjdt,Uid:81daafb6-d970-4ba4-8ba2-749a05666538,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\"" Jan 13 20:08:07.818876 containerd[1936]: time="2025-01-13T20:08:07.818654506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbfwf,Uid:f9469757-9c55-49e2-90d4-99afad8be6e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\"" Jan 13 20:08:07.826532 containerd[1936]: time="2025-01-13T20:08:07.825773734Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:08:08.344480 kubelet[3527]: I0113 20:08:08.344409 3527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4v7ql" podStartSLOduration=2.344319273 podStartE2EDuration="2.344319273s" podCreationTimestamp="2025-01-13 20:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:08.342401817 +0000 UTC m=+13.405927748" watchObservedRunningTime="2025-01-13 20:08:08.344319273 +0000 UTC m=+13.407845204" Jan 13 20:08:13.898382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718357400.mount: Deactivated successfully. 
Jan 13 20:08:16.433397 containerd[1936]: time="2025-01-13T20:08:16.432937661Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:16.435536 containerd[1936]: time="2025-01-13T20:08:16.435446825Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651534" Jan 13 20:08:16.437524 containerd[1936]: time="2025-01-13T20:08:16.437436017Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:16.441450 containerd[1936]: time="2025-01-13T20:08:16.440892149Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.615054539s" Jan 13 20:08:16.441450 containerd[1936]: time="2025-01-13T20:08:16.440947205Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:08:16.442929 containerd[1936]: time="2025-01-13T20:08:16.442295597Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:08:16.445331 containerd[1936]: time="2025-01-13T20:08:16.445097525Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:08:16.471346 containerd[1936]: time="2025-01-13T20:08:16.471291737Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\"" Jan 13 20:08:16.473930 containerd[1936]: time="2025-01-13T20:08:16.472428005Z" level=info msg="StartContainer for \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\"" Jan 13 20:08:16.533005 systemd[1]: Started cri-containerd-a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947.scope - libcontainer container a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947. Jan 13 20:08:16.584711 containerd[1936]: time="2025-01-13T20:08:16.584627130Z" level=info msg="StartContainer for \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\" returns successfully" Jan 13 20:08:16.607172 systemd[1]: cri-containerd-a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947.scope: Deactivated successfully. Jan 13 20:08:17.460815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947-rootfs.mount: Deactivated successfully. 
Jan 13 20:08:17.631330 containerd[1936]: time="2025-01-13T20:08:17.631226935Z" level=info msg="shim disconnected" id=a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947 namespace=k8s.io Jan 13 20:08:17.631919 containerd[1936]: time="2025-01-13T20:08:17.631329415Z" level=warning msg="cleaning up after shim disconnected" id=a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947 namespace=k8s.io Jan 13 20:08:17.631919 containerd[1936]: time="2025-01-13T20:08:17.631353475Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:18.339120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956423126.mount: Deactivated successfully. Jan 13 20:08:18.370180 containerd[1936]: time="2025-01-13T20:08:18.370073347Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:08:18.395796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1545572453.mount: Deactivated successfully. Jan 13 20:08:18.418904 containerd[1936]: time="2025-01-13T20:08:18.418833187Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\"" Jan 13 20:08:18.421088 containerd[1936]: time="2025-01-13T20:08:18.420950059Z" level=info msg="StartContainer for \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\"" Jan 13 20:08:18.493419 systemd[1]: Started cri-containerd-ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8.scope - libcontainer container ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8. Jan 13 20:08:18.567640 containerd[1936]: time="2025-01-13T20:08:18.567387896Z" level=info msg="StartContainer for \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\" returns successfully" Jan 13 20:08:18.592552 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:08:18.593158 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:18.593308 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:18.607339 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:18.608093 systemd[1]: cri-containerd-ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8.scope: Deactivated successfully. Jan 13 20:08:18.660000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:18.681053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8-rootfs.mount: Deactivated successfully. 
Jan 13 20:08:18.704237 containerd[1936]: time="2025-01-13T20:08:18.704162264Z" level=info msg="shim disconnected" id=ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8 namespace=k8s.io Jan 13 20:08:18.705505 containerd[1936]: time="2025-01-13T20:08:18.704974760Z" level=warning msg="cleaning up after shim disconnected" id=ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8 namespace=k8s.io Jan 13 20:08:18.705505 containerd[1936]: time="2025-01-13T20:08:18.705018464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:19.188687 containerd[1936]: time="2025-01-13T20:08:19.188603023Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:19.190479 containerd[1936]: time="2025-01-13T20:08:19.190409467Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138290" Jan 13 20:08:19.192958 containerd[1936]: time="2025-01-13T20:08:19.192867103Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:19.196191 containerd[1936]: time="2025-01-13T20:08:19.196001047Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.753623322s" Jan 13 20:08:19.196191 containerd[1936]: time="2025-01-13T20:08:19.196062115Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:08:19.199587 containerd[1936]: time="2025-01-13T20:08:19.199305175Z" level=info msg="CreateContainer within sandbox \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:08:19.233639 containerd[1936]: time="2025-01-13T20:08:19.233563315Z" level=info msg="CreateContainer within sandbox \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca\"" Jan 13 20:08:19.236366 containerd[1936]: time="2025-01-13T20:08:19.235615231Z" level=info msg="StartContainer for \"31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca\"" Jan 13 20:08:19.282008 systemd[1]: Started cri-containerd-31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca.scope - libcontainer container 31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca. 
Jan 13 20:08:19.328154 containerd[1936]: time="2025-01-13T20:08:19.328074248Z" level=info msg="StartContainer for \"31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca\" returns successfully" Jan 13 20:08:19.387746 containerd[1936]: time="2025-01-13T20:08:19.386647796Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:08:19.432821 containerd[1936]: time="2025-01-13T20:08:19.431940428Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\"" Jan 13 20:08:19.434788 containerd[1936]: time="2025-01-13T20:08:19.434249288Z" level=info msg="StartContainer for \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\"" Jan 13 20:08:19.438127 kubelet[3527]: I0113 20:08:19.438051 3527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-7zjdt" podStartSLOduration=1.066089923 podStartE2EDuration="12.437992616s" podCreationTimestamp="2025-01-13 20:08:07 +0000 UTC" firstStartedPulling="2025-01-13 20:08:07.82486693 +0000 UTC m=+12.888392849" lastFinishedPulling="2025-01-13 20:08:19.196769611 +0000 UTC m=+24.260295542" observedRunningTime="2025-01-13 20:08:19.437530004 +0000 UTC m=+24.501055935" watchObservedRunningTime="2025-01-13 20:08:19.437992616 +0000 UTC m=+24.501518535" Jan 13 20:08:19.538108 systemd[1]: Started cri-containerd-20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9.scope - libcontainer container 20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9. Jan 13 20:08:19.616917 containerd[1936]: time="2025-01-13T20:08:19.616723485Z" level=info msg="StartContainer for \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\" returns successfully" Jan 13 20:08:19.625849 systemd[1]: cri-containerd-20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9.scope: Deactivated successfully. Jan 13 20:08:19.700694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9-rootfs.mount: Deactivated successfully. 
Jan 13 20:08:19.783701 containerd[1936]: time="2025-01-13T20:08:19.783597322Z" level=info msg="shim disconnected" id=20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9 namespace=k8s.io Jan 13 20:08:19.783701 containerd[1936]: time="2025-01-13T20:08:19.783695854Z" level=warning msg="cleaning up after shim disconnected" id=20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9 namespace=k8s.io Jan 13 20:08:19.784442 containerd[1936]: time="2025-01-13T20:08:19.783720490Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:20.411153 containerd[1936]: time="2025-01-13T20:08:20.409037529Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:08:20.450825 containerd[1936]: time="2025-01-13T20:08:20.450769497Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\"" Jan 13 20:08:20.451770 containerd[1936]: time="2025-01-13T20:08:20.451724457Z" level=info msg="StartContainer for \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\"" Jan 13 20:08:20.464336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438672796.mount: Deactivated successfully. Jan 13 20:08:20.566930 systemd[1]: Started cri-containerd-ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432.scope - libcontainer container ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432. Jan 13 20:08:20.668746 systemd[1]: cri-containerd-ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432.scope: Deactivated successfully. Jan 13 20:08:20.675351 containerd[1936]: time="2025-01-13T20:08:20.674398378Z" level=info msg="StartContainer for \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\" returns successfully" Jan 13 20:08:20.718344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432-rootfs.mount: Deactivated successfully. 
Jan 13 20:08:20.729348 containerd[1936]: time="2025-01-13T20:08:20.728387254Z" level=info msg="shim disconnected" id=ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432 namespace=k8s.io Jan 13 20:08:20.729348 containerd[1936]: time="2025-01-13T20:08:20.729153934Z" level=warning msg="cleaning up after shim disconnected" id=ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432 namespace=k8s.io Jan 13 20:08:20.729348 containerd[1936]: time="2025-01-13T20:08:20.729184714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:21.408586 containerd[1936]: time="2025-01-13T20:08:21.408282598Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:08:21.448008 containerd[1936]: time="2025-01-13T20:08:21.447916714Z" level=info msg="CreateContainer within sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\"" Jan 13 20:08:21.451000 containerd[1936]: time="2025-01-13T20:08:21.449146654Z" level=info msg="StartContainer for \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\"" Jan 13 20:08:21.463350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768429342.mount: Deactivated successfully. Jan 13 20:08:21.520998 systemd[1]: Started cri-containerd-1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f.scope - libcontainer container 1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f. Jan 13 20:08:21.683368 containerd[1936]: time="2025-01-13T20:08:21.683205359Z" level=info msg="StartContainer for \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\" returns successfully" Jan 13 20:08:21.745492 systemd[1]: run-containerd-runc-k8s.io-1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f-runc.RGt5qB.mount: Deactivated successfully. Jan 13 20:08:21.923319 kubelet[3527]: I0113 20:08:21.923259 3527 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:08:21.970789 kubelet[3527]: I0113 20:08:21.970242 3527 topology_manager.go:215] "Topology Admit Handler" podUID="cfc1d271-5700-4db9-9b46-ca518e090652" podNamespace="kube-system" podName="coredns-76f75df574-l6q7b" Jan 13 20:08:21.987698 systemd[1]: Created slice kubepods-burstable-podcfc1d271_5700_4db9_9b46_ca518e090652.slice - libcontainer container kubepods-burstable-podcfc1d271_5700_4db9_9b46_ca518e090652.slice. Jan 13 20:08:21.999605 kubelet[3527]: I0113 20:08:21.999539 3527 topology_manager.go:215] "Topology Admit Handler" podUID="0f22d9a4-b7f0-4767-86fa-09b48e2c01ab" podNamespace="kube-system" podName="coredns-76f75df574-qzc96" Jan 13 20:08:22.015456 systemd[1]: Created slice kubepods-burstable-pod0f22d9a4_b7f0_4767_86fa_09b48e2c01ab.slice - libcontainer container kubepods-burstable-pod0f22d9a4_b7f0_4767_86fa_09b48e2c01ab.slice. 
Jan 13 20:08:22.147392 kubelet[3527]: I0113 20:08:22.147232 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f22d9a4-b7f0-4767-86fa-09b48e2c01ab-config-volume\") pod \"coredns-76f75df574-qzc96\" (UID: \"0f22d9a4-b7f0-4767-86fa-09b48e2c01ab\") " pod="kube-system/coredns-76f75df574-qzc96" Jan 13 20:08:22.147392 kubelet[3527]: I0113 20:08:22.147382 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzhdt\" (UniqueName: \"kubernetes.io/projected/cfc1d271-5700-4db9-9b46-ca518e090652-kube-api-access-xzhdt\") pod \"coredns-76f75df574-l6q7b\" (UID: \"cfc1d271-5700-4db9-9b46-ca518e090652\") " pod="kube-system/coredns-76f75df574-l6q7b" Jan 13 20:08:22.147620 kubelet[3527]: I0113 20:08:22.147446 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvj97\" (UniqueName: \"kubernetes.io/projected/0f22d9a4-b7f0-4767-86fa-09b48e2c01ab-kube-api-access-lvj97\") pod \"coredns-76f75df574-qzc96\" (UID: \"0f22d9a4-b7f0-4767-86fa-09b48e2c01ab\") " pod="kube-system/coredns-76f75df574-qzc96" Jan 13 20:08:22.147620 kubelet[3527]: I0113 20:08:22.147496 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfc1d271-5700-4db9-9b46-ca518e090652-config-volume\") pod \"coredns-76f75df574-l6q7b\" (UID: \"cfc1d271-5700-4db9-9b46-ca518e090652\") " pod="kube-system/coredns-76f75df574-l6q7b" Jan 13 20:08:22.299042 containerd[1936]: time="2025-01-13T20:08:22.298219222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l6q7b,Uid:cfc1d271-5700-4db9-9b46-ca518e090652,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:22.324891 containerd[1936]: time="2025-01-13T20:08:22.324826138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qzc96,Uid:0f22d9a4-b7f0-4767-86fa-09b48e2c01ab,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:24.643370 systemd-networkd[1853]: cilium_host: Link UP Jan 13 20:08:24.645833 systemd-networkd[1853]: cilium_net: Link UP Jan 13 20:08:24.646319 systemd-networkd[1853]: cilium_net: Gained carrier Jan 13 20:08:24.646699 systemd-networkd[1853]: cilium_host: Gained carrier Jan 13 20:08:24.646886 (udev-worker)[4311]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:24.649928 (udev-worker)[4309]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:08:24.825089 systemd-networkd[1853]: cilium_vxlan: Link UP Jan 13 20:08:24.825108 systemd-networkd[1853]: cilium_vxlan: Gained carrier Jan 13 20:08:25.315714 kernel: NET: Registered PF_ALG protocol family Jan 13 20:08:25.345877 systemd-networkd[1853]: cilium_host: Gained IPv6LL Jan 13 20:08:25.474005 systemd-networkd[1853]: cilium_net: Gained IPv6LL Jan 13 20:08:26.049938 systemd-networkd[1853]: cilium_vxlan: Gained IPv6LL Jan 13 20:08:26.672187 systemd-networkd[1853]: lxc_health: Link UP Jan 13 20:08:26.687279 systemd-networkd[1853]: lxc_health: Gained carrier Jan 13 20:08:27.374651 systemd-networkd[1853]: lxce8e4cee0845b: Link UP Jan 13 20:08:27.385765 kernel: eth0: renamed from tmp1e279 Jan 13 20:08:27.390283 systemd-networkd[1853]: lxce8e4cee0845b: Gained carrier Jan 13 20:08:27.442713 kernel: eth0: renamed from tmpd948a Jan 13 20:08:27.447287 systemd-networkd[1853]: lxc44637855927a: Link UP Jan 13 20:08:27.449742 systemd-networkd[1853]: lxc44637855927a: Gained carrier Jan 13 20:08:27.613368 kubelet[3527]: I0113 20:08:27.613297 3527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qbfwf" podStartSLOduration=12.995510818 podStartE2EDuration="21.613239389s" podCreationTimestamp="2025-01-13 20:08:06 +0000 UTC" firstStartedPulling="2025-01-13 20:08:07.82363369 +0000 UTC m=+12.887159597" lastFinishedPulling="2025-01-13 20:08:16.441362225 +0000 UTC m=+21.504888168" observedRunningTime="2025-01-13 20:08:22.446289611 +0000 UTC m=+27.509815554" watchObservedRunningTime="2025-01-13 20:08:27.613239389 +0000 UTC m=+32.676765320" Jan 13 20:08:27.905910 systemd-networkd[1853]: lxc_health: Gained IPv6LL Jan 13 20:08:28.930069 systemd-networkd[1853]: lxce8e4cee0845b: Gained IPv6LL Jan 13 20:08:29.443794 systemd-networkd[1853]: lxc44637855927a: Gained IPv6LL Jan 13 20:08:31.479884 ntpd[1911]: Listen normally on 8 cilium_host 192.168.0.242:123 Jan 13 20:08:31.482842 ntpd[1911]: 13 Jan 20:08:31 ntpd[1911]: Listen normally on 8 cilium_host 192.168.0.242:123 Jan 13 20:08:31.482842 ntpd[1911]: 13 Jan 20:08:31 ntpd[1911]: Listen normally on 9 cilium_net [fe80::bc58:eff:fe13:20f9%4]:123 Jan 13 20:08:31.482842 ntpd[1911]: 13 Jan 20:08:31 ntpd[1911]: Listen normally on 10 cilium_host [fe80::90e5:33ff:fe1e:bbe3%5]:123 Jan 13 20:08:31.482842 ntpd[1911]: 13 Jan 20:08:31 ntpd[1911]: Listen normally on 11 cilium_vxlan [fe80::c479:88ff:fe6c:c602%6]:123 Jan 13 20:08:31.482842 ntpd[1911]: 13 Jan 20:08:31 ntpd[1911]: Listen normally on 12 lxc_health [fe80::5454:65ff:fe74:9ec1%8]:123 Jan 13 20:08:31.482842 ntpd[1911]: 13 Jan 20:08:31 ntpd[1911]: Listen normally on 13 lxce8e4cee0845b [fe80::dc47:99ff:fe0f:4b7%10]:123 Jan 13 20:08:31.482842 ntpd[1911]: 13 Jan 20:08:31 ntpd[1911]: Listen normally on 14 lxc44637855927a [fe80::2c57:33ff:fe57:d31%12]:123 Jan 13 20:08:31.480094 ntpd[1911]: Listen normally on 9 cilium_net [fe80::bc58:eff:fe13:20f9%4]:123 Jan 13 20:08:31.480198 ntpd[1911]: Listen normally on 10 cilium_host [fe80::90e5:33ff:fe1e:bbe3%5]:123 Jan 13 20:08:31.480318 ntpd[1911]: Listen normally on 11 cilium_vxlan [fe80::c479:88ff:fe6c:c602%6]:123 Jan 13 20:08:31.480395 ntpd[1911]: Listen normally on 12 lxc_health [fe80::5454:65ff:fe74:9ec1%8]:123 Jan 13 20:08:31.480465 ntpd[1911]: Listen normally on 13 lxce8e4cee0845b [fe80::dc47:99ff:fe0f:4b7%10]:123 Jan 13 20:08:31.480541 ntpd[1911]: Listen normally on 14 lxc44637855927a [fe80::2c57:33ff:fe57:d31%12]:123 Jan 13 20:08:31.649923 systemd[1]: Started sshd@7-172.31.29.220:22-139.178.68.195:51694.service - OpenSSH 
per-connection server daemon (139.178.68.195:51694). Jan 13 20:08:31.849303 kubelet[3527]: I0113 20:08:31.848322 3527 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:08:31.850426 sshd[4707]: Accepted publickey for core from 139.178.68.195 port 51694 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:31.857507 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:31.886102 systemd-logind[1918]: New session 8 of user core. Jan 13 20:08:31.891005 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:08:32.197732 sshd[4709]: Connection closed by 139.178.68.195 port 51694 Jan 13 20:08:32.196626 sshd-session[4707]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:32.204118 systemd-logind[1918]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:08:32.205432 systemd[1]: sshd@7-172.31.29.220:22-139.178.68.195:51694.service: Deactivated successfully. Jan 13 20:08:32.211022 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:08:32.216707 systemd-logind[1918]: Removed session 8. Jan 13 20:08:35.951215 containerd[1936]: time="2025-01-13T20:08:35.951046430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:35.951215 containerd[1936]: time="2025-01-13T20:08:35.951157214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:35.956782 containerd[1936]: time="2025-01-13T20:08:35.954332162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:35.959378 containerd[1936]: time="2025-01-13T20:08:35.957306086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:35.962934 containerd[1936]: time="2025-01-13T20:08:35.962635478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:35.962934 containerd[1936]: time="2025-01-13T20:08:35.962773130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:35.962934 containerd[1936]: time="2025-01-13T20:08:35.962840150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:35.964350 containerd[1936]: time="2025-01-13T20:08:35.963997586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:36.037424 systemd[1]: Started cri-containerd-1e279d9135e65b22f2284ee897b1e758189694799708d3bc9d7a08141ba083d2.scope - libcontainer container 1e279d9135e65b22f2284ee897b1e758189694799708d3bc9d7a08141ba083d2. Jan 13 20:08:36.072378 systemd[1]: Started cri-containerd-d948afab8ca6d6bfd5ea86c06cc75621a9024da00f7a47debfd1c71ad7c7dcdc.scope - libcontainer container d948afab8ca6d6bfd5ea86c06cc75621a9024da00f7a47debfd1c71ad7c7dcdc. 
Jan 13 20:08:36.228880 containerd[1936]: time="2025-01-13T20:08:36.228369479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l6q7b,Uid:cfc1d271-5700-4db9-9b46-ca518e090652,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e279d9135e65b22f2284ee897b1e758189694799708d3bc9d7a08141ba083d2\"" Jan 13 20:08:36.240789 containerd[1936]: time="2025-01-13T20:08:36.240203592Z" level=info msg="CreateContainer within sandbox \"1e279d9135e65b22f2284ee897b1e758189694799708d3bc9d7a08141ba083d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:08:36.252823 containerd[1936]: time="2025-01-13T20:08:36.252630804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qzc96,Uid:0f22d9a4-b7f0-4767-86fa-09b48e2c01ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"d948afab8ca6d6bfd5ea86c06cc75621a9024da00f7a47debfd1c71ad7c7dcdc\"" Jan 13 20:08:36.266337 containerd[1936]: time="2025-01-13T20:08:36.266158572Z" level=info msg="CreateContainer within sandbox \"d948afab8ca6d6bfd5ea86c06cc75621a9024da00f7a47debfd1c71ad7c7dcdc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:08:36.309817 containerd[1936]: time="2025-01-13T20:08:36.309591768Z" level=info msg="CreateContainer within sandbox \"1e279d9135e65b22f2284ee897b1e758189694799708d3bc9d7a08141ba083d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93b5cdc322926dbf647c8628144c3650e891df699ef7696ac0e3203f23c11eb2\"" Jan 13 20:08:36.311783 containerd[1936]: time="2025-01-13T20:08:36.311508720Z" level=info msg="StartContainer for \"93b5cdc322926dbf647c8628144c3650e891df699ef7696ac0e3203f23c11eb2\"" Jan 13 20:08:36.330075 containerd[1936]: time="2025-01-13T20:08:36.329811108Z" level=info msg="CreateContainer within sandbox \"d948afab8ca6d6bfd5ea86c06cc75621a9024da00f7a47debfd1c71ad7c7dcdc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99b6228351be54a736ab253a2c5d4c4dd56944a932910f4732aa84f9599a34f3\"" Jan 13 20:08:36.333708 containerd[1936]: time="2025-01-13T20:08:36.332720640Z" level=info msg="StartContainer for \"99b6228351be54a736ab253a2c5d4c4dd56944a932910f4732aa84f9599a34f3\"" Jan 13 20:08:36.402909 systemd[1]: Started cri-containerd-93b5cdc322926dbf647c8628144c3650e891df699ef7696ac0e3203f23c11eb2.scope - libcontainer container 93b5cdc322926dbf647c8628144c3650e891df699ef7696ac0e3203f23c11eb2. Jan 13 20:08:36.439990 systemd[1]: Started cri-containerd-99b6228351be54a736ab253a2c5d4c4dd56944a932910f4732aa84f9599a34f3.scope - libcontainer container 99b6228351be54a736ab253a2c5d4c4dd56944a932910f4732aa84f9599a34f3. Jan 13 20:08:36.505779 containerd[1936]: time="2025-01-13T20:08:36.505495681Z" level=info msg="StartContainer for \"93b5cdc322926dbf647c8628144c3650e891df699ef7696ac0e3203f23c11eb2\" returns successfully" Jan 13 20:08:36.527294 containerd[1936]: time="2025-01-13T20:08:36.527240113Z" level=info msg="StartContainer for \"99b6228351be54a736ab253a2c5d4c4dd56944a932910f4732aa84f9599a34f3\" returns successfully" Jan 13 20:08:36.973174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340983313.mount: Deactivated successfully. Jan 13 20:08:37.240163 systemd[1]: Started sshd@8-172.31.29.220:22-139.178.68.195:42716.service - OpenSSH per-connection server daemon (139.178.68.195:42716). 
Jan 13 20:08:37.437091 sshd[4890]: Accepted publickey for core from 139.178.68.195 port 42716 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:37.439735 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:37.447233 systemd-logind[1918]: New session 9 of user core. Jan 13 20:08:37.454958 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:08:37.514234 kubelet[3527]: I0113 20:08:37.513405 3527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qzc96" podStartSLOduration=30.513347918 podStartE2EDuration="30.513347918s" podCreationTimestamp="2025-01-13 20:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:37.51162725 +0000 UTC m=+42.575153205" watchObservedRunningTime="2025-01-13 20:08:37.513347918 +0000 UTC m=+42.576873837" Jan 13 20:08:37.537570 kubelet[3527]: I0113 20:08:37.537506 3527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l6q7b" podStartSLOduration=30.53741975 podStartE2EDuration="30.53741975s" podCreationTimestamp="2025-01-13 20:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:37.53459519 +0000 UTC m=+42.598121145" watchObservedRunningTime="2025-01-13 20:08:37.53741975 +0000 UTC m=+42.600945669" Jan 13 20:08:37.771828 sshd[4892]: Connection closed by 139.178.68.195 port 42716 Jan 13 20:08:37.770972 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:37.777246 systemd-logind[1918]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:08:37.779315 systemd[1]: sshd@8-172.31.29.220:22-139.178.68.195:42716.service: Deactivated successfully. Jan 13 20:08:37.783832 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:08:37.787029 systemd-logind[1918]: Removed session 9. Jan 13 20:08:42.807187 systemd[1]: Started sshd@9-172.31.29.220:22-139.178.68.195:42730.service - OpenSSH per-connection server daemon (139.178.68.195:42730). Jan 13 20:08:42.995152 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 42730 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:42.997726 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:43.005579 systemd-logind[1918]: New session 10 of user core. Jan 13 20:08:43.012957 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:08:43.254888 sshd[4915]: Connection closed by 139.178.68.195 port 42730 Jan 13 20:08:43.254772 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:43.261529 systemd[1]: sshd@9-172.31.29.220:22-139.178.68.195:42730.service: Deactivated successfully. Jan 13 20:08:43.261731 systemd-logind[1918]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:08:43.266508 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:08:43.268150 systemd-logind[1918]: Removed session 10. Jan 13 20:08:48.297185 systemd[1]: Started sshd@10-172.31.29.220:22-139.178.68.195:54034.service - OpenSSH per-connection server daemon (139.178.68.195:54034). 
Jan 13 20:08:48.482253 sshd[4931]: Accepted publickey for core from 139.178.68.195 port 54034 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:48.484652 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:48.492760 systemd-logind[1918]: New session 11 of user core. Jan 13 20:08:48.503062 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:08:48.766704 sshd[4933]: Connection closed by 139.178.68.195 port 54034 Jan 13 20:08:48.765623 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:48.774386 systemd[1]: sshd@10-172.31.29.220:22-139.178.68.195:54034.service: Deactivated successfully. Jan 13 20:08:48.782019 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:08:48.789361 systemd-logind[1918]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:08:48.818974 systemd[1]: Started sshd@11-172.31.29.220:22-139.178.68.195:54040.service - OpenSSH per-connection server daemon (139.178.68.195:54040). Jan 13 20:08:48.820866 systemd-logind[1918]: Removed session 11. Jan 13 20:08:49.018612 sshd[4945]: Accepted publickey for core from 139.178.68.195 port 54040 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:49.021392 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:49.029848 systemd-logind[1918]: New session 12 of user core. Jan 13 20:08:49.034925 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:08:49.350116 sshd[4947]: Connection closed by 139.178.68.195 port 54040 Jan 13 20:08:49.350967 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:49.363773 systemd[1]: sshd@11-172.31.29.220:22-139.178.68.195:54040.service: Deactivated successfully. Jan 13 20:08:49.370114 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:08:49.373134 systemd-logind[1918]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:08:49.400385 systemd[1]: Started sshd@12-172.31.29.220:22-139.178.68.195:54046.service - OpenSSH per-connection server daemon (139.178.68.195:54046). Jan 13 20:08:49.402554 systemd-logind[1918]: Removed session 12. Jan 13 20:08:49.590984 sshd[4956]: Accepted publickey for core from 139.178.68.195 port 54046 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:49.594101 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:49.602780 systemd-logind[1918]: New session 13 of user core. Jan 13 20:08:49.615214 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:08:49.855513 sshd[4958]: Connection closed by 139.178.68.195 port 54046 Jan 13 20:08:49.856208 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:49.863508 systemd[1]: sshd@12-172.31.29.220:22-139.178.68.195:54046.service: Deactivated successfully. Jan 13 20:08:49.867428 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:08:49.870528 systemd-logind[1918]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:08:49.872715 systemd-logind[1918]: Removed session 13. Jan 13 20:08:54.897183 systemd[1]: Started sshd@13-172.31.29.220:22-139.178.68.195:45934.service - OpenSSH per-connection server daemon (139.178.68.195:45934). 
Jan 13 20:08:55.090968 sshd[4969]: Accepted publickey for core from 139.178.68.195 port 45934 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:55.093685 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:55.102071 systemd-logind[1918]: New session 14 of user core. Jan 13 20:08:55.107951 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:08:55.364174 sshd[4971]: Connection closed by 139.178.68.195 port 45934 Jan 13 20:08:55.365103 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:55.371826 systemd-logind[1918]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:08:55.372613 systemd[1]: sshd@13-172.31.29.220:22-139.178.68.195:45934.service: Deactivated successfully. Jan 13 20:08:55.377368 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:08:55.380414 systemd-logind[1918]: Removed session 14. Jan 13 20:09:00.403222 systemd[1]: Started sshd@14-172.31.29.220:22-139.178.68.195:45942.service - OpenSSH per-connection server daemon (139.178.68.195:45942). Jan 13 20:09:00.591987 sshd[4984]: Accepted publickey for core from 139.178.68.195 port 45942 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:00.595828 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:00.604573 systemd-logind[1918]: New session 15 of user core. Jan 13 20:09:00.609952 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:09:00.854510 sshd[4986]: Connection closed by 139.178.68.195 port 45942 Jan 13 20:09:00.855382 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:00.861721 systemd[1]: sshd@14-172.31.29.220:22-139.178.68.195:45942.service: Deactivated successfully. Jan 13 20:09:00.867936 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:09:00.869566 systemd-logind[1918]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:09:00.871757 systemd-logind[1918]: Removed session 15. Jan 13 20:09:05.895202 systemd[1]: Started sshd@15-172.31.29.220:22-139.178.68.195:50158.service - OpenSSH per-connection server daemon (139.178.68.195:50158). Jan 13 20:09:06.092134 sshd[4997]: Accepted publickey for core from 139.178.68.195 port 50158 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:06.094584 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:06.101765 systemd-logind[1918]: New session 16 of user core. Jan 13 20:09:06.108964 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:09:06.363140 sshd[4999]: Connection closed by 139.178.68.195 port 50158 Jan 13 20:09:06.364244 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:06.369588 systemd-logind[1918]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:09:06.370100 systemd[1]: sshd@15-172.31.29.220:22-139.178.68.195:50158.service: Deactivated successfully. Jan 13 20:09:06.374059 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:09:06.378609 systemd-logind[1918]: Removed session 16. Jan 13 20:09:11.403177 systemd[1]: Started sshd@16-172.31.29.220:22-139.178.68.195:50168.service - OpenSSH per-connection server daemon (139.178.68.195:50168). 
Jan 13 20:09:11.588584 sshd[5013]: Accepted publickey for core from 139.178.68.195 port 50168 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:11.591190 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:11.600033 systemd-logind[1918]: New session 17 of user core. Jan 13 20:09:11.606949 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:09:11.857906 sshd[5015]: Connection closed by 139.178.68.195 port 50168 Jan 13 20:09:11.857786 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:11.862995 systemd-logind[1918]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:09:11.863875 systemd[1]: sshd@16-172.31.29.220:22-139.178.68.195:50168.service: Deactivated successfully. Jan 13 20:09:11.868538 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:09:11.873006 systemd-logind[1918]: Removed session 17. Jan 13 20:09:11.900216 systemd[1]: Started sshd@17-172.31.29.220:22-139.178.68.195:50174.service - OpenSSH per-connection server daemon (139.178.68.195:50174). Jan 13 20:09:12.093892 sshd[5026]: Accepted publickey for core from 139.178.68.195 port 50174 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:12.096410 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:12.105914 systemd-logind[1918]: New session 18 of user core. Jan 13 20:09:12.112959 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:09:12.418755 sshd[5028]: Connection closed by 139.178.68.195 port 50174 Jan 13 20:09:12.419730 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:12.425015 systemd[1]: sshd@17-172.31.29.220:22-139.178.68.195:50174.service: Deactivated successfully. Jan 13 20:09:12.429739 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:09:12.433606 systemd-logind[1918]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:09:12.436109 systemd-logind[1918]: Removed session 18. Jan 13 20:09:12.455235 systemd[1]: Started sshd@18-172.31.29.220:22-139.178.68.195:50176.service - OpenSSH per-connection server daemon (139.178.68.195:50176). Jan 13 20:09:12.647428 sshd[5036]: Accepted publickey for core from 139.178.68.195 port 50176 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:12.650227 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:12.662031 systemd-logind[1918]: New session 19 of user core. Jan 13 20:09:12.663994 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:09:15.251598 sshd[5038]: Connection closed by 139.178.68.195 port 50176 Jan 13 20:09:15.252242 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:15.264572 systemd[1]: sshd@18-172.31.29.220:22-139.178.68.195:50176.service: Deactivated successfully. Jan 13 20:09:15.274015 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:09:15.276834 systemd-logind[1918]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:09:15.302320 systemd[1]: Started sshd@19-172.31.29.220:22-139.178.68.195:37530.service - OpenSSH per-connection server daemon (139.178.68.195:37530). Jan 13 20:09:15.307607 systemd-logind[1918]: Removed session 19. 
Jan 13 20:09:15.492127 sshd[5054]: Accepted publickey for core from 139.178.68.195 port 37530 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:15.494744 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:15.502862 systemd-logind[1918]: New session 20 of user core. Jan 13 20:09:15.508954 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:09:16.016242 sshd[5056]: Connection closed by 139.178.68.195 port 37530 Jan 13 20:09:16.017256 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:16.022902 systemd-logind[1918]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:09:16.023280 systemd[1]: sshd@19-172.31.29.220:22-139.178.68.195:37530.service: Deactivated successfully. Jan 13 20:09:16.027525 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:09:16.032424 systemd-logind[1918]: Removed session 20. Jan 13 20:09:16.060198 systemd[1]: Started sshd@20-172.31.29.220:22-139.178.68.195:37546.service - OpenSSH per-connection server daemon (139.178.68.195:37546). Jan 13 20:09:16.254116 sshd[5065]: Accepted publickey for core from 139.178.68.195 port 37546 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:16.256817 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:16.265774 systemd-logind[1918]: New session 21 of user core. Jan 13 20:09:16.272943 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:09:16.520974 sshd[5067]: Connection closed by 139.178.68.195 port 37546 Jan 13 20:09:16.522203 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:16.529078 systemd[1]: sshd@20-172.31.29.220:22-139.178.68.195:37546.service: Deactivated successfully. Jan 13 20:09:16.534168 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:09:16.536155 systemd-logind[1918]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:09:16.538253 systemd-logind[1918]: Removed session 21. Jan 13 20:09:21.557226 systemd[1]: Started sshd@21-172.31.29.220:22-139.178.68.195:37562.service - OpenSSH per-connection server daemon (139.178.68.195:37562). Jan 13 20:09:21.754052 sshd[5077]: Accepted publickey for core from 139.178.68.195 port 37562 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:21.756593 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:21.763745 systemd-logind[1918]: New session 22 of user core. Jan 13 20:09:21.769994 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:09:22.017219 sshd[5079]: Connection closed by 139.178.68.195 port 37562 Jan 13 20:09:22.018154 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:22.025091 systemd[1]: sshd@21-172.31.29.220:22-139.178.68.195:37562.service: Deactivated successfully. Jan 13 20:09:22.029265 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:09:22.031473 systemd-logind[1918]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:09:22.033694 systemd-logind[1918]: Removed session 22. Jan 13 20:09:27.056222 systemd[1]: Started sshd@22-172.31.29.220:22-139.178.68.195:45806.service - OpenSSH per-connection server daemon (139.178.68.195:45806). 
Jan 13 20:09:27.245921 sshd[5092]: Accepted publickey for core from 139.178.68.195 port 45806 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:27.248639 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:27.258910 systemd-logind[1918]: New session 23 of user core. Jan 13 20:09:27.269023 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:09:27.525630 sshd[5094]: Connection closed by 139.178.68.195 port 45806 Jan 13 20:09:27.526640 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:27.532045 systemd[1]: sshd@22-172.31.29.220:22-139.178.68.195:45806.service: Deactivated successfully. Jan 13 20:09:27.536240 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:09:27.540518 systemd-logind[1918]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:09:27.543228 systemd-logind[1918]: Removed session 23. Jan 13 20:09:32.565209 systemd[1]: Started sshd@23-172.31.29.220:22-139.178.68.195:45808.service - OpenSSH per-connection server daemon (139.178.68.195:45808). Jan 13 20:09:32.763759 sshd[5105]: Accepted publickey for core from 139.178.68.195 port 45808 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:32.766505 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:32.775022 systemd-logind[1918]: New session 24 of user core. Jan 13 20:09:32.784988 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:09:33.038625 sshd[5107]: Connection closed by 139.178.68.195 port 45808 Jan 13 20:09:33.040623 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:33.047094 systemd[1]: sshd@23-172.31.29.220:22-139.178.68.195:45808.service: Deactivated successfully. Jan 13 20:09:33.050861 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:09:33.052527 systemd-logind[1918]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:09:33.056421 systemd-logind[1918]: Removed session 24. Jan 13 20:09:38.078259 systemd[1]: Started sshd@24-172.31.29.220:22-139.178.68.195:41600.service - OpenSSH per-connection server daemon (139.178.68.195:41600). Jan 13 20:09:38.265674 sshd[5119]: Accepted publickey for core from 139.178.68.195 port 41600 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:38.268262 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:38.276542 systemd-logind[1918]: New session 25 of user core. Jan 13 20:09:38.289240 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:09:38.531880 sshd[5121]: Connection closed by 139.178.68.195 port 41600 Jan 13 20:09:38.532747 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:38.539620 systemd[1]: sshd@24-172.31.29.220:22-139.178.68.195:41600.service: Deactivated successfully. Jan 13 20:09:38.543100 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:09:38.544976 systemd-logind[1918]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:09:38.547486 systemd-logind[1918]: Removed session 25. Jan 13 20:09:38.570485 systemd[1]: Started sshd@25-172.31.29.220:22-139.178.68.195:41610.service - OpenSSH per-connection server daemon (139.178.68.195:41610). 
Jan 13 20:09:38.765050 sshd[5132]: Accepted publickey for core from 139.178.68.195 port 41610 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:38.767585 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:38.775893 systemd-logind[1918]: New session 26 of user core. Jan 13 20:09:38.783923 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:09:41.477254 containerd[1936]: time="2025-01-13T20:09:41.477087892Z" level=info msg="StopContainer for \"31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca\" with timeout 30 (s)" Jan 13 20:09:41.483034 containerd[1936]: time="2025-01-13T20:09:41.482829016Z" level=info msg="Stop container \"31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca\" with signal terminated" Jan 13 20:09:41.514984 systemd[1]: cri-containerd-31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca.scope: Deactivated successfully. Jan 13 20:09:41.531651 containerd[1936]: time="2025-01-13T20:09:41.531554512Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:09:41.547737 containerd[1936]: time="2025-01-13T20:09:41.547504144Z" level=info msg="StopContainer for \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\" with timeout 2 (s)" Jan 13 20:09:41.548609 containerd[1936]: time="2025-01-13T20:09:41.548551588Z" level=info msg="Stop container \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\" with signal terminated" Jan 13 20:09:41.569395 systemd-networkd[1853]: lxc_health: Link DOWN Jan 13 20:09:41.569415 systemd-networkd[1853]: lxc_health: Lost carrier Jan 13 20:09:41.579597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca-rootfs.mount: Deactivated successfully. Jan 13 20:09:41.606324 containerd[1936]: time="2025-01-13T20:09:41.605902768Z" level=info msg="shim disconnected" id=31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca namespace=k8s.io Jan 13 20:09:41.606324 containerd[1936]: time="2025-01-13T20:09:41.606152164Z" level=warning msg="cleaning up after shim disconnected" id=31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca namespace=k8s.io Jan 13 20:09:41.606324 containerd[1936]: time="2025-01-13T20:09:41.606173404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:41.608907 systemd[1]: cri-containerd-1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f.scope: Deactivated successfully. Jan 13 20:09:41.611065 systemd[1]: cri-containerd-1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f.scope: Consumed 14.398s CPU time. 
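At this point containerd begins stopping the Cilium workload: StopContainer with a 30 s timeout, "Stop container ... with signal terminated" (SIGTERM), the CNI configuration reload failing once 05-cilium.conf is removed, the lxc_health link losing carrier, and the container scopes being deactivated, one of them after about 14.4 s of accumulated CPU time. A rough way to read a span like this is to group journal entries by the 64-hex container ID they mention; the sketch below does that with the standard library only. The journal.log path is an assumption, and the sketch treats everything between two journal timestamps as one entry.

# container_timeline.py -- illustrative sketch: group the containerd/systemd entries
# above by the 64-hex container ID they mention, so each container's
# StopContainer -> scope deactivation -> "shim disconnected" sequence reads as one timeline.
import re
from collections import defaultdict

ENTRY_TS = re.compile(r"\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}")   # journal prefix, e.g. "Jan 13 20:09:41.477254"
CID = re.compile(r"\b[0-9a-f]{64}\b")
MSG = re.compile(r'msg="((?:[^"\\]|\\.)*)"')                      # msg=... with backslash-escaped quotes

def timelines(text: str):
    per_container = defaultdict(list)
    stamps = list(ENTRY_TS.finditer(text))
    for i, ts in enumerate(stamps):
        end = stamps[i + 1].start() if i + 1 < len(stamps) else len(text)
        entry = text[ts.end():end]
        ids = set(CID.findall(entry))
        if not ids:
            continue
        msg = MSG.search(entry)
        summary = msg.group(1) if msg else entry.strip()[:100]
        for cid in ids:
            per_container[cid].append((ts.group(), summary))
    return per_container

if __name__ == "__main__":
    for cid, events in timelines(open("journal.log").read()).items():   # assumed path
        print(cid[:12])
        for when, what in events:
            print("   ", when, what)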
Jan 13 20:09:41.641148 containerd[1936]: time="2025-01-13T20:09:41.641051212Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:09:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:09:41.649271 containerd[1936]: time="2025-01-13T20:09:41.649209364Z" level=info msg="StopContainer for \"31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca\" returns successfully" Jan 13 20:09:41.652690 containerd[1936]: time="2025-01-13T20:09:41.650097340Z" level=info msg="StopPodSandbox for \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\"" Jan 13 20:09:41.652690 containerd[1936]: time="2025-01-13T20:09:41.650168176Z" level=info msg="Container to stop \"31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:41.654937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a-shm.mount: Deactivated successfully. Jan 13 20:09:41.670283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f-rootfs.mount: Deactivated successfully. Jan 13 20:09:41.677873 systemd[1]: cri-containerd-a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a.scope: Deactivated successfully. Jan 13 20:09:41.684686 containerd[1936]: time="2025-01-13T20:09:41.684325133Z" level=info msg="shim disconnected" id=1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f namespace=k8s.io Jan 13 20:09:41.684686 containerd[1936]: time="2025-01-13T20:09:41.684407981Z" level=warning msg="cleaning up after shim disconnected" id=1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f namespace=k8s.io Jan 13 20:09:41.684686 containerd[1936]: time="2025-01-13T20:09:41.684429053Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:41.721416 containerd[1936]: time="2025-01-13T20:09:41.721172477Z" level=info msg="StopContainer for \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\" returns successfully" Jan 13 20:09:41.723717 containerd[1936]: time="2025-01-13T20:09:41.722800505Z" level=info msg="StopPodSandbox for \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\"" Jan 13 20:09:41.723717 containerd[1936]: time="2025-01-13T20:09:41.722874221Z" level=info msg="Container to stop \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:41.723717 containerd[1936]: time="2025-01-13T20:09:41.722899697Z" level=info msg="Container to stop \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:41.723717 containerd[1936]: time="2025-01-13T20:09:41.722921057Z" level=info msg="Container to stop \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:41.723717 containerd[1936]: time="2025-01-13T20:09:41.722942405Z" level=info msg="Container to stop \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:41.723717 containerd[1936]: time="2025-01-13T20:09:41.722966033Z" level=info msg="Container to 
stop \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:09:41.732045 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca-shm.mount: Deactivated successfully. Jan 13 20:09:41.732986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a-rootfs.mount: Deactivated successfully. Jan 13 20:09:41.738394 systemd[1]: cri-containerd-e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca.scope: Deactivated successfully. Jan 13 20:09:41.743226 containerd[1936]: time="2025-01-13T20:09:41.743090633Z" level=info msg="shim disconnected" id=a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a namespace=k8s.io Jan 13 20:09:41.743226 containerd[1936]: time="2025-01-13T20:09:41.743199389Z" level=warning msg="cleaning up after shim disconnected" id=a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a namespace=k8s.io Jan 13 20:09:41.743226 containerd[1936]: time="2025-01-13T20:09:41.743222093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:41.772763 containerd[1936]: time="2025-01-13T20:09:41.772303241Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:09:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:09:41.775308 containerd[1936]: time="2025-01-13T20:09:41.775191185Z" level=info msg="TearDown network for sandbox \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\" successfully" Jan 13 20:09:41.775308 containerd[1936]: time="2025-01-13T20:09:41.775257473Z" level=info msg="StopPodSandbox for \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\" returns successfully" Jan 13 20:09:41.807491 containerd[1936]: time="2025-01-13T20:09:41.807402989Z" level=info msg="shim disconnected" id=e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca namespace=k8s.io Jan 13 20:09:41.807491 containerd[1936]: time="2025-01-13T20:09:41.807481805Z" level=warning msg="cleaning up after shim disconnected" id=e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca namespace=k8s.io Jan 13 20:09:41.809953 containerd[1936]: time="2025-01-13T20:09:41.807503573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:41.832260 containerd[1936]: time="2025-01-13T20:09:41.832154345Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:09:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:09:41.834930 containerd[1936]: time="2025-01-13T20:09:41.834864473Z" level=info msg="TearDown network for sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" successfully" Jan 13 20:09:41.834930 containerd[1936]: time="2025-01-13T20:09:41.834927677Z" level=info msg="StopPodSandbox for \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" returns successfully" Jan 13 20:09:41.919117 kubelet[3527]: I0113 20:09:41.919047 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-hostproc\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: 
\"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.919117 kubelet[3527]: I0113 20:09:41.919118 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-lib-modules\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.919805 kubelet[3527]: I0113 20:09:41.919163 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cni-path\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.919805 kubelet[3527]: I0113 20:09:41.919205 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-bpf-maps\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.919805 kubelet[3527]: I0113 20:09:41.919257 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xss89\" (UniqueName: \"kubernetes.io/projected/f9469757-9c55-49e2-90d4-99afad8be6e2-kube-api-access-xss89\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.919805 kubelet[3527]: I0113 20:09:41.919302 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81daafb6-d970-4ba4-8ba2-749a05666538-cilium-config-path\") pod \"81daafb6-d970-4ba4-8ba2-749a05666538\" (UID: \"81daafb6-d970-4ba4-8ba2-749a05666538\") " Jan 13 20:09:41.919805 kubelet[3527]: I0113 20:09:41.919346 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-config-path\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.919805 kubelet[3527]: I0113 20:09:41.919386 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-xtables-lock\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.920145 kubelet[3527]: I0113 20:09:41.919428 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9469757-9c55-49e2-90d4-99afad8be6e2-hubble-tls\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.920145 kubelet[3527]: I0113 20:09:41.919465 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-run\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.920145 kubelet[3527]: I0113 20:09:41.919507 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-host-proc-sys-net\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 
20:09:41.920145 kubelet[3527]: I0113 20:09:41.919553 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9469757-9c55-49e2-90d4-99afad8be6e2-clustermesh-secrets\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.920145 kubelet[3527]: I0113 20:09:41.919602 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94jv9\" (UniqueName: \"kubernetes.io/projected/81daafb6-d970-4ba4-8ba2-749a05666538-kube-api-access-94jv9\") pod \"81daafb6-d970-4ba4-8ba2-749a05666538\" (UID: \"81daafb6-d970-4ba4-8ba2-749a05666538\") " Jan 13 20:09:41.920145 kubelet[3527]: I0113 20:09:41.919643 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-etc-cni-netd\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.920452 kubelet[3527]: I0113 20:09:41.919732 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-host-proc-sys-kernel\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.920452 kubelet[3527]: I0113 20:09:41.919777 3527 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-cgroup\") pod \"f9469757-9c55-49e2-90d4-99afad8be6e2\" (UID: \"f9469757-9c55-49e2-90d4-99afad8be6e2\") " Jan 13 20:09:41.920452 kubelet[3527]: I0113 20:09:41.919864 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.920452 kubelet[3527]: I0113 20:09:41.919930 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-hostproc" (OuterVolumeSpecName: "hostproc") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.920452 kubelet[3527]: I0113 20:09:41.919969 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.922800 kubelet[3527]: I0113 20:09:41.920007 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cni-path" (OuterVolumeSpecName: "cni-path") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.922800 kubelet[3527]: I0113 20:09:41.920044 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.922800 kubelet[3527]: I0113 20:09:41.920740 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.922800 kubelet[3527]: I0113 20:09:41.921084 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.925025 kubelet[3527]: I0113 20:09:41.924975 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.925215 kubelet[3527]: I0113 20:09:41.924975 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.925321 kubelet[3527]: I0113 20:09:41.925063 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:09:41.937066 kubelet[3527]: I0113 20:09:41.935971 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81daafb6-d970-4ba4-8ba2-749a05666538-kube-api-access-94jv9" (OuterVolumeSpecName: "kube-api-access-94jv9") pod "81daafb6-d970-4ba4-8ba2-749a05666538" (UID: "81daafb6-d970-4ba4-8ba2-749a05666538"). InnerVolumeSpecName "kube-api-access-94jv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:09:41.937066 kubelet[3527]: I0113 20:09:41.936012 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9469757-9c55-49e2-90d4-99afad8be6e2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:09:41.937284 kubelet[3527]: I0113 20:09:41.937256 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:09:41.938928 kubelet[3527]: I0113 20:09:41.938868 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9469757-9c55-49e2-90d4-99afad8be6e2-kube-api-access-xss89" (OuterVolumeSpecName: "kube-api-access-xss89") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "kube-api-access-xss89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:09:41.939066 kubelet[3527]: I0113 20:09:41.938864 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81daafb6-d970-4ba4-8ba2-749a05666538-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "81daafb6-d970-4ba4-8ba2-749a05666538" (UID: "81daafb6-d970-4ba4-8ba2-749a05666538"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:09:41.939314 kubelet[3527]: I0113 20:09:41.939279 3527 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9469757-9c55-49e2-90d4-99afad8be6e2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f9469757-9c55-49e2-90d4-99afad8be6e2" (UID: "f9469757-9c55-49e2-90d4-99afad8be6e2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:09:42.021423 kubelet[3527]: I0113 20:09:42.020511 3527 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cni-path\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.021423 kubelet[3527]: I0113 20:09:42.020568 3527 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-bpf-maps\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.021423 kubelet[3527]: I0113 20:09:42.020597 3527 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81daafb6-d970-4ba4-8ba2-749a05666538-cilium-config-path\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.021423 kubelet[3527]: I0113 20:09:42.020629 3527 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xss89\" (UniqueName: \"kubernetes.io/projected/f9469757-9c55-49e2-90d4-99afad8be6e2-kube-api-access-xss89\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.021423 kubelet[3527]: I0113 20:09:42.020656 3527 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-xtables-lock\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.021423 kubelet[3527]: I0113 20:09:42.020702 3527 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9469757-9c55-49e2-90d4-99afad8be6e2-hubble-tls\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.021423 kubelet[3527]: I0113 20:09:42.020727 3527 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-config-path\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.021423 kubelet[3527]: I0113 20:09:42.020753 3527 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-host-proc-sys-net\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.022425 kubelet[3527]: I0113 20:09:42.020777 3527 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9469757-9c55-49e2-90d4-99afad8be6e2-clustermesh-secrets\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.022425 kubelet[3527]: I0113 20:09:42.020800 3527 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-run\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.022425 kubelet[3527]: I0113 20:09:42.020826 3527 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-94jv9\" (UniqueName: \"kubernetes.io/projected/81daafb6-d970-4ba4-8ba2-749a05666538-kube-api-access-94jv9\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.022425 kubelet[3527]: I0113 20:09:42.020850 3527 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-host-proc-sys-kernel\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.022425 kubelet[3527]: I0113 20:09:42.020877 3527 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-etc-cni-netd\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.022425 kubelet[3527]: I0113 20:09:42.021108 3527 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-cilium-cgroup\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.022425 kubelet[3527]: I0113 20:09:42.021138 3527 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-hostproc\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.022425 kubelet[3527]: I0113 20:09:42.021162 3527 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9469757-9c55-49e2-90d4-99afad8be6e2-lib-modules\") on node \"ip-172-31-29-220\" DevicePath \"\"" Jan 13 20:09:42.493221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca-rootfs.mount: Deactivated successfully. Jan 13 20:09:42.493387 systemd[1]: var-lib-kubelet-pods-81daafb6\x2dd970\x2d4ba4\x2d8ba2\x2d749a05666538-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d94jv9.mount: Deactivated successfully. Jan 13 20:09:42.493532 systemd[1]: var-lib-kubelet-pods-f9469757\x2d9c55\x2d49e2\x2d90d4\x2d99afad8be6e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxss89.mount: Deactivated successfully. Jan 13 20:09:42.494138 systemd[1]: var-lib-kubelet-pods-f9469757\x2d9c55\x2d49e2\x2d90d4\x2d99afad8be6e2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:09:42.494403 systemd[1]: var-lib-kubelet-pods-f9469757\x2d9c55\x2d49e2\x2d90d4\x2d99afad8be6e2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:09:42.672448 kubelet[3527]: I0113 20:09:42.672303 3527 scope.go:117] "RemoveContainer" containerID="1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f" Jan 13 20:09:42.678958 containerd[1936]: time="2025-01-13T20:09:42.678556002Z" level=info msg="RemoveContainer for \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\"" Jan 13 20:09:42.692324 systemd[1]: Removed slice kubepods-burstable-podf9469757_9c55_49e2_90d4_99afad8be6e2.slice - libcontainer container kubepods-burstable-podf9469757_9c55_49e2_90d4_99afad8be6e2.slice. Jan 13 20:09:42.692561 systemd[1]: kubepods-burstable-podf9469757_9c55_49e2_90d4_99afad8be6e2.slice: Consumed 14.558s CPU time. Jan 13 20:09:42.695427 containerd[1936]: time="2025-01-13T20:09:42.695284230Z" level=info msg="RemoveContainer for \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\" returns successfully" Jan 13 20:09:42.696145 kubelet[3527]: I0113 20:09:42.696102 3527 scope.go:117] "RemoveContainer" containerID="ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432" Jan 13 20:09:42.707888 systemd[1]: Removed slice kubepods-besteffort-pod81daafb6_d970_4ba4_8ba2_749a05666538.slice - libcontainer container kubepods-besteffort-pod81daafb6_d970_4ba4_8ba2_749a05666538.slice. 
Jan 13 20:09:42.717502 containerd[1936]: time="2025-01-13T20:09:42.716453502Z" level=info msg="RemoveContainer for \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\"" Jan 13 20:09:42.729225 containerd[1936]: time="2025-01-13T20:09:42.729166194Z" level=info msg="RemoveContainer for \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\" returns successfully" Jan 13 20:09:42.729589 kubelet[3527]: I0113 20:09:42.729558 3527 scope.go:117] "RemoveContainer" containerID="20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9" Jan 13 20:09:42.732747 containerd[1936]: time="2025-01-13T20:09:42.732510390Z" level=info msg="RemoveContainer for \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\"" Jan 13 20:09:42.742334 containerd[1936]: time="2025-01-13T20:09:42.742268802Z" level=info msg="RemoveContainer for \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\" returns successfully" Jan 13 20:09:42.743011 kubelet[3527]: I0113 20:09:42.742627 3527 scope.go:117] "RemoveContainer" containerID="ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8" Jan 13 20:09:42.745752 containerd[1936]: time="2025-01-13T20:09:42.744841578Z" level=info msg="RemoveContainer for \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\"" Jan 13 20:09:42.754781 containerd[1936]: time="2025-01-13T20:09:42.754728198Z" level=info msg="RemoveContainer for \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\" returns successfully" Jan 13 20:09:42.755457 kubelet[3527]: I0113 20:09:42.755286 3527 scope.go:117] "RemoveContainer" containerID="a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947" Jan 13 20:09:42.757970 containerd[1936]: time="2025-01-13T20:09:42.757483302Z" level=info msg="RemoveContainer for \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\"" Jan 13 20:09:42.764779 containerd[1936]: time="2025-01-13T20:09:42.764728254Z" level=info msg="RemoveContainer for \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\" returns successfully" Jan 13 20:09:42.765495 kubelet[3527]: I0113 20:09:42.765453 3527 scope.go:117] "RemoveContainer" containerID="1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f" Jan 13 20:09:42.766204 containerd[1936]: time="2025-01-13T20:09:42.766079706Z" level=error msg="ContainerStatus for \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\": not found" Jan 13 20:09:42.766496 kubelet[3527]: E0113 20:09:42.766413 3527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\": not found" containerID="1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f" Jan 13 20:09:42.766720 kubelet[3527]: I0113 20:09:42.766641 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f"} err="failed to get container status \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1168c45241d72f161a2966ac291943f56c0f2477917caf9584b07f6d97c7b16f\": not found" Jan 13 20:09:42.766720 kubelet[3527]: I0113 
20:09:42.766715 3527 scope.go:117] "RemoveContainer" containerID="ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432" Jan 13 20:09:42.767186 containerd[1936]: time="2025-01-13T20:09:42.767125026Z" level=error msg="ContainerStatus for \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\": not found" Jan 13 20:09:42.767632 kubelet[3527]: E0113 20:09:42.767430 3527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\": not found" containerID="ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432" Jan 13 20:09:42.767632 kubelet[3527]: I0113 20:09:42.767492 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432"} err="failed to get container status \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff5cd74dd630835c3622c36b24105ff1ad795d269722ec7ce631219576e1a432\": not found" Jan 13 20:09:42.767632 kubelet[3527]: I0113 20:09:42.767518 3527 scope.go:117] "RemoveContainer" containerID="20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9" Jan 13 20:09:42.768180 containerd[1936]: time="2025-01-13T20:09:42.767907894Z" level=error msg="ContainerStatus for \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\": not found" Jan 13 20:09:42.768372 kubelet[3527]: E0113 20:09:42.768249 3527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\": not found" containerID="20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9" Jan 13 20:09:42.768451 kubelet[3527]: I0113 20:09:42.768415 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9"} err="failed to get container status \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\": rpc error: code = NotFound desc = an error occurred when try to find container \"20971127590563516c3a6c13a78774bf9c133851eefe116e7304a1fb51ccfba9\": not found" Jan 13 20:09:42.768451 kubelet[3527]: I0113 20:09:42.768445 3527 scope.go:117] "RemoveContainer" containerID="ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8" Jan 13 20:09:42.768834 containerd[1936]: time="2025-01-13T20:09:42.768784722Z" level=error msg="ContainerStatus for \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\": not found" Jan 13 20:09:42.769084 kubelet[3527]: E0113 20:09:42.768993 3527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\": not found" containerID="ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8" Jan 13 20:09:42.769084 kubelet[3527]: I0113 20:09:42.769044 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8"} err="failed to get container status \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae04266adc50370be43a9b18667d6a3fc3e9d20b4426220c00d4d8b6a4313bf8\": not found" Jan 13 20:09:42.769084 kubelet[3527]: I0113 20:09:42.769066 3527 scope.go:117] "RemoveContainer" containerID="a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947" Jan 13 20:09:42.769696 containerd[1936]: time="2025-01-13T20:09:42.769434522Z" level=error msg="ContainerStatus for \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\": not found" Jan 13 20:09:42.769889 kubelet[3527]: E0113 20:09:42.769775 3527 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\": not found" containerID="a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947" Jan 13 20:09:42.770018 kubelet[3527]: I0113 20:09:42.769915 3527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947"} err="failed to get container status \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3fc7f83f590a834d243f461661ff22947914028c353b9100c8428eb6871d947\": not found" Jan 13 20:09:42.770018 kubelet[3527]: I0113 20:09:42.769942 3527 scope.go:117] "RemoveContainer" containerID="31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca" Jan 13 20:09:42.772226 containerd[1936]: time="2025-01-13T20:09:42.772175178Z" level=info msg="RemoveContainer for \"31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca\"" Jan 13 20:09:42.780881 containerd[1936]: time="2025-01-13T20:09:42.780781398Z" level=info msg="RemoveContainer for \"31f038b03a07d5af8a3aa0bc6921fb29f78d72807d206a2d770c004f28016bca\" returns successfully" Jan 13 20:09:43.179581 kubelet[3527]: I0113 20:09:43.179218 3527 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81daafb6-d970-4ba4-8ba2-749a05666538" path="/var/lib/kubelet/pods/81daafb6-d970-4ba4-8ba2-749a05666538/volumes" Jan 13 20:09:43.182064 kubelet[3527]: I0113 20:09:43.181986 3527 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f9469757-9c55-49e2-90d4-99afad8be6e2" path="/var/lib/kubelet/pods/f9469757-9c55-49e2-90d4-99afad8be6e2/volumes" Jan 13 20:09:43.409708 sshd[5134]: Connection closed by 139.178.68.195 port 41610 Jan 13 20:09:43.408024 sshd-session[5132]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:43.413078 systemd-logind[1918]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:09:43.414382 systemd[1]: sshd@25-172.31.29.220:22-139.178.68.195:41610.service: Deactivated successfully. 
Jan 13 20:09:43.419403 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:09:43.420042 systemd[1]: session-26.scope: Consumed 1.950s CPU time. Jan 13 20:09:43.423973 systemd-logind[1918]: Removed session 26. Jan 13 20:09:43.439955 systemd[1]: Started sshd@26-172.31.29.220:22-139.178.68.195:41620.service - OpenSSH per-connection server daemon (139.178.68.195:41620). Jan 13 20:09:43.629519 sshd[5297]: Accepted publickey for core from 139.178.68.195 port 41620 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:43.632453 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:43.639883 systemd-logind[1918]: New session 27 of user core. Jan 13 20:09:43.646932 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:09:44.479788 ntpd[1911]: Deleting interface #12 lxc_health, fe80::5454:65ff:fe74:9ec1%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs Jan 13 20:09:44.480402 ntpd[1911]: 13 Jan 20:09:44 ntpd[1911]: Deleting interface #12 lxc_health, fe80::5454:65ff:fe74:9ec1%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs Jan 13 20:09:45.276495 sshd[5299]: Connection closed by 139.178.68.195 port 41620 Jan 13 20:09:45.277902 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:45.290546 systemd[1]: sshd@26-172.31.29.220:22-139.178.68.195:41620.service: Deactivated successfully. Jan 13 20:09:45.296213 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:09:45.300766 kubelet[3527]: I0113 20:09:45.298746 3527 topology_manager.go:215] "Topology Admit Handler" podUID="5a2fcb51-6047-4807-9cda-7b6db5df3061" podNamespace="kube-system" podName="cilium-nqxrb" Jan 13 20:09:45.300766 kubelet[3527]: E0113 20:09:45.298838 3527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f9469757-9c55-49e2-90d4-99afad8be6e2" containerName="mount-cgroup" Jan 13 20:09:45.300766 kubelet[3527]: E0113 20:09:45.298862 3527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f9469757-9c55-49e2-90d4-99afad8be6e2" containerName="apply-sysctl-overwrites" Jan 13 20:09:45.300766 kubelet[3527]: E0113 20:09:45.298880 3527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81daafb6-d970-4ba4-8ba2-749a05666538" containerName="cilium-operator" Jan 13 20:09:45.300766 kubelet[3527]: E0113 20:09:45.298901 3527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f9469757-9c55-49e2-90d4-99afad8be6e2" containerName="mount-bpf-fs" Jan 13 20:09:45.300766 kubelet[3527]: E0113 20:09:45.298919 3527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f9469757-9c55-49e2-90d4-99afad8be6e2" containerName="clean-cilium-state" Jan 13 20:09:45.300766 kubelet[3527]: E0113 20:09:45.298937 3527 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f9469757-9c55-49e2-90d4-99afad8be6e2" containerName="cilium-agent" Jan 13 20:09:45.300766 kubelet[3527]: I0113 20:09:45.298986 3527 memory_manager.go:354] "RemoveStaleState removing state" podUID="81daafb6-d970-4ba4-8ba2-749a05666538" containerName="cilium-operator" Jan 13 20:09:45.300766 kubelet[3527]: I0113 20:09:45.299005 3527 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9469757-9c55-49e2-90d4-99afad8be6e2" containerName="cilium-agent" Jan 13 20:09:45.299833 systemd[1]: session-27.scope: Consumed 1.448s CPU time. Jan 13 20:09:45.306043 systemd-logind[1918]: Session 27 logged out. 
Waiting for processes to exit. Jan 13 20:09:45.328564 kubelet[3527]: W0113 20:09:45.328466 3527 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-29-220" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-220' and this object Jan 13 20:09:45.328564 kubelet[3527]: E0113 20:09:45.328522 3527 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-29-220" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-220' and this object Jan 13 20:09:45.331121 kubelet[3527]: W0113 20:09:45.330863 3527 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-29-220" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-220' and this object Jan 13 20:09:45.331121 kubelet[3527]: E0113 20:09:45.331068 3527 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-29-220" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-220' and this object Jan 13 20:09:45.331121 kubelet[3527]: W0113 20:09:45.330934 3527 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-29-220" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-220' and this object Jan 13 20:09:45.331742 kubelet[3527]: E0113 20:09:45.331413 3527 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-29-220" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-220' and this object Jan 13 20:09:45.331742 kubelet[3527]: W0113 20:09:45.331691 3527 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-29-220" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-220' and this object Jan 13 20:09:45.331968 kubelet[3527]: E0113 20:09:45.331925 3527 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-29-220" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-220' and this object Jan 13 20:09:45.334248 systemd[1]: Started sshd@27-172.31.29.220:22-139.178.68.195:51524.service - OpenSSH per-connection server daemon (139.178.68.195:51524). Jan 13 20:09:45.340966 systemd-logind[1918]: Removed session 27. 
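The reflector warnings above show the node identity system:node:ip-172-31-29-220 being denied list/watch on exactly the secrets and config map that the freshly admitted cilium-nqxrb pod references (cilium-clustermesh, cilium-ipsec-keys, hubble-server-certs, cilium-config). With node authorization, a kubelet may only read objects referenced by pods already bound to it, so a brief window of "forbidden" right after pod admission, followed by the cache-sync retries seen further down, is a plausible reading of these entries; that reading is an inference, not something the log states. A small sketch can list which objects were denied and to whom; the journal path is assumed.

# forbidden_objects.py -- illustrative sketch: list the secrets/configmaps the node was
# (temporarily) denied in the reflector warnings above, grouped by the node identity.
import re
from collections import defaultdict

DENIED = re.compile(
    r'(secrets|configmaps) "([^"]+)" is forbidden: '
    r'User "(system:node:[^"]+)" cannot (?:list|watch)'
)

def denied_objects(text: str):
    by_node = defaultdict(set)
    for kind, name, user in DENIED.findall(text):
        by_node[user].add((kind, name))
    return by_node

if __name__ == "__main__":
    for user, objs in denied_objects(open("journal.log").read()).items():   # assumed path
        print(user)
        for kind, name in sorted(objs):
            print(f"  {kind} {name}")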
Jan 13 20:09:45.344817 kubelet[3527]: I0113 20:09:45.344781 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-hostproc\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.345042 kubelet[3527]: I0113 20:09:45.345005 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5a2fcb51-6047-4807-9cda-7b6db5df3061-cilium-ipsec-secrets\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.345760 kubelet[3527]: I0113 20:09:45.345727 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-cilium-cgroup\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.346109 kubelet[3527]: I0113 20:09:45.345961 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-etc-cni-netd\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.346509 kubelet[3527]: I0113 20:09:45.346363 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-cilium-run\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.346984 kubelet[3527]: I0113 20:09:45.346762 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-xtables-lock\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.351602 kubelet[3527]: I0113 20:09:45.347367 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a2fcb51-6047-4807-9cda-7b6db5df3061-hubble-tls\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.351602 kubelet[3527]: I0113 20:09:45.351035 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a2fcb51-6047-4807-9cda-7b6db5df3061-cilium-config-path\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.351602 kubelet[3527]: I0113 20:09:45.351095 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-host-proc-sys-net\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.351602 kubelet[3527]: I0113 20:09:45.351146 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/5a2fcb51-6047-4807-9cda-7b6db5df3061-clustermesh-secrets\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.351602 kubelet[3527]: I0113 20:09:45.351193 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-host-proc-sys-kernel\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.352072 kubelet[3527]: I0113 20:09:45.351240 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-lib-modules\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.352072 kubelet[3527]: I0113 20:09:45.351286 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ngvc\" (UniqueName: \"kubernetes.io/projected/5a2fcb51-6047-4807-9cda-7b6db5df3061-kube-api-access-9ngvc\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.352072 kubelet[3527]: I0113 20:09:45.351341 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-bpf-maps\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.352072 kubelet[3527]: I0113 20:09:45.351383 3527 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a2fcb51-6047-4807-9cda-7b6db5df3061-cni-path\") pod \"cilium-nqxrb\" (UID: \"5a2fcb51-6047-4807-9cda-7b6db5df3061\") " pod="kube-system/cilium-nqxrb" Jan 13 20:09:45.362179 systemd[1]: Created slice kubepods-burstable-pod5a2fcb51_6047_4807_9cda_7b6db5df3061.slice - libcontainer container kubepods-burstable-pod5a2fcb51_6047_4807_9cda_7b6db5df3061.slice. Jan 13 20:09:45.451023 kubelet[3527]: E0113 20:09:45.450976 3527 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:09:45.560966 sshd[5309]: Accepted publickey for core from 139.178.68.195 port 51524 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:45.562534 sshd-session[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:45.578335 systemd-logind[1918]: New session 28 of user core. Jan 13 20:09:45.582177 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 20:09:45.708388 sshd[5312]: Connection closed by 139.178.68.195 port 51524 Jan 13 20:09:45.709273 sshd-session[5309]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:45.714842 systemd[1]: sshd@27-172.31.29.220:22-139.178.68.195:51524.service: Deactivated successfully. Jan 13 20:09:45.718337 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:09:45.723045 systemd-logind[1918]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:09:45.725078 systemd-logind[1918]: Removed session 28. 
Jan 13 20:09:45.750176 systemd[1]: Started sshd@28-172.31.29.220:22-139.178.68.195:51528.service - OpenSSH per-connection server daemon (139.178.68.195:51528). Jan 13 20:09:45.931160 sshd[5318]: Accepted publickey for core from 139.178.68.195 port 51528 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:45.933778 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:45.941030 systemd-logind[1918]: New session 29 of user core. Jan 13 20:09:45.949940 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 20:09:46.453376 kubelet[3527]: E0113 20:09:46.452764 3527 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 13 20:09:46.453376 kubelet[3527]: E0113 20:09:46.452808 3527 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-nqxrb: failed to sync secret cache: timed out waiting for the condition Jan 13 20:09:46.453376 kubelet[3527]: E0113 20:09:46.452913 3527 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a2fcb51-6047-4807-9cda-7b6db5df3061-hubble-tls podName:5a2fcb51-6047-4807-9cda-7b6db5df3061 nodeName:}" failed. No retries permitted until 2025-01-13 20:09:46.952878832 +0000 UTC m=+112.016404751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/5a2fcb51-6047-4807-9cda-7b6db5df3061-hubble-tls") pod "cilium-nqxrb" (UID: "5a2fcb51-6047-4807-9cda-7b6db5df3061") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:09:46.453376 kubelet[3527]: E0113 20:09:46.452957 3527 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 13 20:09:46.453376 kubelet[3527]: E0113 20:09:46.453011 3527 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2fcb51-6047-4807-9cda-7b6db5df3061-clustermesh-secrets podName:5a2fcb51-6047-4807-9cda-7b6db5df3061 nodeName:}" failed. No retries permitted until 2025-01-13 20:09:46.95299462 +0000 UTC m=+112.016520539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5a2fcb51-6047-4807-9cda-7b6db5df3061-clustermesh-secrets") pod "cilium-nqxrb" (UID: "5a2fcb51-6047-4807-9cda-7b6db5df3061") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:09:46.453376 kubelet[3527]: E0113 20:09:46.453289 3527 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:09:46.455374 kubelet[3527]: E0113 20:09:46.453351 3527 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a2fcb51-6047-4807-9cda-7b6db5df3061-cilium-config-path podName:5a2fcb51-6047-4807-9cda-7b6db5df3061 nodeName:}" failed. No retries permitted until 2025-01-13 20:09:46.953333464 +0000 UTC m=+112.016859383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5a2fcb51-6047-4807-9cda-7b6db5df3061-cilium-config-path") pod "cilium-nqxrb" (UID: "5a2fcb51-6047-4807-9cda-7b6db5df3061") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:09:46.455374 kubelet[3527]: E0113 20:09:46.453795 3527 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jan 13 20:09:46.455374 kubelet[3527]: E0113 20:09:46.453892 3527 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a2fcb51-6047-4807-9cda-7b6db5df3061-cilium-ipsec-secrets podName:5a2fcb51-6047-4807-9cda-7b6db5df3061 nodeName:}" failed. No retries permitted until 2025-01-13 20:09:46.953869396 +0000 UTC m=+112.017395315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/5a2fcb51-6047-4807-9cda-7b6db5df3061-cilium-ipsec-secrets") pod "cilium-nqxrb" (UID: "5a2fcb51-6047-4807-9cda-7b6db5df3061") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:09:47.170117 containerd[1936]: time="2025-01-13T20:09:47.170067080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqxrb,Uid:5a2fcb51-6047-4807-9cda-7b6db5df3061,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:47.206944 containerd[1936]: time="2025-01-13T20:09:47.206735660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:47.206944 containerd[1936]: time="2025-01-13T20:09:47.206846816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:47.206944 containerd[1936]: time="2025-01-13T20:09:47.206884604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:47.207410 containerd[1936]: time="2025-01-13T20:09:47.207052424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:47.244984 systemd[1]: Started cri-containerd-5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377.scope - libcontainer container 5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377. 
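Note: the MountVolume.SetUp failures above are expected while the secret and configmap caches are still syncing; each failed mount is parked for the fixed durationBeforeRetry of 500ms, and the m=+ suffix is Go's monotonic-clock reading (here roughly seconds since the kubelet started). The retry deadline is simply the failure time plus 500 ms, which a quick check reproduces using the hubble-tls entry, with its timestamp truncated to the microseconds a datetime carries:

# Sketch of the back-off arithmetic in the entries above: working backwards
# from the logged "No retries permitted until ..." deadline recovers the
# moment the SetUp failed, just before the E0113 20:09:46.452913 record.
from datetime import datetime, timedelta, timezone

retry_deadline = datetime(2025, 1, 13, 20, 9, 46, 952878, tzinfo=timezone.utc)
failure_time = retry_deadline - timedelta(milliseconds=500)

print(failure_time.isoformat())  # 2025-01-13T20:09:46.452878+00:00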
Jan 13 20:09:47.287486 containerd[1936]: time="2025-01-13T20:09:47.287333960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqxrb,Uid:5a2fcb51-6047-4807-9cda-7b6db5df3061,Namespace:kube-system,Attempt:0,} returns sandbox id \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\"" Jan 13 20:09:47.295541 containerd[1936]: time="2025-01-13T20:09:47.295482320Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:09:47.311245 containerd[1936]: time="2025-01-13T20:09:47.311166813Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ad0e27df0f49e2251410aaa3dd340189b40692655c96941d7d7747992d3f43e7\"" Jan 13 20:09:47.312258 containerd[1936]: time="2025-01-13T20:09:47.312187329Z" level=info msg="StartContainer for \"ad0e27df0f49e2251410aaa3dd340189b40692655c96941d7d7747992d3f43e7\"" Jan 13 20:09:47.357015 systemd[1]: Started cri-containerd-ad0e27df0f49e2251410aaa3dd340189b40692655c96941d7d7747992d3f43e7.scope - libcontainer container ad0e27df0f49e2251410aaa3dd340189b40692655c96941d7d7747992d3f43e7. Jan 13 20:09:47.402428 containerd[1936]: time="2025-01-13T20:09:47.402341013Z" level=info msg="StartContainer for \"ad0e27df0f49e2251410aaa3dd340189b40692655c96941d7d7747992d3f43e7\" returns successfully" Jan 13 20:09:47.418855 systemd[1]: cri-containerd-ad0e27df0f49e2251410aaa3dd340189b40692655c96941d7d7747992d3f43e7.scope: Deactivated successfully. Jan 13 20:09:47.474272 containerd[1936]: time="2025-01-13T20:09:47.474084345Z" level=info msg="shim disconnected" id=ad0e27df0f49e2251410aaa3dd340189b40692655c96941d7d7747992d3f43e7 namespace=k8s.io Jan 13 20:09:47.474272 containerd[1936]: time="2025-01-13T20:09:47.474163329Z" level=warning msg="cleaning up after shim disconnected" id=ad0e27df0f49e2251410aaa3dd340189b40692655c96941d7d7747992d3f43e7 namespace=k8s.io Jan 13 20:09:47.474272 containerd[1936]: time="2025-01-13T20:09:47.474184245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:47.702285 containerd[1936]: time="2025-01-13T20:09:47.702171118Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:09:47.725325 containerd[1936]: time="2025-01-13T20:09:47.725144639Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a6fd3ee61405c0e8fafa209ba26b2cac68b9320c23807d622b215fa28d8ffc90\"" Jan 13 20:09:47.727303 containerd[1936]: time="2025-01-13T20:09:47.727235411Z" level=info msg="StartContainer for \"a6fd3ee61405c0e8fafa209ba26b2cac68b9320c23807d622b215fa28d8ffc90\"" Jan 13 20:09:47.732362 kubelet[3527]: I0113 20:09:47.732302 3527 setters.go:568] "Node became not ready" node="ip-172-31-29-220" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:09:47Z","lastTransitionTime":"2025-01-13T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:09:47.794047 systemd[1]: Started 
cri-containerd-a6fd3ee61405c0e8fafa209ba26b2cac68b9320c23807d622b215fa28d8ffc90.scope - libcontainer container a6fd3ee61405c0e8fafa209ba26b2cac68b9320c23807d622b215fa28d8ffc90. Jan 13 20:09:47.846115 containerd[1936]: time="2025-01-13T20:09:47.846028667Z" level=info msg="StartContainer for \"a6fd3ee61405c0e8fafa209ba26b2cac68b9320c23807d622b215fa28d8ffc90\" returns successfully" Jan 13 20:09:47.859416 systemd[1]: cri-containerd-a6fd3ee61405c0e8fafa209ba26b2cac68b9320c23807d622b215fa28d8ffc90.scope: Deactivated successfully. Jan 13 20:09:47.911637 containerd[1936]: time="2025-01-13T20:09:47.911544456Z" level=info msg="shim disconnected" id=a6fd3ee61405c0e8fafa209ba26b2cac68b9320c23807d622b215fa28d8ffc90 namespace=k8s.io Jan 13 20:09:47.911637 containerd[1936]: time="2025-01-13T20:09:47.911621436Z" level=warning msg="cleaning up after shim disconnected" id=a6fd3ee61405c0e8fafa209ba26b2cac68b9320c23807d622b215fa28d8ffc90 namespace=k8s.io Jan 13 20:09:47.911637 containerd[1936]: time="2025-01-13T20:09:47.911641500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:48.710912 containerd[1936]: time="2025-01-13T20:09:48.710830404Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:09:48.749124 containerd[1936]: time="2025-01-13T20:09:48.748933452Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"02a94ba641ec68a9c5e75f08050758d2ff84efeb0b5b44e81caf4c78b625e7e0\"" Jan 13 20:09:48.751780 containerd[1936]: time="2025-01-13T20:09:48.750270048Z" level=info msg="StartContainer for \"02a94ba641ec68a9c5e75f08050758d2ff84efeb0b5b44e81caf4c78b625e7e0\"" Jan 13 20:09:48.810956 systemd[1]: Started cri-containerd-02a94ba641ec68a9c5e75f08050758d2ff84efeb0b5b44e81caf4c78b625e7e0.scope - libcontainer container 02a94ba641ec68a9c5e75f08050758d2ff84efeb0b5b44e81caf4c78b625e7e0. Jan 13 20:09:48.888615 systemd[1]: cri-containerd-02a94ba641ec68a9c5e75f08050758d2ff84efeb0b5b44e81caf4c78b625e7e0.scope: Deactivated successfully. Jan 13 20:09:48.891893 containerd[1936]: time="2025-01-13T20:09:48.891637956Z" level=info msg="StartContainer for \"02a94ba641ec68a9c5e75f08050758d2ff84efeb0b5b44e81caf4c78b625e7e0\" returns successfully" Jan 13 20:09:48.944052 containerd[1936]: time="2025-01-13T20:09:48.943943137Z" level=info msg="shim disconnected" id=02a94ba641ec68a9c5e75f08050758d2ff84efeb0b5b44e81caf4c78b625e7e0 namespace=k8s.io Jan 13 20:09:48.944052 containerd[1936]: time="2025-01-13T20:09:48.944015461Z" level=warning msg="cleaning up after shim disconnected" id=02a94ba641ec68a9c5e75f08050758d2ff84efeb0b5b44e81caf4c78b625e7e0 namespace=k8s.io Jan 13 20:09:48.944052 containerd[1936]: time="2025-01-13T20:09:48.944034469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:48.972027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02a94ba641ec68a9c5e75f08050758d2ff84efeb0b5b44e81caf4c78b625e7e0-rootfs.mount: Deactivated successfully. 
Jan 13 20:09:49.717908 containerd[1936]: time="2025-01-13T20:09:49.716812789Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:09:49.749182 containerd[1936]: time="2025-01-13T20:09:49.749115673Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5138463df7f0acb63cdcd48c17622863826d33e09921e2817deed46f61b30816\"" Jan 13 20:09:49.751294 containerd[1936]: time="2025-01-13T20:09:49.750952345Z" level=info msg="StartContainer for \"5138463df7f0acb63cdcd48c17622863826d33e09921e2817deed46f61b30816\"" Jan 13 20:09:49.806001 systemd[1]: Started cri-containerd-5138463df7f0acb63cdcd48c17622863826d33e09921e2817deed46f61b30816.scope - libcontainer container 5138463df7f0acb63cdcd48c17622863826d33e09921e2817deed46f61b30816. Jan 13 20:09:49.852844 systemd[1]: cri-containerd-5138463df7f0acb63cdcd48c17622863826d33e09921e2817deed46f61b30816.scope: Deactivated successfully. Jan 13 20:09:49.859249 containerd[1936]: time="2025-01-13T20:09:49.859167505Z" level=info msg="StartContainer for \"5138463df7f0acb63cdcd48c17622863826d33e09921e2817deed46f61b30816\" returns successfully" Jan 13 20:09:49.912623 containerd[1936]: time="2025-01-13T20:09:49.912519277Z" level=info msg="shim disconnected" id=5138463df7f0acb63cdcd48c17622863826d33e09921e2817deed46f61b30816 namespace=k8s.io Jan 13 20:09:49.912623 containerd[1936]: time="2025-01-13T20:09:49.912598633Z" level=warning msg="cleaning up after shim disconnected" id=5138463df7f0acb63cdcd48c17622863826d33e09921e2817deed46f61b30816 namespace=k8s.io Jan 13 20:09:49.912623 containerd[1936]: time="2025-01-13T20:09:49.912619813Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:49.971749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5138463df7f0acb63cdcd48c17622863826d33e09921e2817deed46f61b30816-rootfs.mount: Deactivated successfully. 
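Note: by this point containerd has run four short-lived Cilium init containers inside sandbox 5edab50a... (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), each following the same pattern: CreateContainer, StartContainer, the transient systemd scope deactivates when the process exits, and the shim is cleaned up. A stdlib-only sketch, again assuming the journal text as a string, that recovers that order together with the container IDs:

# Sketch: list (name, container id) pairs in creation order from the
# containerd "returns container id" entries above.
import re

CREATED_RE = re.compile(
    r'ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\} '
    r'returns container id \\"([0-9a-f]{64})\\"'
)

def containers_created(journal_text: str) -> list[tuple[str, str]]:
    """E.g. [('mount-cgroup', 'ad0e27df...'), ('apply-sysctl-overwrites', ...), ...]."""
    return CREATED_RE.findall(journal_text)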
Jan 13 20:09:50.174501 kubelet[3527]: E0113 20:09:50.174441 3527 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-l6q7b" podUID="cfc1d271-5700-4db9-9b46-ca518e090652" Jan 13 20:09:50.452790 kubelet[3527]: E0113 20:09:50.452743 3527 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:09:50.723836 containerd[1936]: time="2025-01-13T20:09:50.723190526Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:09:50.753813 containerd[1936]: time="2025-01-13T20:09:50.753733154Z" level=info msg="CreateContainer within sandbox \"5edab50a23d656dc4c12a4474cca7318305d7241a1353f6e5ef126eddf226377\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c95d3f948a1681427f19d8c213586be6e14b33c6cbaa070959fbaef176a48e44\"" Jan 13 20:09:50.754503 containerd[1936]: time="2025-01-13T20:09:50.754453934Z" level=info msg="StartContainer for \"c95d3f948a1681427f19d8c213586be6e14b33c6cbaa070959fbaef176a48e44\"" Jan 13 20:09:50.817991 systemd[1]: Started cri-containerd-c95d3f948a1681427f19d8c213586be6e14b33c6cbaa070959fbaef176a48e44.scope - libcontainer container c95d3f948a1681427f19d8c213586be6e14b33c6cbaa070959fbaef176a48e44. Jan 13 20:09:50.876455 containerd[1936]: time="2025-01-13T20:09:50.876369830Z" level=info msg="StartContainer for \"c95d3f948a1681427f19d8c213586be6e14b33c6cbaa070959fbaef176a48e44\" returns successfully" Jan 13 20:09:51.664798 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 13 20:09:51.756274 kubelet[3527]: I0113 20:09:51.756211 3527 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-nqxrb" podStartSLOduration=6.756149043 podStartE2EDuration="6.756149043s" podCreationTimestamp="2025-01-13 20:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:51.754185387 +0000 UTC m=+116.817711330" watchObservedRunningTime="2025-01-13 20:09:51.756149043 +0000 UTC m=+116.819674998" Jan 13 20:09:52.174523 kubelet[3527]: E0113 20:09:52.174428 3527 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-l6q7b" podUID="cfc1d271-5700-4db9-9b46-ca518e090652" Jan 13 20:09:52.383243 systemd[1]: run-containerd-runc-k8s.io-c95d3f948a1681427f19d8c213586be6e14b33c6cbaa070959fbaef176a48e44-runc.IJLvqr.mount: Deactivated successfully. 
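Note: the pod_startup_latency_tracker entry above reports podStartSLOduration = podStartE2EDuration = 6.756149043s for cilium-nqxrb; no image pull was recorded (the pulling timestamps are zero), so the two figures coincide, and the value is simply the gap between podCreationTimestamp (20:09:45) and watchObservedRunningTime (20:09:51.756149043). A quick check with the timestamps copied from that entry:

# Sketch of the arithmetic behind the startup-latency entry above.
from datetime import datetime, timezone

created  = datetime(2025, 1, 13, 20, 9, 45, 0, tzinfo=timezone.utc)       # podCreationTimestamp
observed = datetime(2025, 1, 13, 20, 9, 51, 756149, tzinfo=timezone.utc)  # watchObservedRunningTime (ns truncated)

print((observed - created).total_seconds())   # 6.756149 s, matching podStartE2EDuration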
Jan 13 20:09:54.174052 kubelet[3527]: E0113 20:09:54.173990 3527 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-l6q7b" podUID="cfc1d271-5700-4db9-9b46-ca518e090652" Jan 13 20:09:55.220265 containerd[1936]: time="2025-01-13T20:09:55.220194340Z" level=info msg="StopPodSandbox for \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\"" Jan 13 20:09:55.220872 containerd[1936]: time="2025-01-13T20:09:55.220344568Z" level=info msg="TearDown network for sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" successfully" Jan 13 20:09:55.220872 containerd[1936]: time="2025-01-13T20:09:55.220368772Z" level=info msg="StopPodSandbox for \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" returns successfully" Jan 13 20:09:55.222172 containerd[1936]: time="2025-01-13T20:09:55.221605060Z" level=info msg="RemovePodSandbox for \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\"" Jan 13 20:09:55.222172 containerd[1936]: time="2025-01-13T20:09:55.221703244Z" level=info msg="Forcibly stopping sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\"" Jan 13 20:09:55.222172 containerd[1936]: time="2025-01-13T20:09:55.221881084Z" level=info msg="TearDown network for sandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" successfully" Jan 13 20:09:55.229962 containerd[1936]: time="2025-01-13T20:09:55.229089328Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:09:55.229962 containerd[1936]: time="2025-01-13T20:09:55.229251604Z" level=info msg="RemovePodSandbox \"e5b09054db02fefd0f69b3f6689c96774db87951cc4cb240202cde8f8b7007ca\" returns successfully" Jan 13 20:09:55.230169 containerd[1936]: time="2025-01-13T20:09:55.229987000Z" level=info msg="StopPodSandbox for \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\"" Jan 13 20:09:55.230169 containerd[1936]: time="2025-01-13T20:09:55.230120392Z" level=info msg="TearDown network for sandbox \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\" successfully" Jan 13 20:09:55.230169 containerd[1936]: time="2025-01-13T20:09:55.230143696Z" level=info msg="StopPodSandbox for \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\" returns successfully" Jan 13 20:09:55.230907 containerd[1936]: time="2025-01-13T20:09:55.230839060Z" level=info msg="RemovePodSandbox for \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\"" Jan 13 20:09:55.231012 containerd[1936]: time="2025-01-13T20:09:55.230917924Z" level=info msg="Forcibly stopping sandbox \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\"" Jan 13 20:09:55.231103 containerd[1936]: time="2025-01-13T20:09:55.231019072Z" level=info msg="TearDown network for sandbox \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\" successfully" Jan 13 20:09:55.238170 containerd[1936]: time="2025-01-13T20:09:55.237908332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:09:55.238170 containerd[1936]: time="2025-01-13T20:09:55.237998596Z" level=info msg="RemovePodSandbox \"a5b1e3ebf2689a672bd8ba65d332affc08ae6e4fa98c75f7730a87eb29c85c2a\" returns successfully" Jan 13 20:09:55.856581 systemd-networkd[1853]: lxc_health: Link UP Jan 13 20:09:55.876601 (udev-worker)[6162]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:55.891499 systemd-networkd[1853]: lxc_health: Gained carrier Jan 13 20:09:57.058914 systemd-networkd[1853]: lxc_health: Gained IPv6LL Jan 13 20:09:59.235403 systemd[1]: run-containerd-runc-k8s.io-c95d3f948a1681427f19d8c213586be6e14b33c6cbaa070959fbaef176a48e44-runc.G8OeQd.mount: Deactivated successfully. Jan 13 20:09:59.479918 ntpd[1911]: Listen normally on 15 lxc_health [fe80::fcef:faff:fe94:9b59%14]:123 Jan 13 20:09:59.480444 ntpd[1911]: 13 Jan 20:09:59 ntpd[1911]: Listen normally on 15 lxc_health [fe80::fcef:faff:fe94:9b59%14]:123 Jan 13 20:10:01.549104 systemd[1]: run-containerd-runc-k8s.io-c95d3f948a1681427f19d8c213586be6e14b33c6cbaa070959fbaef176a48e44-runc.nyf7ib.mount: Deactivated successfully. Jan 13 20:10:08.452440 sshd[5320]: Connection closed by 139.178.68.195 port 51528 Jan 13 20:10:08.453464 sshd-session[5318]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:08.460451 systemd[1]: sshd@28-172.31.29.220:22-139.178.68.195:51528.service: Deactivated successfully. Jan 13 20:10:08.464132 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 20:10:08.465568 systemd-logind[1918]: Session 29 logged out. Waiting for processes to exit. Jan 13 20:10:08.467505 systemd-logind[1918]: Removed session 29.
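Note: once the cilium-agent container is running, the stale sandboxes e5b09054... and a5b1e3eb... are torn down and removed (the "not found" warnings appear benign, since RemovePodSandbox still returns successfully), systemd-networkd brings up lxc_health (the interface Cilium uses for its health-check endpoint), and ntpd starts listening on its link-local address. A final sketch, assuming it is run on the node itself, that cross-checks the link state the log reports:

# Sketch (run on the node): confirm the carrier state systemd-networkd logged
# for lxc_health by reading the kernel's operational state directly.
from pathlib import Path

operstate = Path("/sys/class/net/lxc_health/operstate").read_text().strip()
print("lxc_health operstate:", operstate)   # expected "up" once the link gained carrier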