Jan 13 20:07:31.154742 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 20:07:31.154784 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025
Jan 13 20:07:31.154851 kernel: KASLR disabled due to lack of seed
Jan 13 20:07:31.154871 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:07:31.154887 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Jan 13 20:07:31.154903 kernel: secureboot: Secure boot disabled
Jan 13 20:07:31.154921 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:07:31.154936 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 20:07:31.154952 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 13 20:07:31.154967 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:07:31.154989 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 20:07:31.155006 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:07:31.155021 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 20:07:31.155037 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 20:07:31.155055 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 20:07:31.155076 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:07:31.155093 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 20:07:31.155109 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 20:07:31.155126 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 20:07:31.155142 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 20:07:31.155158 kernel: printk: bootconsole [uart0] enabled
Jan 13 20:07:31.155174 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:07:31.155191 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:07:31.155207 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 20:07:31.155222 kernel: Zone ranges:
Jan 13 20:07:31.155238 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:07:31.155259 kernel: DMA32 empty
Jan 13 20:07:31.155276 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 20:07:31.155292 kernel: Movable zone start for each node
Jan 13 20:07:31.155308 kernel: Early memory node ranges
Jan 13 20:07:31.155324 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 20:07:31.155341 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 20:07:31.155357 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 20:07:31.155373 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 20:07:31.155389 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 20:07:31.155405 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 20:07:31.155421 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 20:07:31.155437 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 20:07:31.155458 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:07:31.155476 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 20:07:31.155499 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:07:31.155517 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 20:07:31.155534 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:07:31.155555 kernel: psci: Trusted OS migration not required
Jan 13 20:07:31.155572 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:07:31.155589 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:07:31.155606 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:07:31.155623 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:07:31.155640 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:07:31.155657 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:07:31.155673 kernel: CPU features: detected: Spectre-v2
Jan 13 20:07:31.155690 kernel: CPU features: detected: Spectre-v3a
Jan 13 20:07:31.155707 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:07:31.155723 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 20:07:31.155740 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 20:07:31.155761 kernel: alternatives: applying boot alternatives
Jan 13 20:07:31.155780 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:07:31.155799 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:07:31.155860 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:07:31.155878 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:07:31.155895 kernel: Fallback order for Node 0: 0
Jan 13 20:07:31.155912 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 13 20:07:31.155929 kernel: Policy zone: Normal
Jan 13 20:07:31.155946 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:07:31.155964 kernel: software IO TLB: area num 2.
Jan 13 20:07:31.155987 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 20:07:31.156005 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Jan 13 20:07:31.156023 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:07:31.156040 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:07:31.156058 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:07:31.156075 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:07:31.156093 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:07:31.156110 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:07:31.156128 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:07:31.158097 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:07:31.158115 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:07:31.158140 kernel: GICv3: 96 SPIs implemented
Jan 13 20:07:31.158158 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:07:31.158175 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:07:31.158192 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 20:07:31.158209 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 20:07:31.158226 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 20:07:31.158243 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:07:31.158261 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:07:31.158278 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 20:07:31.158295 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 20:07:31.158311 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 20:07:31.158329 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:07:31.158351 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 20:07:31.158369 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 20:07:31.158386 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 20:07:31.158403 kernel: Console: colour dummy device 80x25
Jan 13 20:07:31.158420 kernel: printk: console [tty1] enabled
Jan 13 20:07:31.158438 kernel: ACPI: Core revision 20230628
Jan 13 20:07:31.158455 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 20:07:31.158473 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:07:31.158491 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:07:31.158508 kernel: landlock: Up and running.
Jan 13 20:07:31.158530 kernel: SELinux: Initializing.
Jan 13 20:07:31.158548 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:07:31.158565 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:07:31.158582 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:07:31.158600 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:07:31.158618 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:07:31.158635 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:07:31.158653 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 20:07:31.158674 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 20:07:31.158692 kernel: Remapping and enabling EFI services.
Jan 13 20:07:31.158709 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:07:31.158726 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:07:31.158743 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 20:07:31.158761 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 20:07:31.158778 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 20:07:31.158795 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:07:31.158846 kernel: SMP: Total of 2 processors activated.
Jan 13 20:07:31.158866 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:07:31.158890 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 20:07:31.158908 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:07:31.158936 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:07:31.158959 kernel: alternatives: applying system-wide alternatives
Jan 13 20:07:31.158977 kernel: devtmpfs: initialized
Jan 13 20:07:31.158995 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:07:31.159013 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:07:31.159031 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:07:31.159049 kernel: SMBIOS 3.0.0 present.
Jan 13 20:07:31.159071 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 20:07:31.159089 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:07:31.159107 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:07:31.159125 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:07:31.159143 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:07:31.159161 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:07:31.159180 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Jan 13 20:07:31.159202 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:07:31.159221 kernel: cpuidle: using governor menu
Jan 13 20:07:31.159239 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:07:31.159257 kernel: ASID allocator initialised with 65536 entries
Jan 13 20:07:31.159274 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:07:31.159292 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:07:31.159310 kernel: Modules: 17360 pages in range for non-PLT usage
Jan 13 20:07:31.159328 kernel: Modules: 508880 pages in range for PLT usage
Jan 13 20:07:31.159346 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:07:31.159368 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:07:31.159387 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:07:31.159405 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:07:31.159423 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:07:31.159441 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:07:31.159459 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:07:31.159477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:07:31.159494 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:07:31.159512 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:07:31.159534 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:07:31.159553 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:07:31.159571 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:07:31.159589 kernel: ACPI: Interpreter enabled
Jan 13 20:07:31.159607 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:07:31.159625 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:07:31.159643 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 20:07:31.159956 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:07:31.160228 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:07:31.160512 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:07:31.160708 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 20:07:31.160946 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 20:07:31.160972 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 13 20:07:31.161010 kernel: acpiphp: Slot [1] registered
Jan 13 20:07:31.161029 kernel: acpiphp: Slot [2] registered
Jan 13 20:07:31.161047 kernel: acpiphp: Slot [3] registered
Jan 13 20:07:31.161072 kernel: acpiphp: Slot [4] registered
Jan 13 20:07:31.161090 kernel: acpiphp: Slot [5] registered
Jan 13 20:07:31.161108 kernel: acpiphp: Slot [6] registered
Jan 13 20:07:31.161126 kernel: acpiphp: Slot [7] registered
Jan 13 20:07:31.161143 kernel: acpiphp: Slot [8] registered
Jan 13 20:07:31.161161 kernel: acpiphp: Slot [9] registered
Jan 13 20:07:31.161179 kernel: acpiphp: Slot [10] registered
Jan 13 20:07:31.161197 kernel: acpiphp: Slot [11] registered
Jan 13 20:07:31.161214 kernel: acpiphp: Slot [12] registered
Jan 13 20:07:31.161232 kernel: acpiphp: Slot [13] registered
Jan 13 20:07:31.161255 kernel: acpiphp: Slot [14] registered
Jan 13 20:07:31.161273 kernel: acpiphp: Slot [15] registered
Jan 13 20:07:31.161290 kernel: acpiphp: Slot [16] registered
Jan 13 20:07:31.161308 kernel: acpiphp: Slot [17] registered
Jan 13 20:07:31.161326 kernel: acpiphp: Slot [18] registered
Jan 13 20:07:31.161344 kernel: acpiphp: Slot [19] registered
Jan 13 20:07:31.161362 kernel: acpiphp: Slot [20] registered
Jan 13 20:07:31.161380 kernel: acpiphp: Slot [21] registered
Jan 13 20:07:31.161398 kernel: acpiphp: Slot [22] registered
Jan 13 20:07:31.161420 kernel: acpiphp: Slot [23] registered
Jan 13 20:07:31.161438 kernel: acpiphp: Slot [24] registered
Jan 13 20:07:31.161456 kernel: acpiphp: Slot [25] registered
Jan 13 20:07:31.161473 kernel: acpiphp: Slot [26] registered
Jan 13 20:07:31.161491 kernel: acpiphp: Slot [27] registered
Jan 13 20:07:31.161509 kernel: acpiphp: Slot [28] registered
Jan 13 20:07:31.161527 kernel: acpiphp: Slot [29] registered
Jan 13 20:07:31.161544 kernel: acpiphp: Slot [30] registered
Jan 13 20:07:31.161562 kernel: acpiphp: Slot [31] registered
Jan 13 20:07:31.161579 kernel: PCI host bridge to bus 0000:00
Jan 13 20:07:31.161788 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 20:07:31.161997 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:07:31.162174 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:07:31.162355 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 20:07:31.162590 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 20:07:31.162834 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 20:07:31.165387 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 20:07:31.165617 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:07:31.165884 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 20:07:31.166092 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:07:31.166305 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:07:31.166503 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 20:07:31.166709 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 20:07:31.166954 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 20:07:31.167164 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:07:31.167370 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 20:07:31.167575 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 20:07:31.167780 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 20:07:31.168009 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 20:07:31.168223 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 20:07:31.168419 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 20:07:31.168598 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:07:31.171131 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:07:31.171169 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:07:31.171188 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:07:31.171207 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:07:31.171226 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:07:31.171244 kernel: iommu: Default domain type: Translated
Jan 13 20:07:31.171272 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:07:31.171290 kernel: efivars: Registered efivars operations
Jan 13 20:07:31.171308 kernel: vgaarb: loaded
Jan 13 20:07:31.171326 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:07:31.171344 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:07:31.171362 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:07:31.171382 kernel: pnp: PnP ACPI init
Jan 13 20:07:31.171606 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 20:07:31.171638 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:07:31.171656 kernel: NET: Registered PF_INET protocol family
Jan 13 20:07:31.171675 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:07:31.171693 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:07:31.171711 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:07:31.171729 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:07:31.171748 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:07:31.171766 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:07:31.171784 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:07:31.171916 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:07:31.171940 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:07:31.171959 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:07:31.171976 kernel: kvm [1]: HYP mode not available
Jan 13 20:07:31.171995 kernel: Initialise system trusted keyrings
Jan 13 20:07:31.172013 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:07:31.172031 kernel: Key type asymmetric registered
Jan 13 20:07:31.172049 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:07:31.172067 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:07:31.172091 kernel: io scheduler mq-deadline registered
Jan 13 20:07:31.172109 kernel: io scheduler kyber registered
Jan 13 20:07:31.172127 kernel: io scheduler bfq registered
Jan 13 20:07:31.172339 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 20:07:31.172367 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:07:31.172385 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:07:31.172404 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 20:07:31.172422 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 20:07:31.172447 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:07:31.172467 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:07:31.172676 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 20:07:31.172743 kernel: printk: console [ttyS0] disabled
Jan 13 20:07:31.172800 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 20:07:31.173286 kernel: printk: console [ttyS0] enabled
Jan 13 20:07:31.173315 kernel: printk: bootconsole [uart0] disabled
Jan 13 20:07:31.173334 kernel: thunder_xcv, ver 1.0
Jan 13 20:07:31.173352 kernel: thunder_bgx, ver 1.0
Jan 13 20:07:31.173370 kernel: nicpf, ver 1.0
Jan 13 20:07:31.173399 kernel: nicvf, ver 1.0
Jan 13 20:07:31.173640 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:07:31.173906 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:07:30 UTC (1736798850)
Jan 13 20:07:31.173935 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:07:31.173955 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 20:07:31.173974 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:07:31.173993 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:07:31.174019 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:07:31.174038 kernel: Segment Routing with IPv6
Jan 13 20:07:31.174056 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:07:31.174074 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:07:31.174092 kernel: Key type dns_resolver registered
Jan 13 20:07:31.174110 kernel: registered taskstats version 1
Jan 13 20:07:31.174128 kernel: Loading compiled-in X.509 certificates
Jan 13 20:07:31.174147 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0'
Jan 13 20:07:31.174165 kernel: Key type .fscrypt registered
Jan 13 20:07:31.174183 kernel: Key type fscrypt-provisioning registered
Jan 13 20:07:31.174206 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:07:31.174224 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:07:31.174242 kernel: ima: No architecture policies found
Jan 13 20:07:31.174260 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:07:31.174278 kernel: clk: Disabling unused clocks
Jan 13 20:07:31.174296 kernel: Freeing unused kernel memory: 39936K
Jan 13 20:07:31.174314 kernel: Run /init as init process
Jan 13 20:07:31.174333 kernel: with arguments:
Jan 13 20:07:31.174351 kernel: /init
Jan 13 20:07:31.174373 kernel: with environment:
Jan 13 20:07:31.174391 kernel: HOME=/
Jan 13 20:07:31.174410 kernel: TERM=linux
Jan 13 20:07:31.174428 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:07:31.174451 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:07:31.174474 systemd[1]: Detected virtualization amazon.
Jan 13 20:07:31.174494 systemd[1]: Detected architecture arm64.
Jan 13 20:07:31.174519 systemd[1]: Running in initrd.
Jan 13 20:07:31.174539 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:07:31.174558 systemd[1]: Hostname set to <localhost>.
Jan 13 20:07:31.174578 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:07:31.174598 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:07:31.174618 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:07:31.174638 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:07:31.174659 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:07:31.174684 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:07:31.174705 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:07:31.174725 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:07:31.174747 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:07:31.174768 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:07:31.174788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:07:31.174830 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:07:31.174889 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:07:31.174910 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:07:31.174931 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:07:31.174952 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:07:31.174973 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:07:31.174993 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:07:31.175014 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:07:31.175033 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:07:31.175053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:07:31.175078 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:07:31.175098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:07:31.175118 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:07:31.175137 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:07:31.175158 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:07:31.175177 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:07:31.175197 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:07:31.175217 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:07:31.175241 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:07:31.175261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:07:31.175322 systemd-journald[251]: Collecting audit messages is disabled.
Jan 13 20:07:31.175366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:07:31.175392 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:07:31.175412 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:07:31.175433 systemd-journald[251]: Journal started
Jan 13 20:07:31.175481 systemd-journald[251]: Runtime Journal (/run/log/journal/ec20f47b6a7fbabbf53b8859810b5fa0) is 8.0M, max 75.3M, 67.3M free.
Jan 13 20:07:31.176278 systemd-modules-load[252]: Inserted module 'overlay'
Jan 13 20:07:31.193855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:07:31.193922 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:07:31.204863 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:07:31.207837 kernel: Bridge firewalling registered
Jan 13 20:07:31.207781 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 13 20:07:31.211177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:07:31.216566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:07:31.230188 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:07:31.236191 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:07:31.243132 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:07:31.248525 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:07:31.268133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:07:31.287877 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:07:31.307066 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:07:31.316752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:07:31.329579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:07:31.348095 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:07:31.351570 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:07:31.361701 dracut-cmdline[284]: dracut-dracut-053
Jan 13 20:07:31.371377 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:07:31.440743 systemd-resolved[290]: Positive Trust Anchors:
Jan 13 20:07:31.440782 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:07:31.440875 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:07:31.512847 kernel: SCSI subsystem initialized
Jan 13 20:07:31.520836 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:07:31.531871 kernel: iscsi: registered transport (tcp)
Jan 13 20:07:31.554076 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:07:31.554150 kernel: QLogic iSCSI HBA Driver
Jan 13 20:07:31.653841 kernel: random: crng init done
Jan 13 20:07:31.653078 systemd-resolved[290]: Defaulting to hostname 'linux'.
Jan 13 20:07:31.657133 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:07:31.673304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:07:31.679456 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:07:31.689171 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:07:31.725417 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:07:31.725501 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:07:31.725528 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:07:31.789865 kernel: raid6: neonx8 gen() 6503 MB/s
Jan 13 20:07:31.806839 kernel: raid6: neonx4 gen() 6442 MB/s
Jan 13 20:07:31.823838 kernel: raid6: neonx2 gen() 5396 MB/s
Jan 13 20:07:31.840839 kernel: raid6: neonx1 gen() 3919 MB/s
Jan 13 20:07:31.857838 kernel: raid6: int64x8 gen() 3593 MB/s
Jan 13 20:07:31.874839 kernel: raid6: int64x4 gen() 3698 MB/s
Jan 13 20:07:31.891839 kernel: raid6: int64x2 gen() 3590 MB/s
Jan 13 20:07:31.909763 kernel: raid6: int64x1 gen() 2761 MB/s
Jan 13 20:07:31.909796 kernel: raid6: using algorithm neonx8 gen() 6503 MB/s
Jan 13 20:07:31.927842 kernel: raid6: .... xor() 4814 MB/s, rmw enabled
Jan 13 20:07:31.927877 kernel: raid6: using neon recovery algorithm
Jan 13 20:07:31.934842 kernel: xor: measuring software checksum speed
Jan 13 20:07:31.935842 kernel: 8regs : 11928 MB/sec
Jan 13 20:07:31.936838 kernel: 32regs : 11912 MB/sec
Jan 13 20:07:31.938837 kernel: arm64_neon : 8698 MB/sec
Jan 13 20:07:31.938880 kernel: xor: using function: 8regs (11928 MB/sec)
Jan 13 20:07:32.020860 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:07:32.039671 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:07:32.053160 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:07:32.092889 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jan 13 20:07:32.101998 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:07:32.113442 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:07:32.144206 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Jan 13 20:07:32.199189 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:07:32.209134 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:07:32.328155 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:07:32.342514 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:07:32.387933 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:07:32.393092 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:07:32.397950 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:07:32.402381 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:07:32.421309 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:07:32.449953 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:07:32.514531 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:07:32.514606 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 13 20:07:32.564046 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:07:32.564295 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:07:32.564524 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:65:78:3b:e8:45
Jan 13 20:07:32.564752 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:07:32.529500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:07:32.529714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:07:32.570694 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:07:32.532349 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:07:32.545342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:07:32.545612 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:07:32.547955 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:07:32.570661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:07:32.589427 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:07:32.601981 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:07:32.602043 kernel: GPT:9289727 != 16777215
Jan 13 20:07:32.603207 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:07:32.603993 kernel: GPT:9289727 != 16777215
Jan 13 20:07:32.605176 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:07:32.606186 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:07:32.610433 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:07:32.620348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:07:32.636233 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:07:32.683923 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:07:32.761843 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (517)
Jan 13 20:07:32.771849 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (544)
Jan 13 20:07:32.813193 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:07:32.844544 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:07:32.891280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:07:32.907080 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:07:32.909576 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:07:32.932140 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:07:32.945267 disk-uuid[662]: Primary Header is updated.
Jan 13 20:07:32.945267 disk-uuid[662]: Secondary Entries is updated.
Jan 13 20:07:32.945267 disk-uuid[662]: Secondary Header is updated.
Jan 13 20:07:32.954836 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:07:33.972855 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:07:33.975433 disk-uuid[663]: The operation has completed successfully.
Jan 13 20:07:34.151911 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:07:34.154075 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:07:34.201134 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:07:34.217714 sh[923]: Success
Jan 13 20:07:34.242057 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:07:34.361325 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:07:34.377059 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:07:34.386330 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:07:34.410637 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2
Jan 13 20:07:34.410708 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:07:34.410735 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:07:34.412075 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:07:34.413167 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:07:34.501825 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:07:34.521999 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:07:34.525869 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:07:34.539038 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:07:34.547493 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:07:34.573356 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:07:34.573427 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:07:34.573464 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:07:34.581890 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:07:34.599774 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:07:34.602021 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:07:34.612995 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:07:34.623230 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:07:34.724433 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:07:34.736091 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:07:34.799159 systemd-networkd[1115]: lo: Link UP
Jan 13 20:07:34.799181 systemd-networkd[1115]: lo: Gained carrier
Jan 13 20:07:34.803779 systemd-networkd[1115]: Enumeration completed
Jan 13 20:07:34.803979 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:07:34.806153 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:07:34.806160 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:07:34.806189 systemd[1]: Reached target network.target - Network.
Jan 13 20:07:34.813253 systemd-networkd[1115]: eth0: Link UP
Jan 13 20:07:34.813261 systemd-networkd[1115]: eth0: Gained carrier
Jan 13 20:07:34.813278 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:07:34.852898 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.28.169/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:07:35.021889 ignition[1024]: Ignition 2.20.0
Jan 13 20:07:35.021911 ignition[1024]: Stage: fetch-offline
Jan 13 20:07:35.022331 ignition[1024]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:35.022355 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:35.023792 ignition[1024]: Ignition finished successfully
Jan 13 20:07:35.033038 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:07:35.044126 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:07:35.067789 ignition[1125]: Ignition 2.20.0
Jan 13 20:07:35.067844 ignition[1125]: Stage: fetch
Jan 13 20:07:35.069157 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:35.069208 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:35.069426 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:35.091760 ignition[1125]: PUT result: OK
Jan 13 20:07:35.094683 ignition[1125]: parsed url from cmdline: ""
Jan 13 20:07:35.094840 ignition[1125]: no config URL provided
Jan 13 20:07:35.094860 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:07:35.094885 ignition[1125]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:07:35.094916 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:35.097276 ignition[1125]: PUT result: OK
Jan 13 20:07:35.097350 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:07:35.102224 ignition[1125]: GET result: OK
Jan 13 20:07:35.103554 ignition[1125]: parsing config with SHA512: bbdfe31aa6cbea5270cf7a31d795a36d2165230ab0a1fdb04c9543a651997061352996b51b920c1bb850fcf812b45a44a1f2c581914ed5dad126a2edcb6c60a1
Jan 13 20:07:35.115231 unknown[1125]: fetched base config from "system"
Jan 13 20:07:35.115260 unknown[1125]: fetched base config from "system"
Jan 13 20:07:35.115274 unknown[1125]: fetched user config from "aws"
Jan 13 20:07:35.119971 ignition[1125]: fetch: fetch complete
Jan 13 20:07:35.119985 ignition[1125]: fetch: fetch passed
Jan 13 20:07:35.120104 ignition[1125]: Ignition finished successfully
Jan 13 20:07:35.124740 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:07:35.137192 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:07:35.163542 ignition[1132]: Ignition 2.20.0
Jan 13 20:07:35.163563 ignition[1132]: Stage: kargs
Jan 13 20:07:35.164147 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:35.164172 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:35.164691 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:35.167185 ignition[1132]: PUT result: OK
Jan 13 20:07:35.176831 ignition[1132]: kargs: kargs passed
Jan 13 20:07:35.176931 ignition[1132]: Ignition finished successfully
Jan 13 20:07:35.182871 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:07:35.193140 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:07:35.222056 ignition[1138]: Ignition 2.20.0
Jan 13 20:07:35.222085 ignition[1138]: Stage: disks
Jan 13 20:07:35.223678 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:35.223704 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:35.224392 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:35.226852 ignition[1138]: PUT result: OK
Jan 13 20:07:35.235216 ignition[1138]: disks: disks passed
Jan 13 20:07:35.235491 ignition[1138]: Ignition finished successfully
Jan 13 20:07:35.241872 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:07:35.244461 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:07:35.246787 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:07:35.250820 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:07:35.254619 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:07:35.258559 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:07:35.270118 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:07:35.318281 systemd-fsck[1147]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:07:35.325166 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:07:35.337030 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:07:35.435839 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none.
Jan 13 20:07:35.436979 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:07:35.440534 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:07:35.455995 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:07:35.467305 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:07:35.471724 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:07:35.475123 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:07:35.475187 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:07:35.488199 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:07:35.494550 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:07:35.507846 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1166)
Jan 13 20:07:35.512299 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:07:35.512350 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:07:35.512377 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:07:35.527199 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:07:35.528903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:07:35.872196 initrd-setup-root[1190]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:07:35.907255 initrd-setup-root[1197]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:07:35.915356 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:07:35.924300 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:07:36.242031 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:07:36.258976 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:07:36.265111 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:07:36.280679 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:07:36.285855 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:07:36.323960 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:07:36.334782 ignition[1278]: INFO : Ignition 2.20.0
Jan 13 20:07:36.336715 ignition[1278]: INFO : Stage: mount
Jan 13 20:07:36.338635 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:36.340550 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:36.342869 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:36.346018 ignition[1278]: INFO : PUT result: OK
Jan 13 20:07:36.350506 ignition[1278]: INFO : mount: mount passed
Jan 13 20:07:36.353291 ignition[1278]: INFO : Ignition finished successfully
Jan 13 20:07:36.355722 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:07:36.367129 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:07:36.446145 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:07:36.480849 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1291)
Jan 13 20:07:36.484662 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:07:36.484708 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:07:36.484734 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:07:36.490842 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:07:36.494216 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:07:36.532684 ignition[1308]: INFO : Ignition 2.20.0
Jan 13 20:07:36.532684 ignition[1308]: INFO : Stage: files
Jan 13 20:07:36.535953 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:36.535953 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:36.535953 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:36.542530 ignition[1308]: INFO : PUT result: OK
Jan 13 20:07:36.546969 ignition[1308]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:07:36.560611 ignition[1308]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:07:36.560611 ignition[1308]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:07:36.588201 ignition[1308]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:07:36.591220 ignition[1308]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:07:36.594038 unknown[1308]: wrote ssh authorized keys file for user: core
Jan 13 20:07:36.596408 ignition[1308]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:07:36.607030 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:07:36.607030 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:07:36.684874 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:07:36.729946 systemd-networkd[1115]: eth0: Gained IPv6LL
Jan 13 20:07:36.845690 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:07:36.845690 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:07:36.853008 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 20:07:37.185738 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:07:37.312381 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:07:37.312381 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:07:37.319007 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 13 20:07:37.623287 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:07:37.943871 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:07:37.943871 ignition[1308]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:07:37.951063 ignition[1308]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:07:37.955413 ignition[1308]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:07:37.955413 ignition[1308]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:07:37.955413 ignition[1308]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:07:37.955413 ignition[1308]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:07:37.966297 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:07:37.966297 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:07:37.966297 ignition[1308]: INFO : files: files passed
Jan 13 20:07:37.966297 ignition[1308]: INFO : Ignition finished successfully
Jan 13 20:07:37.977539 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:07:37.989199 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:07:37.999139 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:07:38.011075 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:07:38.013142 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:07:38.027181 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:07:38.027181 initrd-setup-root-after-ignition[1336]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:07:38.035739 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:07:38.042919 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:07:38.047969 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:07:38.068240 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:07:38.116534 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:07:38.116733 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:07:38.121077 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:07:38.124925 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:07:38.133020 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:07:38.149151 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:07:38.175040 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:07:38.191074 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:07:38.215853 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:07:38.218662 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:07:38.221276 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:07:38.228718 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:07:38.229004 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:07:38.231615 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:07:38.233703 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:07:38.235526 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:07:38.238011 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:07:38.241257 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:07:38.255800 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:07:38.257795 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:07:38.260479 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:07:38.268513 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:07:38.270776 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:07:38.275314 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:07:38.275533 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:07:38.277898 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:07:38.280122 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:07:38.283473 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 13 20:07:38.284272 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:07:38.286726 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:07:38.287111 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:07:38.305091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:07:38.305508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:07:38.312846 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:07:38.313080 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:07:38.333273 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:07:38.338927 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:07:38.339217 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:07:38.351131 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:07:38.357031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:07:38.359399 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:07:38.363109 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:07:38.363330 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:07:38.381612 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:07:38.383533 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:07:38.388562 ignition[1360]: INFO : Ignition 2.20.0 Jan 13 20:07:38.391713 ignition[1360]: INFO : Stage: umount Jan 13 20:07:38.391713 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:07:38.391713 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:07:38.391713 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:07:38.402980 ignition[1360]: INFO : PUT result: OK Jan 13 20:07:38.409142 ignition[1360]: INFO : umount: umount passed Jan 13 20:07:38.409142 ignition[1360]: INFO : Ignition finished successfully Jan 13 20:07:38.414447 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:07:38.417062 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:07:38.421631 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:07:38.421787 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:07:38.424148 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:07:38.424256 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:07:38.428077 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:07:38.428176 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:07:38.431381 systemd[1]: Stopped target network.target - Network. Jan 13 20:07:38.449345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:07:38.449451 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:07:38.449731 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:07:38.450253 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:07:38.454032 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 20:07:38.459763 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:07:38.460223 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:07:38.460581 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:07:38.460659 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:07:38.461157 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:07:38.461225 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:07:38.468353 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:07:38.468445 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:07:38.470379 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:07:38.470458 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:07:38.473117 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:07:38.475449 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:07:38.481563 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:07:38.482653 systemd-networkd[1115]: eth0: DHCPv6 lease lost Jan 13 20:07:38.483987 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:07:38.484222 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:07:38.533017 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:07:38.533219 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:07:38.542339 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:07:38.543872 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:07:38.548752 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:07:38.549873 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:07:38.554651 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:07:38.554756 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:07:38.580087 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:07:38.582742 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:07:38.582928 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:07:38.590405 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:07:38.590510 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:07:38.592801 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:07:38.592901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:07:38.594930 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:07:38.595003 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:07:38.597693 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:07:38.627593 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:07:38.627794 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:07:38.644488 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:07:38.644764 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:07:38.652907 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 13 20:07:38.653042 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:07:38.656932 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:07:38.657018 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:07:38.659093 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:07:38.659328 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:07:38.663183 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:07:38.663272 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:07:38.673289 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:07:38.673379 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:07:38.690167 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:07:38.696070 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:07:38.696193 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:07:38.701579 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:07:38.701687 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:07:38.710080 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:07:38.710315 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:07:38.714351 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:07:38.740174 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:07:38.757496 systemd[1]: Switching root. Jan 13 20:07:38.807847 systemd-journald[251]: Journal stopped Jan 13 20:07:41.219432 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 13 20:07:41.219572 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:07:41.219621 kernel: SELinux: policy capability open_perms=1 Jan 13 20:07:41.219651 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:07:41.219682 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:07:41.219713 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:07:41.219749 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:07:41.219779 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:07:41.219825 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:07:41.219861 kernel: audit: type=1403 audit(1736798859.412:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:07:41.219903 systemd[1]: Successfully loaded SELinux policy in 82.164ms. Jan 13 20:07:41.219957 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.338ms. Jan 13 20:07:41.219992 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:07:41.220024 systemd[1]: Detected virtualization amazon. Jan 13 20:07:41.220056 systemd[1]: Detected architecture arm64. Jan 13 20:07:41.220090 systemd[1]: Detected first boot. Jan 13 20:07:41.220124 systemd[1]: Initializing machine ID from VM UUID. 
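"Initializing machine ID from VM UUID" is systemd's first-boot path: /etc/machine-id does not yet exist on the fresh image, so the ID is seeded from the hypervisor-provided UUID and only written to disk later by systemd-machine-id-commit.service (which appears further down). Assuming the usual SMBIOS exposure on EC2, the two sides of that handoff are:

    # where the first-boot machine ID comes from on a VM (illustrative)
    cat /sys/class/dmi/id/product_uuid   # hypervisor-supplied UUID (SMBIOS)
    cat /etc/machine-id                  # transient at this point; persisted by systemd-machine-id-commit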
Jan 13 20:07:41.220155 zram_generator::config[1403]: No configuration found. Jan 13 20:07:41.220191 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:07:41.220222 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:07:41.220253 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:07:41.220287 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:07:41.220328 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:07:41.220368 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:07:41.220400 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:07:41.220434 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:07:41.220466 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:07:41.220497 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:07:41.220530 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:07:41.220560 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:07:41.220596 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:07:41.220631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:07:41.220660 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:07:41.220693 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:07:41.220725 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:07:41.220754 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:07:41.220782 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:07:41.223020 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:07:41.223062 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:07:41.223091 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:07:41.223130 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:07:41.223162 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:07:41.223195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:07:41.223226 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:07:41.223256 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:07:41.223287 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:07:41.223319 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:07:41.223351 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:07:41.223384 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:07:41.223416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:07:41.223448 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:07:41.223477 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 13 20:07:41.223506 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:07:41.223540 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:07:41.223580 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:07:41.223610 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:07:41.223643 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:07:41.223677 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:07:41.223708 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:07:41.223739 systemd[1]: Reached target machines.target - Containers. Jan 13 20:07:41.223779 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:07:41.223853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:07:41.223890 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:07:41.223919 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:07:41.223948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:41.223977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:07:41.224010 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:41.224039 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:07:41.224068 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:07:41.224099 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:07:41.224131 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:07:41.224160 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:07:41.224190 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:07:41.224219 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:07:41.224252 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:07:41.224281 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:07:41.224310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:07:41.224337 kernel: fuse: init (API version 7.39) Jan 13 20:07:41.224365 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:07:41.224394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:07:41.224424 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:07:41.224453 systemd[1]: Stopped verity-setup.service. Jan 13 20:07:41.224484 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:07:41.224519 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:07:41.224548 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:07:41.224576 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:07:41.224607 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
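The modprobe@configfs, modprobe@dm_mod, modprobe@drm, ... jobs starting here are all instances of one template unit, with the module name substituted for the instance specifier. Abridged from the stock systemd unit (paraphrased, not copied from this host):

    # /usr/lib/systemd/system/modprobe@.service (abridged)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %I

The leading '-' on ExecStart tolerates failure, so a module that cannot be loaded does not fail the boot transaction.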
Jan 13 20:07:41.224638 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:07:41.224669 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:07:41.224702 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:07:41.224731 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:07:41.224760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:41.224789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:41.224840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:07:41.224874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:07:41.224907 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:07:41.224963 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:07:41.225036 systemd-journald[1481]: Collecting audit messages is disabled. Jan 13 20:07:41.225089 kernel: loop: module loaded Jan 13 20:07:41.225123 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:07:41.225156 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:07:41.225184 systemd-journald[1481]: Journal started Jan 13 20:07:41.225237 systemd-journald[1481]: Runtime Journal (/run/log/journal/ec20f47b6a7fbabbf53b8859810b5fa0) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:07:41.225313 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:07:40.620503 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:07:40.696076 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:07:40.696851 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:07:41.233679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:07:41.242868 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:07:41.247837 kernel: ACPI: bus type drm_connector registered Jan 13 20:07:41.252983 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:07:41.253422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:07:41.258367 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:07:41.265320 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:07:41.268500 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:07:41.272900 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:07:41.276209 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:07:41.278996 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:07:41.304125 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:07:41.306716 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:07:41.306781 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:07:41.311178 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:07:41.322266 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 13 20:07:41.334677 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:07:41.337173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:41.346276 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:07:41.353600 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:07:41.355900 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:07:41.358930 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:07:41.361114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:07:41.363878 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:07:41.370177 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:07:41.377230 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:07:41.382923 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:07:41.454618 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:07:41.457416 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:07:41.472242 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:07:41.487109 kernel: loop0: detected capacity change from 0 to 53784 Jan 13 20:07:41.492274 systemd-journald[1481]: Time spent on flushing to /var/log/journal/ec20f47b6a7fbabbf53b8859810b5fa0 is 123.318ms for 913 entries. Jan 13 20:07:41.492274 systemd-journald[1481]: System Journal (/var/log/journal/ec20f47b6a7fbabbf53b8859810b5fa0) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:07:41.633343 systemd-journald[1481]: Received client request to flush runtime journal. Jan 13 20:07:41.633445 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:07:41.542173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:07:41.569337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:07:41.589798 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:07:41.621165 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:07:41.634948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:07:41.641830 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:07:41.654843 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:07:41.661417 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:07:41.670846 kernel: loop1: detected capacity change from 0 to 189592 Jan 13 20:07:41.677336 udevadm[1548]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:07:41.747925 kernel: loop2: detected capacity change from 0 to 116784 Jan 13 20:07:41.753638 systemd-tmpfiles[1550]: ACLs are not supported, ignoring. Jan 13 20:07:41.753676 systemd-tmpfiles[1550]: ACLs are not supported, ignoring. 
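The journal flush above moves the runtime journal in /run/log/journal over to persistent storage under /var/log/journal once the root filesystem is writable; the 75.3M and 195.6M caps logged here are computed from the filesystem sizes rather than configured. To pin them explicitly, the knobs would be (illustrative values, not settings present on this host):

    # /etc/systemd/journald.conf (illustrative)
    [Journal]
    Storage=persistent
    RuntimeMaxUse=75M
    SystemMaxUse=195M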
Jan 13 20:07:41.763647 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:07:41.885943 kernel: loop3: detected capacity change from 0 to 113552 Jan 13 20:07:42.028893 kernel: loop4: detected capacity change from 0 to 53784 Jan 13 20:07:42.052053 kernel: loop5: detected capacity change from 0 to 189592 Jan 13 20:07:42.084846 kernel: loop6: detected capacity change from 0 to 116784 Jan 13 20:07:42.101849 kernel: loop7: detected capacity change from 0 to 113552 Jan 13 20:07:42.118912 (sd-merge)[1559]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 20:07:42.119928 (sd-merge)[1559]: Merged extensions into '/usr'. Jan 13 20:07:42.127291 systemd[1]: Reloading requested from client PID 1534 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:07:42.127322 systemd[1]: Reloading... Jan 13 20:07:42.313861 zram_generator::config[1586]: No configuration found. Jan 13 20:07:42.641644 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:42.752692 systemd[1]: Reloading finished in 624 ms. Jan 13 20:07:42.792322 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:07:42.796741 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:07:42.812129 systemd[1]: Starting ensure-sysext.service... Jan 13 20:07:42.828045 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:07:42.835155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:07:42.857120 systemd[1]: Reloading requested from client PID 1637 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:07:42.857150 systemd[1]: Reloading... Jan 13 20:07:42.905284 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:07:42.910011 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:07:42.910669 systemd-udevd[1639]: Using default interface naming scheme 'v255'. Jan 13 20:07:42.913979 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:07:42.914541 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jan 13 20:07:42.914703 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jan 13 20:07:42.932013 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:07:42.932040 systemd-tmpfiles[1638]: Skipping /boot Jan 13 20:07:42.994006 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:07:42.994035 systemd-tmpfiles[1638]: Skipping /boot Jan 13 20:07:43.050846 zram_generator::config[1669]: No configuration found. Jan 13 20:07:43.099066 ldconfig[1529]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:07:43.236043 (udev-worker)[1667]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:07:43.455783 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
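The (sd-merge) lines are systemd-sysext merging the staged extension images: the kubernetes.raw symlink Ignition wrote under /etc/extensions, plus the baked-in containerd/docker/oem images, each an immutable /usr overlay. The loop0-loop7 capacity changes above are those images being attached, and the merge is what forces the service reload that follows. An image is only accepted if it carries a matching extension-release file; roughly (field values assumed, in the shape Flatcar sysext images usually use):

    # usr/lib/extension-release.d/extension-release.kubernetes (inside the image; illustrative)
    ID=flatcar
    SYSEXT_LEVEL=1.0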
Jan 13 20:07:43.476886 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1724) Jan 13 20:07:43.602520 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:07:43.602988 systemd[1]: Reloading finished in 745 ms. Jan 13 20:07:43.631552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:07:43.637707 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:07:43.640440 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:07:43.728130 systemd[1]: Finished ensure-sysext.service. Jan 13 20:07:43.773319 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:07:43.787760 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:07:43.790301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:07:43.794773 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:43.800144 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:07:43.814059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:43.822127 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:07:43.824243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:43.829154 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:07:43.838187 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:07:43.851113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:07:43.853287 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:07:43.860146 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:07:43.866075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:07:43.870981 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:07:43.873988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:43.874322 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:43.877255 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:07:43.877565 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:07:43.880277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:07:43.881303 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:07:43.892420 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:07:43.903341 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:07:43.904974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:07:43.938216 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:07:43.947149 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 13 20:07:43.950015 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:07:43.950145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:07:43.961154 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:07:43.984941 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:07:43.996160 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:07:44.006290 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:07:44.027165 lvm[1859]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:07:44.066122 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:07:44.076513 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:07:44.079332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:07:44.089084 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:07:44.126922 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:07:44.136304 augenrules[1883]: No rules Jan 13 20:07:44.137232 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:07:44.137703 lvm[1879]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:07:44.137963 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:07:44.149573 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:07:44.152301 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:07:44.174878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:07:44.191004 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:07:44.195626 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:07:44.298261 systemd-networkd[1842]: lo: Link UP Jan 13 20:07:44.298776 systemd-networkd[1842]: lo: Gained carrier Jan 13 20:07:44.301700 systemd-networkd[1842]: Enumeration completed Jan 13 20:07:44.301968 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:07:44.306719 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:07:44.306728 systemd-networkd[1842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:07:44.307286 systemd-resolved[1845]: Positive Trust Anchors: Jan 13 20:07:44.307309 systemd-resolved[1845]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:07:44.307370 systemd-resolved[1845]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:07:44.313336 systemd-networkd[1842]: eth0: Link UP Jan 13 20:07:44.315028 systemd-networkd[1842]: eth0: Gained carrier Jan 13 20:07:44.315066 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:07:44.315134 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:07:44.323928 systemd-resolved[1845]: Defaulting to hostname 'linux'. Jan 13 20:07:44.333010 systemd-networkd[1842]: eth0: DHCPv4 address 172.31.28.169/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:07:44.333182 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:07:44.335500 systemd[1]: Reached target network.target - Network. Jan 13 20:07:44.337673 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:07:44.341962 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:07:44.344183 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:07:44.346738 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:07:44.349724 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:07:44.355388 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:07:44.358009 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:07:44.360570 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:07:44.360623 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:07:44.362641 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:07:44.366111 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:07:44.370680 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:07:44.379109 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:07:44.382237 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:07:44.384632 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:07:44.386782 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:07:44.388612 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:07:44.388658 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:07:44.408170 systemd[1]: Starting containerd.service - containerd container runtime... 
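eth0 is matched by the stock catch-all /usr/lib/systemd/network/zz-default.network named in the log, which is also why systemd-networkd warns about the "potentially unpredictable interface name": the match is a wildcard, not a pinned name. In outline (a sketch of that kind of unit, not the verbatim shipped file):

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes

With that in place, eth0 acquires 172.31.28.169/20 from the subnet's DHCP server at 172.31.16.1, as logged just above.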
Jan 13 20:07:44.413341 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:07:44.419170 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:07:44.428294 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:07:44.433354 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:07:44.435341 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:07:44.446253 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:07:44.454168 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:07:44.463868 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:07:44.470301 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:07:44.479200 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:07:44.489195 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:07:44.500173 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:07:44.503066 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:07:44.512860 jq[1906]: false Jan 13 20:07:44.504738 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:07:44.509130 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:07:44.516104 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:07:44.525634 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:07:44.526018 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:07:44.558521 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:07:44.557060 dbus-daemon[1905]: [system] SELinux support is enabled Jan 13 20:07:44.567154 dbus-daemon[1905]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1842 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:07:44.574242 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:07:44.576116 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:07:44.578392 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:07:44.578434 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:07:44.582105 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:07:44.582143 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:07:44.586597 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
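prepare-helm.service, starting here, is the unit Ignition wrote during the files stage; per its description, it unpacks the helm tarball fetched earlier into /opt/bin. The unit body is never echoed into the journal, so the following is an illustrative reconstruction consistent with that description (paths from the log, everything else assumed):

    # /etc/systemd/system/prepare-helm.service (illustrative reconstruction)
    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=!/opt/bin/helm

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/usr/bin/tar --extract --gzip --file /opt/helm-v3.13.2-linux-arm64.tar.gz --directory /opt/bin --strip-components=1 linux-arm64/helm

    [Install]
    WantedBy=multi-user.target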
Jan 13 20:07:44.586968 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:07:44.605995 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:07:44.616933 jq[1918]: true Jan 13 20:07:44.663496 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:07:44.663872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:07:44.714623 update_engine[1917]: I20250113 20:07:44.712018 1917 main.cc:92] Flatcar Update Engine starting Jan 13 20:07:44.718696 extend-filesystems[1907]: Found loop4 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found loop5 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found loop6 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found loop7 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found nvme0n1 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found nvme0n1p1 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found nvme0n1p2 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found nvme0n1p3 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found usr Jan 13 20:07:44.728646 extend-filesystems[1907]: Found nvme0n1p4 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found nvme0n1p6 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found nvme0n1p7 Jan 13 20:07:44.728646 extend-filesystems[1907]: Found nvme0n1p9 Jan 13 20:07:44.803067 tar[1931]: linux-arm64/helm Jan 13 20:07:44.803510 coreos-metadata[1904]: Jan 13 20:07:44.732 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:07:44.803510 coreos-metadata[1904]: Jan 13 20:07:44.753 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:07:44.803510 coreos-metadata[1904]: Jan 13 20:07:44.782 INFO Fetch successful Jan 13 20:07:44.803510 coreos-metadata[1904]: Jan 13 20:07:44.782 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:07:44.805597 extend-filesystems[1907]: Checking size of /dev/nvme0n1p9 Jan 13 20:07:44.809200 update_engine[1917]: I20250113 20:07:44.730832 1917 update_check_scheduler.cc:74] Next update check in 7m53s Jan 13 20:07:44.730577 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:07:44.812794 jq[1940]: true Jan 13 20:07:44.813348 coreos-metadata[1904]: Jan 13 20:07:44.805 INFO Fetch successful Jan 13 20:07:44.813348 coreos-metadata[1904]: Jan 13 20:07:44.805 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:07:44.813348 coreos-metadata[1904]: Jan 13 20:07:44.810 INFO Fetch successful Jan 13 20:07:44.813348 coreos-metadata[1904]: Jan 13 20:07:44.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:07:44.813348 coreos-metadata[1904]: Jan 13 20:07:44.811 INFO Fetch successful Jan 13 20:07:44.813348 coreos-metadata[1904]: Jan 13 20:07:44.811 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:07:44.813348 coreos-metadata[1904]: Jan 13 20:07:44.811 INFO Fetch failed with 404: resource not found Jan 13 20:07:44.813348 coreos-metadata[1904]: Jan 13 20:07:44.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:07:44.752632 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
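update_engine and locksmithd coordinate Flatcar's A/B updates: update_engine polls for and applies payloads on its randomized schedule ("Next update check in 7m53s" below), while locksmithd decides when rebooting into the new image is allowed. Both consult /etc/flatcar/update.conf, the file Ignition wrote during the files stage; its contents are not shown in the log, but the file typically looks like this (illustrative, REBOOT_STRATEGY=reboot being consistent with the strategy locksmithd reports further down):

    # /etc/flatcar/update.conf (illustrative)
    GROUP=stable
    REBOOT_STRATEGY=reboot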
Jan 13 20:07:44.820305 coreos-metadata[1904]: Jan 13 20:07:44.819 INFO Fetch successful Jan 13 20:07:44.820305 coreos-metadata[1904]: Jan 13 20:07:44.819 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:07:44.820108 (ntainerd)[1942]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:07:44.821985 ntpd[1909]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:48 UTC 2025 (1): Starting Jan 13 20:07:44.844716 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:48 UTC 2025 (1): Starting Jan 13 20:07:44.844716 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:07:44.844716 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: ---------------------------------------------------- Jan 13 20:07:44.844716 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:07:44.844716 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:07:44.844716 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: corporation. Support and training for ntp-4 are Jan 13 20:07:44.844716 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: available at https://www.nwtime.org/support Jan 13 20:07:44.844716 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: ---------------------------------------------------- Jan 13 20:07:44.844716 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: proto: precision = 0.096 usec (-23) Jan 13 20:07:44.845424 coreos-metadata[1904]: Jan 13 20:07:44.831 INFO Fetch successful Jan 13 20:07:44.845424 coreos-metadata[1904]: Jan 13 20:07:44.831 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:07:44.845424 coreos-metadata[1904]: Jan 13 20:07:44.836 INFO Fetch successful Jan 13 20:07:44.845424 coreos-metadata[1904]: Jan 13 20:07:44.837 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:07:44.845424 coreos-metadata[1904]: Jan 13 20:07:44.841 INFO Fetch successful Jan 13 20:07:44.845424 coreos-metadata[1904]: Jan 13 20:07:44.841 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:07:44.822046 ntpd[1909]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:07:44.822066 ntpd[1909]: ---------------------------------------------------- Jan 13 20:07:44.822084 ntpd[1909]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:07:44.822102 ntpd[1909]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:07:44.822119 ntpd[1909]: corporation. 
Support and training for ntp-4 are Jan 13 20:07:44.822136 ntpd[1909]: available at https://www.nwtime.org/support Jan 13 20:07:44.822153 ntpd[1909]: ---------------------------------------------------- Jan 13 20:07:44.843212 ntpd[1909]: proto: precision = 0.096 usec (-23) Jan 13 20:07:44.858614 coreos-metadata[1904]: Jan 13 20:07:44.848 INFO Fetch successful Jan 13 20:07:44.858742 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: basedate set to 2025-01-01 Jan 13 20:07:44.858742 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: gps base set to 2025-01-05 (week 2348) Jan 13 20:07:44.851185 ntpd[1909]: basedate set to 2025-01-01 Jan 13 20:07:44.851229 ntpd[1909]: gps base set to 2025-01-05 (week 2348) Jan 13 20:07:44.862753 ntpd[1909]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:07:44.865009 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:07:44.865009 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:07:44.865009 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:07:44.865009 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: Listen normally on 3 eth0 172.31.28.169:123 Jan 13 20:07:44.865009 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: Listen normally on 4 lo [::1]:123 Jan 13 20:07:44.865009 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: bind(21) AF_INET6 fe80::465:78ff:fe3b:e845%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:07:44.865009 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: unable to create socket on eth0 (5) for fe80::465:78ff:fe3b:e845%2#123 Jan 13 20:07:44.865009 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: failed to init interface for address fe80::465:78ff:fe3b:e845%2 Jan 13 20:07:44.865009 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: Listening on routing socket on fd #21 for interface updates Jan 13 20:07:44.862871 ntpd[1909]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:07:44.863127 ntpd[1909]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:07:44.863187 ntpd[1909]: Listen normally on 3 eth0 172.31.28.169:123 Jan 13 20:07:44.863251 ntpd[1909]: Listen normally on 4 lo [::1]:123 Jan 13 20:07:44.863320 ntpd[1909]: bind(21) AF_INET6 fe80::465:78ff:fe3b:e845%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:07:44.863356 ntpd[1909]: unable to create socket on eth0 (5) for fe80::465:78ff:fe3b:e845%2#123 Jan 13 20:07:44.863384 ntpd[1909]: failed to init interface for address fe80::465:78ff:fe3b:e845%2 Jan 13 20:07:44.863431 ntpd[1909]: Listening on routing socket on fd #21 for interface updates Jan 13 20:07:44.880756 extend-filesystems[1907]: Resized partition /dev/nvme0n1p9 Jan 13 20:07:44.892087 extend-filesystems[1961]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:07:44.913887 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:07:44.914749 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:07:44.920288 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:44.922167 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:44.922167 ntpd[1909]: 13 Jan 20:07:44 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:44.920343 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:45.012965 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:07:45.015537 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
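extend-filesystems is growing the root filesystem online to fill its already-enlarged partition: 553472 → 1489915 blocks at 4k is roughly 2.1 GiB → 5.7 GiB. The manual equivalent of the step it automates (device name from the log):

    # online-grow the mounted ext4 root to fill nvme0n1p9
    resize2fs /dev/nvme0n1p9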
Jan 13 20:07:45.053224 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:07:45.067794 extend-filesystems[1961]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:07:45.067794 extend-filesystems[1961]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:07:45.067794 extend-filesystems[1961]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:07:45.092733 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1688) Jan 13 20:07:45.074795 systemd-logind[1914]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:07:45.095190 extend-filesystems[1907]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:07:45.075555 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:07:45.076017 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:07:45.077945 systemd-logind[1914]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 20:07:45.083924 systemd-logind[1914]: New seat seat0. Jan 13 20:07:45.099450 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:07:45.124405 bash[1987]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:07:45.127304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:07:45.180782 systemd[1]: Starting sshkeys.service... Jan 13 20:07:45.236120 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:07:45.290849 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:07:45.356268 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:07:45.356588 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:07:45.361265 dbus-daemon[1905]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1929 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:07:45.375391 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:07:45.523738 polkitd[2048]: Started polkitd version 121 Jan 13 20:07:45.533193 locksmithd[1948]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:07:45.555366 polkitd[2048]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:07:45.555484 polkitd[2048]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:07:45.567977 coreos-metadata[2027]: Jan 13 20:07:45.567 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:07:45.572013 coreos-metadata[2027]: Jan 13 20:07:45.571 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:07:45.573296 coreos-metadata[2027]: Jan 13 20:07:45.573 INFO Fetch successful Jan 13 20:07:45.573296 coreos-metadata[2027]: Jan 13 20:07:45.573 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:07:45.575574 coreos-metadata[2027]: Jan 13 20:07:45.574 INFO Fetch successful Jan 13 20:07:45.580492 polkitd[2048]: Finished loading, compiling and executing 2 rules Jan 13 20:07:45.582566 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:07:45.582963 systemd[1]: Started polkit.service - Authorization Manager. 
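The coreos-metadata sshkeys run above uses the session-token (IMDSv2-style) flow: PUT to mint a token, then GET the key material with it. The same exchange done by hand against the endpoints from the log (header names are the standard EC2 ones; the TTL is an arbitrary choice):

    # mirror the logged metadata exchange (sketch)
    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key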
Jan 13 20:07:45.585923 unknown[2027]: wrote ssh authorized keys file for user: core Jan 13 20:07:45.586947 polkitd[2048]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:07:45.632930 update-ssh-keys[2086]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:07:45.637786 containerd[1942]: time="2025-01-13T20:07:45.637632767Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:07:45.637935 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:07:45.652087 systemd[1]: Finished sshkeys.service. Jan 13 20:07:45.700080 systemd-hostnamed[1929]: Hostname set to <ip-172-31-28-169> (transient) Jan 13 20:07:45.701887 systemd-resolved[1845]: System hostname changed to 'ip-172-31-28-169'. Jan 13 20:07:45.823084 ntpd[1909]: bind(24) AF_INET6 fe80::465:78ff:fe3b:e845%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:07:45.823151 ntpd[1909]: unable to create socket on eth0 (6) for fe80::465:78ff:fe3b:e845%2#123 Jan 13 20:07:45.823179 ntpd[1909]: failed to init interface for address fe80::465:78ff:fe3b:e845%2 Jan 13 20:07:45.831128 containerd[1942]: time="2025-01-13T20:07:45.830769120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:45.833574 containerd[1942]: time="2025-01-13T20:07:45.833512068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:45.833723 containerd[1942]: time="2025-01-13T20:07:45.833694660Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:07:45.833849 containerd[1942]: time="2025-01-13T20:07:45.833802456Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:07:45.834843 containerd[1942]: time="2025-01-13T20:07:45.834181020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:07:45.834843 containerd[1942]: time="2025-01-13T20:07:45.834221508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:45.834843 containerd[1942]: time="2025-01-13T20:07:45.834340800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:45.834843 containerd[1942]: time="2025-01-13T20:07:45.834368184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:45.834843 containerd[1942]: time="2025-01-13T20:07:45.834647796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..."
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:45.834843 containerd[1942]: time="2025-01-13T20:07:45.834678780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:45.834843 containerd[1942]: time="2025-01-13T20:07:45.834710076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:45.834843 containerd[1942]: time="2025-01-13T20:07:45.834733788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:45.835378 containerd[1942]: time="2025-01-13T20:07:45.835346448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:45.835926 containerd[1942]: time="2025-01-13T20:07:45.835892064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:07:45.836240 containerd[1942]: time="2025-01-13T20:07:45.836208156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:07:45.836649 containerd[1942]: time="2025-01-13T20:07:45.836313900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:07:45.836649 containerd[1942]: time="2025-01-13T20:07:45.836497008Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:07:45.836649 containerd[1942]: time="2025-01-13T20:07:45.836593968Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:07:45.845655 containerd[1942]: time="2025-01-13T20:07:45.845602440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.846718428Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.846894648Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.846950340Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.846996984Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.847261212Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.847652400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.847848204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.847881960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.847914000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.847945956Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.847998828Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.848029656Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.848060196Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:07:45.848833 containerd[1942]: time="2025-01-13T20:07:45.848093076Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848124000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848154108Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848186136Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848227056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848257428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848286828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848316672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848344788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848375388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848403108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848432676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848462832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848494416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.849463 containerd[1942]: time="2025-01-13T20:07:45.848522508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.850024 containerd[1942]: time="2025-01-13T20:07:45.848551164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.850024 containerd[1942]: time="2025-01-13T20:07:45.848579088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.850024 containerd[1942]: time="2025-01-13T20:07:45.848617740Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:07:45.850024 containerd[1942]: time="2025-01-13T20:07:45.848660736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.850024 containerd[1942]: time="2025-01-13T20:07:45.848691516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.850024 containerd[1942]: time="2025-01-13T20:07:45.848720232Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:07:45.853928 containerd[1942]: time="2025-01-13T20:07:45.851962536Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:07:45.853928 containerd[1942]: time="2025-01-13T20:07:45.853527720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:07:45.853928 containerd[1942]: time="2025-01-13T20:07:45.853564740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:07:45.853928 containerd[1942]: time="2025-01-13T20:07:45.853620336Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:07:45.853928 containerd[1942]: time="2025-01-13T20:07:45.853650996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:07:45.853928 containerd[1942]: time="2025-01-13T20:07:45.853713384Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:07:45.853928 containerd[1942]: time="2025-01-13T20:07:45.853741956Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:07:45.853928 containerd[1942]: time="2025-01-13T20:07:45.853788660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:07:45.856945 containerd[1942]: time="2025-01-13T20:07:45.856017768Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:07:45.856945 containerd[1942]: time="2025-01-13T20:07:45.856134888Z" level=info msg="Connect containerd service" Jan 13 20:07:45.856945 containerd[1942]: time="2025-01-13T20:07:45.856690080Z" level=info msg="using legacy CRI server" Jan 13 20:07:45.856945 containerd[1942]: time="2025-01-13T20:07:45.856732860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:07:45.859012 containerd[1942]: time="2025-01-13T20:07:45.857898036Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:07:45.861767 containerd[1942]: time="2025-01-13T20:07:45.861718332Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:07:45.863479 
containerd[1942]: time="2025-01-13T20:07:45.862122756Z" level=info msg="Start subscribing containerd event" Jan 13 20:07:45.863479 containerd[1942]: time="2025-01-13T20:07:45.862208412Z" level=info msg="Start recovering state" Jan 13 20:07:45.863479 containerd[1942]: time="2025-01-13T20:07:45.862318824Z" level=info msg="Start event monitor" Jan 13 20:07:45.863479 containerd[1942]: time="2025-01-13T20:07:45.862340868Z" level=info msg="Start snapshots syncer" Jan 13 20:07:45.863479 containerd[1942]: time="2025-01-13T20:07:45.862362024Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:07:45.863479 containerd[1942]: time="2025-01-13T20:07:45.862381032Z" level=info msg="Start streaming server" Jan 13 20:07:45.865461 containerd[1942]: time="2025-01-13T20:07:45.865402608Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:07:45.866243 containerd[1942]: time="2025-01-13T20:07:45.866189832Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:07:45.867124 containerd[1942]: time="2025-01-13T20:07:45.866524764Z" level=info msg="containerd successfully booted in 0.239745s" Jan 13 20:07:45.866639 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:07:45.886587 sshd_keygen[1947]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:07:45.936404 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:07:45.946127 systemd-networkd[1842]: eth0: Gained IPv6LL Jan 13 20:07:45.948327 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:07:45.964453 systemd[1]: Started sshd@0-172.31.28.169:22-147.75.109.163:43012.service - OpenSSH per-connection server daemon (147.75.109.163:43012). Jan 13 20:07:45.968519 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:07:45.973692 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:07:45.991317 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 20:07:46.004222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:46.019742 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:07:46.023386 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:07:46.024694 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:07:46.041702 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:07:46.112922 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:07:46.135590 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:07:46.150404 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:07:46.153664 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:07:46.158031 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:07:46.179864 amazon-ssm-agent[2122]: Initializing new seelog logger Jan 13 20:07:46.180552 amazon-ssm-agent[2122]: New Seelog Logger Creation Complete Jan 13 20:07:46.180753 amazon-ssm-agent[2122]: 2025/01/13 20:07:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:46.181725 amazon-ssm-agent[2122]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
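The only error containerd logged during startup, a few entries above, is "no network config found in /etc/cni/net.d: cni plugin not initialized". That is expected this early in boot: per the CRI config dump, NetworkPluginConfDir is /etc/cni/net.d, and nothing has installed a CNI config there yet; the "cni network conf syncer" started above picks it up once one appears. A sketch of the check (the matched extensions are an assumption modelled on common CNI config loaders):

```python
import glob
import os

# NetworkPluginConfDir from the CRI config dump above.
conf_dir = "/etc/cni/net.d"
found = sorted(
    path
    for pattern in ("*.conf", "*.conflist", "*.json")
    for path in glob.glob(os.path.join(conf_dir, pattern))
)
print(found or f"no network config found in {conf_dir}")
```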
Jan 13 20:07:46.181725 amazon-ssm-agent[2122]: 2025/01/13 20:07:46 processing appconfig overrides Jan 13 20:07:46.182133 amazon-ssm-agent[2122]: 2025/01/13 20:07:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:46.182239 amazon-ssm-agent[2122]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:46.182417 amazon-ssm-agent[2122]: 2025/01/13 20:07:46 processing appconfig overrides Jan 13 20:07:46.183412 amazon-ssm-agent[2122]: 2025/01/13 20:07:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:46.183521 amazon-ssm-agent[2122]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:46.183713 amazon-ssm-agent[2122]: 2025/01/13 20:07:46 processing appconfig overrides Jan 13 20:07:46.184608 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO Proxy environment variables: Jan 13 20:07:46.188098 amazon-ssm-agent[2122]: 2025/01/13 20:07:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:46.188098 amazon-ssm-agent[2122]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:07:46.188098 amazon-ssm-agent[2122]: 2025/01/13 20:07:46 processing appconfig overrides Jan 13 20:07:46.289141 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO https_proxy: Jan 13 20:07:46.361861 sshd[2120]: Accepted publickey for core from 147.75.109.163 port 43012 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:07:46.364439 sshd-session[2120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:46.388851 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:07:46.391026 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO http_proxy: Jan 13 20:07:46.400454 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:07:46.413123 systemd-logind[1914]: New session 1 of user core. Jan 13 20:07:46.450463 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:07:46.468408 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:07:46.489584 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO no_proxy: Jan 13 20:07:46.491352 (systemd)[2148]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:07:46.560556 tar[1931]: linux-arm64/LICENSE Jan 13 20:07:46.560556 tar[1931]: linux-arm64/README.md Jan 13 20:07:46.588998 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:07:46.607984 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:07:46.688006 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:07:46.773335 systemd[2148]: Queued start job for default target default.target. Jan 13 20:07:46.781223 systemd[2148]: Created slice app.slice - User Application Slice. Jan 13 20:07:46.781282 systemd[2148]: Reached target paths.target - Paths. Jan 13 20:07:46.781315 systemd[2148]: Reached target timers.target - Timers. Jan 13 20:07:46.785492 systemd[2148]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:07:46.787238 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO Agent will take identity from EC2 Jan 13 20:07:46.825512 systemd[2148]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:07:46.825766 systemd[2148]: Reached target sockets.target - Sockets. 
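Both coreos-metadata earlier ("Putting http://169.254.169.254/latest/api/token: Attempt #1") and the SSM agent's EC2 identity probe above go through the instance metadata service. The PUT is the IMDSv2 handshake: fetch a short-lived session token first, then present it on every metadata read. A minimal stdlib-only sketch of that flow, using the same public-keys path the log shows:

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT for a session token (the "Putting .../latest/api/token"
# entry earlier in this log).
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(req, timeout=2).read().decode()

# Step 2: GET metadata with the token, e.g. the SSH key that ended up
# in /home/core/.ssh/authorized_keys.
req = urllib.request.Request(
    f"{IMDS}/2021-01-03/meta-data/public-keys/0/openssh-key",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req, timeout=2).read().decode())
```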
Jan 13 20:07:46.825799 systemd[2148]: Reached target basic.target - Basic System. Jan 13 20:07:46.826941 systemd[2148]: Reached target default.target - Main User Target. Jan 13 20:07:46.827016 systemd[2148]: Startup finished in 320ms. Jan 13 20:07:46.827887 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:07:46.839118 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:07:46.885900 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:07:46.985842 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:07:47.008972 systemd[1]: Started sshd@1-172.31.28.169:22-147.75.109.163:43028.service - OpenSSH per-connection server daemon (147.75.109.163:43028). Jan 13 20:07:47.084696 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:07:47.092556 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:07:47.092556 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 20:07:47.092726 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:07:47.092726 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:07:47.092726 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO [Registrar] Starting registrar module Jan 13 20:07:47.092726 amazon-ssm-agent[2122]: 2025-01-13 20:07:46 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:07:47.092726 amazon-ssm-agent[2122]: 2025-01-13 20:07:47 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:07:47.092726 amazon-ssm-agent[2122]: 2025-01-13 20:07:47 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:07:47.092726 amazon-ssm-agent[2122]: 2025-01-13 20:07:47 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:07:47.092726 amazon-ssm-agent[2122]: 2025-01-13 20:07:47 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:07:47.184203 amazon-ssm-agent[2122]: 2025-01-13 20:07:47 INFO [CredentialRefresher] Next credential rotation will be in 31.6499906598 minutes Jan 13 20:07:47.225168 sshd[2165]: Accepted publickey for core from 147.75.109.163 port 43028 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:07:47.228053 sshd-session[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:47.236529 systemd-logind[1914]: New session 2 of user core. Jan 13 20:07:47.244091 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:07:47.375878 sshd[2167]: Connection closed by 147.75.109.163 port 43028 Jan 13 20:07:47.376695 sshd-session[2165]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:47.383361 systemd[1]: sshd@1-172.31.28.169:22-147.75.109.163:43028.service: Deactivated successfully. Jan 13 20:07:47.387055 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:07:47.392148 systemd-logind[1914]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:07:47.394133 systemd-logind[1914]: Removed session 2. Jan 13 20:07:47.413451 systemd[1]: Started sshd@2-172.31.28.169:22-147.75.109.163:57012.service - OpenSSH per-connection server daemon (147.75.109.163:57012). 
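Each "Accepted publickey ... RSA SHA256:IRHk..." entry above identifies the client key by its OpenSSH fingerprint: the unpadded base64 of a SHA-256 digest over the wire-format public key blob, the same value ssh-keygen -lf prints. A sketch of the derivation (the key material below is a placeholder, not the key in this log):

```python
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    """SHA256:<unpadded-base64> over the wire-format public key blob."""
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Placeholder (truncated) key material; not the key from this log.
print(ssh_fingerprint("ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7 demo@host"))
```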
Jan 13 20:07:47.609116 sshd[2172]: Accepted publickey for core from 147.75.109.163 port 57012 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:07:47.611318 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:47.621281 systemd-logind[1914]: New session 3 of user core. Jan 13 20:07:47.627151 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:07:47.696410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:47.699707 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:07:47.702379 systemd[1]: Startup finished in 1.077s (kernel) + 8.596s (initrd) + 8.369s (userspace) = 18.043s. Jan 13 20:07:47.709680 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:47.733634 agetty[2139]: failed to open credentials directory Jan 13 20:07:47.735704 agetty[2142]: failed to open credentials directory Jan 13 20:07:47.758329 sshd[2174]: Connection closed by 147.75.109.163 port 57012 Jan 13 20:07:47.759109 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:47.763634 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:07:47.766793 systemd[1]: sshd@2-172.31.28.169:22-147.75.109.163:57012.service: Deactivated successfully. Jan 13 20:07:47.773517 systemd-logind[1914]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:07:47.776144 systemd-logind[1914]: Removed session 3. Jan 13 20:07:48.120333 amazon-ssm-agent[2122]: 2025-01-13 20:07:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:07:48.221348 amazon-ssm-agent[2122]: 2025-01-13 20:07:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2193) started Jan 13 20:07:48.327123 amazon-ssm-agent[2122]: 2025-01-13 20:07:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:07:48.594356 kubelet[2180]: E0113 20:07:48.594190 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:48.598093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:48.598417 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:07:48.598955 systemd[1]: kubelet.service: Consumed 1.259s CPU time. Jan 13 20:07:48.822692 ntpd[1909]: Listen normally on 7 eth0 [fe80::465:78ff:fe3b:e845%2]:123 Jan 13 20:07:57.790101 systemd[1]: Started sshd@3-172.31.28.169:22-147.75.109.163:52546.service - OpenSSH per-connection server daemon (147.75.109.163:52546). Jan 13 20:07:57.983677 sshd[2206]: Accepted publickey for core from 147.75.109.163 port 52546 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:07:57.986079 sshd-session[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:57.993234 systemd-logind[1914]: New session 4 of user core.
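The kubelet failure above is the first of several in this log: the unit starts before /var/lib/kubelet/config.yaml exists, exits with status 1, and systemd keeps scheduling restarts (the rising "restart counter" lines later). The file normally appears once something like 'kubeadm init' or 'kubeadm join' runs on the node. A sketch of the precondition the kubelet is enforcing:

```python
import os
import sys

CONFIG = "/var/lib/kubelet/config.yaml"

# Mirrors the failure mode in the log: kubelet is launched with
# --config pointing at a file nothing has written yet.
if not os.path.exists(CONFIG):
    sys.exit(f"failed to load kubelet config file, path: {CONFIG}")
print("kubelet config present")
```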
Jan 13 20:07:58.002068 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:07:58.127138 sshd[2208]: Connection closed by 147.75.109.163 port 52546 Jan 13 20:07:58.128354 sshd-session[2206]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:58.134969 systemd[1]: sshd@3-172.31.28.169:22-147.75.109.163:52546.service: Deactivated successfully. Jan 13 20:07:58.138703 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:07:58.140433 systemd-logind[1914]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:07:58.142154 systemd-logind[1914]: Removed session 4. Jan 13 20:07:58.168336 systemd[1]: Started sshd@4-172.31.28.169:22-147.75.109.163:52560.service - OpenSSH per-connection server daemon (147.75.109.163:52560). Jan 13 20:07:58.358227 sshd[2213]: Accepted publickey for core from 147.75.109.163 port 52560 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:07:58.360591 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:58.368788 systemd-logind[1914]: New session 5 of user core. Jan 13 20:07:58.375087 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:07:58.492436 sshd[2215]: Connection closed by 147.75.109.163 port 52560 Jan 13 20:07:58.493269 sshd-session[2213]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:58.499453 systemd[1]: sshd@4-172.31.28.169:22-147.75.109.163:52560.service: Deactivated successfully. Jan 13 20:07:58.502356 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:07:58.503500 systemd-logind[1914]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:07:58.505212 systemd-logind[1914]: Removed session 5. Jan 13 20:07:58.533338 systemd[1]: Started sshd@5-172.31.28.169:22-147.75.109.163:52562.service - OpenSSH per-connection server daemon (147.75.109.163:52562). Jan 13 20:07:58.687479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:07:58.697200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:58.722864 sshd[2220]: Accepted publickey for core from 147.75.109.163 port 52562 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:07:58.724145 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:58.734361 systemd-logind[1914]: New session 6 of user core. Jan 13 20:07:58.743115 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:07:58.873738 sshd[2225]: Connection closed by 147.75.109.163 port 52562 Jan 13 20:07:58.876561 sshd-session[2220]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:58.883191 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:07:58.885412 systemd-logind[1914]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:07:58.886874 systemd[1]: sshd@5-172.31.28.169:22-147.75.109.163:52562.service: Deactivated successfully. Jan 13 20:07:58.892180 systemd-logind[1914]: Removed session 6. Jan 13 20:07:58.917519 systemd[1]: Started sshd@6-172.31.28.169:22-147.75.109.163:52576.service - OpenSSH per-connection server daemon (147.75.109.163:52576). Jan 13 20:07:58.995068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:07:58.996982 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:07:59.086977 kubelet[2237]: E0113 20:07:59.086886 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:07:59.093938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:07:59.094422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:07:59.107151 sshd[2230]: Accepted publickey for core from 147.75.109.163 port 52576 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:07:59.109585 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:59.118205 systemd-logind[1914]: New session 7 of user core. Jan 13 20:07:59.128055 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:07:59.265979 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:07:59.266616 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:59.282430 sudo[2245]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:59.305181 sshd[2244]: Connection closed by 147.75.109.163 port 52576 Jan 13 20:07:59.306244 sshd-session[2230]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:59.311570 systemd-logind[1914]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:07:59.313094 systemd[1]: sshd@6-172.31.28.169:22-147.75.109.163:52576.service: Deactivated successfully. Jan 13 20:07:59.316118 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:07:59.319719 systemd-logind[1914]: Removed session 7. Jan 13 20:07:59.340603 systemd[1]: Started sshd@7-172.31.28.169:22-147.75.109.163:52588.service - OpenSSH per-connection server daemon (147.75.109.163:52588). Jan 13 20:07:59.528296 sshd[2250]: Accepted publickey for core from 147.75.109.163 port 52588 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:07:59.530946 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:07:59.540118 systemd-logind[1914]: New session 8 of user core. Jan 13 20:07:59.550101 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:07:59.652717 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:07:59.653393 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:59.659460 sudo[2254]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:59.669097 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:07:59.669696 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:07:59.695386 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:07:59.743039 augenrules[2276]: No rules Jan 13 20:07:59.745223 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:07:59.746918 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 13 20:07:59.749063 sudo[2253]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:59.772846 sshd[2252]: Connection closed by 147.75.109.163 port 52588 Jan 13 20:07:59.773620 sshd-session[2250]: pam_unix(sshd:session): session closed for user core Jan 13 20:07:59.778864 systemd[1]: sshd@7-172.31.28.169:22-147.75.109.163:52588.service: Deactivated successfully. Jan 13 20:07:59.782605 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:07:59.785239 systemd-logind[1914]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:07:59.787359 systemd-logind[1914]: Removed session 8. Jan 13 20:07:59.814281 systemd[1]: Started sshd@8-172.31.28.169:22-147.75.109.163:52590.service - OpenSSH per-connection server daemon (147.75.109.163:52590). Jan 13 20:07:59.991209 sshd[2284]: Accepted publickey for core from 147.75.109.163 port 52590 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:07:59.993563 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:00.002152 systemd-logind[1914]: New session 9 of user core. Jan 13 20:08:00.014064 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:08:00.115116 sudo[2287]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:08:00.115745 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:00.824305 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:08:00.824971 (dockerd)[2304]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:08:01.270541 dockerd[2304]: time="2025-01-13T20:08:01.270466769Z" level=info msg="Starting up" Jan 13 20:08:01.449218 systemd[1]: var-lib-docker-metacopy\x2dcheck710309421-merged.mount: Deactivated successfully. Jan 13 20:08:01.463081 dockerd[2304]: time="2025-01-13T20:08:01.463003405Z" level=info msg="Loading containers: start." Jan 13 20:08:01.703966 kernel: Initializing XFRM netlink socket Jan 13 20:08:01.736687 (udev-worker)[2326]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:01.824906 systemd-networkd[1842]: docker0: Link UP Jan 13 20:08:01.867142 dockerd[2304]: time="2025-01-13T20:08:01.867073890Z" level=info msg="Loading containers: done." Jan 13 20:08:01.893585 dockerd[2304]: time="2025-01-13T20:08:01.893508726Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:08:01.893802 dockerd[2304]: time="2025-01-13T20:08:01.893653410Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:08:01.893936 dockerd[2304]: time="2025-01-13T20:08:01.893901134Z" level=info msg="Daemon has completed initialization" Jan 13 20:08:01.948240 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:08:01.949624 dockerd[2304]: time="2025-01-13T20:08:01.948526080Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:08:03.038224 containerd[1942]: time="2025-01-13T20:08:03.038153049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 20:08:03.753442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525025402.mount: Deactivated successfully. 
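dockerd's overlay2 warning further down reflects a known trade-off: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, overlayfs can create redirect entries that the native diff driver cannot follow, so Docker falls back to a slower layer comparison when building images. A hedged sketch of checking that kernel option on a running system (config file locations vary by distro):

```python
import gzip
import os

OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

def kernel_config():
    # /proc/config.gz needs CONFIG_IKCONFIG_PROC; fall back to the
    # conventional /boot location many distros use.
    if os.path.exists("/proc/config.gz"):
        with gzip.open("/proc/config.gz", "rt") as fh:
            yield from fh
        return
    path = f"/boot/config-{os.uname().release}"
    if os.path.exists(path):
        with open(path) as fh:
            yield from fh

for line in kernel_config():
    if line.startswith(OPTION + "="):
        print(line.strip())  # e.g. CONFIG_OVERLAY_FS_REDIRECT_DIR=y
        break
else:
    print(f"{OPTION}: not found or kernel config not exposed")
```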
Jan 13 20:08:05.011250 containerd[1942]: time="2025-01-13T20:08:05.011170193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:05.012855 containerd[1942]: time="2025-01-13T20:08:05.012704022Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615585" Jan 13 20:08:05.014446 containerd[1942]: time="2025-01-13T20:08:05.014370756Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:05.019956 containerd[1942]: time="2025-01-13T20:08:05.019855792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:05.022231 containerd[1942]: time="2025-01-13T20:08:05.022174580Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 1.983955431s" Jan 13 20:08:05.022652 containerd[1942]: time="2025-01-13T20:08:05.022235641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Jan 13 20:08:05.023145 containerd[1942]: time="2025-01-13T20:08:05.023101236Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 20:08:06.329564 containerd[1942]: time="2025-01-13T20:08:06.329485969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:06.331633 containerd[1942]: time="2025-01-13T20:08:06.331518017Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470096" Jan 13 20:08:06.332388 containerd[1942]: time="2025-01-13T20:08:06.332307821Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:06.342739 containerd[1942]: time="2025-01-13T20:08:06.342634176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:06.346033 containerd[1942]: time="2025-01-13T20:08:06.344991129Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.32183087s" Jan 13 20:08:06.346033 containerd[1942]: time="2025-01-13T20:08:06.345049839Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Jan 13 20:08:06.346033 
containerd[1942]: time="2025-01-13T20:08:06.345693605Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 20:08:07.484571 containerd[1942]: time="2025-01-13T20:08:07.484509468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:07.486557 containerd[1942]: time="2025-01-13T20:08:07.486458133Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024202" Jan 13 20:08:07.487303 containerd[1942]: time="2025-01-13T20:08:07.487224310Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:07.492772 containerd[1942]: time="2025-01-13T20:08:07.492694845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:07.495754 containerd[1942]: time="2025-01-13T20:08:07.495095144Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.149355314s" Jan 13 20:08:07.495754 containerd[1942]: time="2025-01-13T20:08:07.495151384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Jan 13 20:08:07.496223 containerd[1942]: time="2025-01-13T20:08:07.496182796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:08:08.693928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555458069.mount: Deactivated successfully. Jan 13 20:08:09.200465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:08:09.208647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
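The pull records above pair "bytes read" with a wall-clock duration, so an effective registry throughput falls out directly; note the durations include unpacking, so these are lower bounds on network speed. The arithmetic:

```python
# (bytes read, seconds) pairs copied from the pull records above.
pulls = {
    "kube-apiserver:v1.31.4": (25_615_585, 1.983955431),
    "kube-controller-manager:v1.31.4": (22_470_096, 1.32183087),
    "kube-scheduler:v1.31.4": (17_024_202, 1.149355314),
}

for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 2**20:.1f} MiB/s effective")
```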
Jan 13 20:08:09.266647 containerd[1942]: time="2025-01-13T20:08:09.265180176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:09.268256 containerd[1942]: time="2025-01-13T20:08:09.268188380Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771426" Jan 13 20:08:09.273622 containerd[1942]: time="2025-01-13T20:08:09.272286130Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:09.278343 containerd[1942]: time="2025-01-13T20:08:09.278278140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:09.282770 containerd[1942]: time="2025-01-13T20:08:09.282705102Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.786359151s" Jan 13 20:08:09.283029 containerd[1942]: time="2025-01-13T20:08:09.282990751Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Jan 13 20:08:09.284251 containerd[1942]: time="2025-01-13T20:08:09.284202840Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:08:09.513145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:09.514958 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:08:09.582685 kubelet[2572]: E0113 20:08:09.582527 2572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:08:09.586670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:08:09.586994 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:08:09.834279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2496437856.mount: Deactivated successfully. 
Jan 13 20:08:10.943785 containerd[1942]: time="2025-01-13T20:08:10.943725381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:10.946414 containerd[1942]: time="2025-01-13T20:08:10.946351359Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 13 20:08:10.948336 containerd[1942]: time="2025-01-13T20:08:10.948293380Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:10.954264 containerd[1942]: time="2025-01-13T20:08:10.954215022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:10.956487 containerd[1942]: time="2025-01-13T20:08:10.956440496Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.672003269s" Jan 13 20:08:10.956628 containerd[1942]: time="2025-01-13T20:08:10.956600532Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:08:10.958263 containerd[1942]: time="2025-01-13T20:08:10.958206721Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 20:08:11.487319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070408974.mount: Deactivated successfully. 
Jan 13 20:08:11.500884 containerd[1942]: time="2025-01-13T20:08:11.500309220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:11.502226 containerd[1942]: time="2025-01-13T20:08:11.502147444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 13 20:08:11.504837 containerd[1942]: time="2025-01-13T20:08:11.504740595Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:11.509800 containerd[1942]: time="2025-01-13T20:08:11.509738255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:11.513618 containerd[1942]: time="2025-01-13T20:08:11.512137642Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 553.866969ms" Jan 13 20:08:11.513618 containerd[1942]: time="2025-01-13T20:08:11.512198872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 20:08:11.515276 containerd[1942]: time="2025-01-13T20:08:11.515208094Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 20:08:12.220773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103814253.mount: Deactivated successfully. Jan 13 20:08:14.356887 containerd[1942]: time="2025-01-13T20:08:14.356054685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:14.358666 containerd[1942]: time="2025-01-13T20:08:14.358579758Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Jan 13 20:08:14.361328 containerd[1942]: time="2025-01-13T20:08:14.361249346Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:14.367796 containerd[1942]: time="2025-01-13T20:08:14.367700175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:14.370461 containerd[1942]: time="2025-01-13T20:08:14.370369932Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.855093891s" Jan 13 20:08:14.372223 containerd[1942]: time="2025-01-13T20:08:14.371023533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 13 20:08:15.734504 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 20:08:19.700800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:08:19.709276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:20.033221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:20.042755 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:08:20.078960 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:20.082302 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:08:20.083925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:20.093357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:20.147333 systemd[1]: Reloading requested from client PID 2725 ('systemctl') (unit session-9.scope)... Jan 13 20:08:20.147360 systemd[1]: Reloading... Jan 13 20:08:20.380849 zram_generator::config[2768]: No configuration found. Jan 13 20:08:20.625204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:08:20.793852 systemd[1]: Reloading finished in 645 ms. Jan 13 20:08:20.890593 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:08:20.890860 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:08:20.891439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:20.899393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:21.181457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:21.196346 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:08:21.264794 kubelet[2829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:08:21.264794 kubelet[2829]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:08:21.264794 kubelet[2829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
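The three deprecation warnings above all point at the same migration: the kubelet wants these settings in the file passed via --config rather than on the command line (--pod-infra-container-image is the exception; as the warning says, the sandbox image will come from the CRI instead). A sketch of how the other two flags map onto KubeletConfiguration fields; the values are illustrative, with the volume plugin dir taken from the Flexvolume probe line further down:

```python
# Deprecated flags from the warnings above and the KubeletConfiguration
# (kubelet.config.k8s.io/v1beta1) fields that replace them.
flag_to_field = {
    "--container-runtime-endpoint": (
        "containerRuntimeEndpoint", "unix:///run/containerd/containerd.sock"),
    "--volume-plugin-dir": (
        "volumePluginDir", "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"),
}

print("apiVersion: kubelet.config.k8s.io/v1beta1")
print("kind: KubeletConfiguration")
for flag, (field, value) in flag_to_field.items():
    print(f'{field}: "{value}"  # replaces {flag}')
```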
Jan 13 20:08:21.265377 kubelet[2829]: I0113 20:08:21.264979 2829 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:08:22.981615 kubelet[2829]: I0113 20:08:22.981488 2829 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:08:22.981615 kubelet[2829]: I0113 20:08:22.981558 2829 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:08:22.983859 kubelet[2829]: I0113 20:08:22.983309 2829 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:08:23.031272 kubelet[2829]: E0113 20:08:23.031193 2829 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.169:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.169:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:08:23.035027 kubelet[2829]: I0113 20:08:23.034966 2829 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:08:23.045913 kubelet[2829]: E0113 20:08:23.045864 2829 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:08:23.046329 kubelet[2829]: I0113 20:08:23.046171 2829 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:08:23.054329 kubelet[2829]: I0113 20:08:23.054243 2829 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:08:23.054499 kubelet[2829]: I0113 20:08:23.054474 2829 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:08:23.054779 kubelet[2829]: I0113 20:08:23.054726 2829 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:08:23.055094 kubelet[2829]: I0113 20:08:23.054781 2829 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-169","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:08:23.055278 kubelet[2829]: I0113 20:08:23.055139 2829 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:08:23.055278 kubelet[2829]: I0113 20:08:23.055160 2829 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:08:23.055405 kubelet[2829]: I0113 20:08:23.055354 2829 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:08:23.057791 kubelet[2829]: I0113 20:08:23.057750 2829 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:08:23.057791 kubelet[2829]: I0113 20:08:23.057789 2829 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:08:23.057966 kubelet[2829]: I0113 20:08:23.057888 2829 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:08:23.057966 kubelet[2829]: I0113 20:08:23.057911 2829 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:08:23.063607 kubelet[2829]: I0113 20:08:23.063537 2829 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:08:23.066756 kubelet[2829]: I0113 20:08:23.066584 2829 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:08:23.068846 kubelet[2829]: W0113 20:08:23.067887 2829 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:08:23.069149 kubelet[2829]: I0113 20:08:23.069121 2829 server.go:1269] "Started kubelet" Jan 13 20:08:23.069502 kubelet[2829]: W0113 20:08:23.069447 2829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.169:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.169:6443: connect: connection refused Jan 13 20:08:23.069649 kubelet[2829]: E0113 20:08:23.069620 2829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.169:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.169:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:08:23.070987 kubelet[2829]: W0113 20:08:23.070907 2829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-169&limit=500&resourceVersion=0": dial tcp 172.31.28.169:6443: connect: connection refused Jan 13 20:08:23.071137 kubelet[2829]: E0113 20:08:23.070994 2829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-169&limit=500&resourceVersion=0\": dial tcp 172.31.28.169:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:08:23.072173 kubelet[2829]: I0113 20:08:23.072109 2829 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:08:23.079171 kubelet[2829]: I0113 20:08:23.079081 2829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:08:23.079926 kubelet[2829]: I0113 20:08:23.079895 2829 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:08:23.081850 kubelet[2829]: I0113 20:08:23.081777 2829 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:08:23.082719 kubelet[2829]: I0113 20:08:23.082671 2829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:08:23.088954 kubelet[2829]: E0113 20:08:23.086745 2829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.169:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.169:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-169.181a59667fb88485 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-169,UID:ip-172-31-28-169,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-169,},FirstTimestamp:2025-01-13 20:08:23.069082757 +0000 UTC m=+1.866702904,LastTimestamp:2025-01-13 20:08:23.069082757 +0000 UTC m=+1.866702904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-169,}" Jan 13 20:08:23.093863 kubelet[2829]: I0113 20:08:23.092539 2829 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:08:23.094228 kubelet[2829]: E0113 20:08:23.094184 2829 kubelet_node_status.go:453] "Error getting 
the current node from lister" err="node \"ip-172-31-28-169\" not found" Jan 13 20:08:23.097254 kubelet[2829]: I0113 20:08:23.097216 2829 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:08:23.097583 kubelet[2829]: I0113 20:08:23.097562 2829 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:08:23.097997 kubelet[2829]: I0113 20:08:23.097974 2829 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:08:23.098676 kubelet[2829]: E0113 20:08:23.098621 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-169?timeout=10s\": dial tcp 172.31.28.169:6443: connect: connection refused" interval="200ms" Jan 13 20:08:23.099169 kubelet[2829]: I0113 20:08:23.099138 2829 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:08:23.099427 kubelet[2829]: I0113 20:08:23.099396 2829 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:08:23.103075 kubelet[2829]: E0113 20:08:23.103038 2829 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:08:23.103643 kubelet[2829]: I0113 20:08:23.103610 2829 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:08:23.115716 kubelet[2829]: W0113 20:08:23.115637 2829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.169:6443: connect: connection refused Jan 13 20:08:23.116046 kubelet[2829]: E0113 20:08:23.115947 2829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.169:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:08:23.127010 kubelet[2829]: I0113 20:08:23.126893 2829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:08:23.132408 kubelet[2829]: I0113 20:08:23.132343 2829 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:08:23.132408 kubelet[2829]: I0113 20:08:23.132400 2829 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:08:23.132612 kubelet[2829]: I0113 20:08:23.132434 2829 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:08:23.132612 kubelet[2829]: E0113 20:08:23.132514 2829 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:08:23.139383 kubelet[2829]: W0113 20:08:23.139252 2829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.169:6443: connect: connection refused Jan 13 20:08:23.139651 kubelet[2829]: E0113 20:08:23.139354 2829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.169:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:08:23.142847 kubelet[2829]: I0113 20:08:23.142564 2829 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:08:23.142847 kubelet[2829]: I0113 20:08:23.142592 2829 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:08:23.142847 kubelet[2829]: I0113 20:08:23.142624 2829 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:08:23.147371 kubelet[2829]: I0113 20:08:23.147325 2829 policy_none.go:49] "None policy: Start" Jan 13 20:08:23.148694 kubelet[2829]: I0113 20:08:23.148661 2829 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:08:23.148842 kubelet[2829]: I0113 20:08:23.148705 2829 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:08:23.160571 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:08:23.172874 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:08:23.179899 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:08:23.194017 kubelet[2829]: I0113 20:08:23.193779 2829 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:08:23.194510 kubelet[2829]: E0113 20:08:23.194369 2829 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-169\" not found" Jan 13 20:08:23.194510 kubelet[2829]: I0113 20:08:23.194468 2829 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:08:23.194865 kubelet[2829]: I0113 20:08:23.194488 2829 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:08:23.195308 kubelet[2829]: I0113 20:08:23.195287 2829 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:08:23.200512 kubelet[2829]: E0113 20:08:23.200402 2829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-169\" not found" Jan 13 20:08:23.249465 systemd[1]: Created slice kubepods-burstable-podffb2f22824058abeb25188caa1acd9ce.slice - libcontainer container kubepods-burstable-podffb2f22824058abeb25188caa1acd9ce.slice. 
Jan 13 20:08:23.272988 systemd[1]: Created slice kubepods-burstable-pod62ab4774bc52552ee02c4ec6812cf60a.slice - libcontainer container kubepods-burstable-pod62ab4774bc52552ee02c4ec6812cf60a.slice. Jan 13 20:08:23.289965 systemd[1]: Created slice kubepods-burstable-podd0d6c1c5df3fc0a08c430afbff421b86.slice - libcontainer container kubepods-burstable-podd0d6c1c5df3fc0a08c430afbff421b86.slice. Jan 13 20:08:23.297534 kubelet[2829]: I0113 20:08:23.297489 2829 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-169" Jan 13 20:08:23.298362 kubelet[2829]: E0113 20:08:23.298316 2829 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.169:6443/api/v1/nodes\": dial tcp 172.31.28.169:6443: connect: connection refused" node="ip-172-31-28-169" Jan 13 20:08:23.298456 kubelet[2829]: I0113 20:08:23.298407 2829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffb2f22824058abeb25188caa1acd9ce-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-169\" (UID: \"ffb2f22824058abeb25188caa1acd9ce\") " pod="kube-system/kube-apiserver-ip-172-31-28-169" Jan 13 20:08:23.298540 kubelet[2829]: I0113 20:08:23.298446 2829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:23.298540 kubelet[2829]: I0113 20:08:23.298485 2829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:23.298540 kubelet[2829]: I0113 20:08:23.298518 2829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:23.298686 kubelet[2829]: I0113 20:08:23.298551 2829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:23.298686 kubelet[2829]: I0113 20:08:23.298635 2829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffb2f22824058abeb25188caa1acd9ce-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-169\" (UID: \"ffb2f22824058abeb25188caa1acd9ce\") " pod="kube-system/kube-apiserver-ip-172-31-28-169" Jan 13 20:08:23.298686 kubelet[2829]: I0113 20:08:23.298670 2829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:23.298863 kubelet[2829]: I0113 20:08:23.298708 2829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0d6c1c5df3fc0a08c430afbff421b86-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-169\" (UID: \"d0d6c1c5df3fc0a08c430afbff421b86\") " pod="kube-system/kube-scheduler-ip-172-31-28-169" Jan 13 20:08:23.298863 kubelet[2829]: I0113 20:08:23.298740 2829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffb2f22824058abeb25188caa1acd9ce-ca-certs\") pod \"kube-apiserver-ip-172-31-28-169\" (UID: \"ffb2f22824058abeb25188caa1acd9ce\") " pod="kube-system/kube-apiserver-ip-172-31-28-169" Jan 13 20:08:23.299418 kubelet[2829]: E0113 20:08:23.299372 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-169?timeout=10s\": dial tcp 172.31.28.169:6443: connect: connection refused" interval="400ms" Jan 13 20:08:23.501119 kubelet[2829]: I0113 20:08:23.500879 2829 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-169" Jan 13 20:08:23.501968 kubelet[2829]: E0113 20:08:23.501434 2829 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.169:6443/api/v1/nodes\": dial tcp 172.31.28.169:6443: connect: connection refused" node="ip-172-31-28-169" Jan 13 20:08:23.568464 containerd[1942]: time="2025-01-13T20:08:23.568391252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-169,Uid:ffb2f22824058abeb25188caa1acd9ce,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:23.586565 containerd[1942]: time="2025-01-13T20:08:23.586492609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-169,Uid:62ab4774bc52552ee02c4ec6812cf60a,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:23.597443 containerd[1942]: time="2025-01-13T20:08:23.597366861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-169,Uid:d0d6c1c5df3fc0a08c430afbff421b86,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:23.700219 kubelet[2829]: E0113 20:08:23.700142 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-169?timeout=10s\": dial tcp 172.31.28.169:6443: connect: connection refused" interval="800ms" Jan 13 20:08:23.904073 kubelet[2829]: I0113 20:08:23.903929 2829 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-169" Jan 13 20:08:23.904898 kubelet[2829]: E0113 20:08:23.904841 2829 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.169:6443/api/v1/nodes\": dial tcp 172.31.28.169:6443: connect: connection refused" node="ip-172-31-28-169" Jan 13 20:08:23.933535 kubelet[2829]: W0113 20:08:23.933442 2829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-169&limit=500&resourceVersion=0": dial tcp 
172.31.28.169:6443: connect: connection refused Jan 13 20:08:23.933686 kubelet[2829]: E0113 20:08:23.933544 2829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-169&limit=500&resourceVersion=0\": dial tcp 172.31.28.169:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:08:24.096822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836621023.mount: Deactivated successfully. Jan 13 20:08:24.110879 containerd[1942]: time="2025-01-13T20:08:24.110211940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:24.114476 containerd[1942]: time="2025-01-13T20:08:24.114393947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 20:08:24.121473 containerd[1942]: time="2025-01-13T20:08:24.121178570Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:24.123544 containerd[1942]: time="2025-01-13T20:08:24.123461160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:24.125339 containerd[1942]: time="2025-01-13T20:08:24.125260416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:08:24.129921 containerd[1942]: time="2025-01-13T20:08:24.128890867Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:24.131875 containerd[1942]: time="2025-01-13T20:08:24.131624156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:24.133723 containerd[1942]: time="2025-01-13T20:08:24.133348665Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:08:24.133723 containerd[1942]: time="2025-01-13T20:08:24.133432132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.930223ms" Jan 13 20:08:24.145169 containerd[1942]: time="2025-01-13T20:08:24.145111385Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.622822ms" Jan 13 20:08:24.146791 containerd[1942]: time="2025-01-13T20:08:24.146717238Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.109907ms" Jan 13 20:08:24.326546 kubelet[2829]: W0113 20:08:24.326495 2829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.169:6443: connect: connection refused Jan 13 20:08:24.328175 kubelet[2829]: E0113 20:08:24.327190 2829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.169:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:08:24.355518 containerd[1942]: time="2025-01-13T20:08:24.355281960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:24.356937 containerd[1942]: time="2025-01-13T20:08:24.356419039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:24.356937 containerd[1942]: time="2025-01-13T20:08:24.356496700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:24.356937 containerd[1942]: time="2025-01-13T20:08:24.356680400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:24.359407 containerd[1942]: time="2025-01-13T20:08:24.358439632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:24.359407 containerd[1942]: time="2025-01-13T20:08:24.358546691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:24.359407 containerd[1942]: time="2025-01-13T20:08:24.358582313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:24.359407 containerd[1942]: time="2025-01-13T20:08:24.358719129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:24.364672 containerd[1942]: time="2025-01-13T20:08:24.364499961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:24.364889 containerd[1942]: time="2025-01-13T20:08:24.364785946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:24.366065 containerd[1942]: time="2025-01-13T20:08:24.365756464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:24.366065 containerd[1942]: time="2025-01-13T20:08:24.365942479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:24.403142 systemd[1]: Started cri-containerd-6156609e2abfa901516d940f4d94f90a634763a734213340eee0246ccc330f6b.scope - libcontainer container 6156609e2abfa901516d940f4d94f90a634763a734213340eee0246ccc330f6b. Jan 13 20:08:24.435130 systemd[1]: Started cri-containerd-2a6a4416bd2c10551b2309cdb9e560890d7d7485948cb3bf9a2098a4ebd8b4cc.scope - libcontainer container 2a6a4416bd2c10551b2309cdb9e560890d7d7485948cb3bf9a2098a4ebd8b4cc. Jan 13 20:08:24.438603 systemd[1]: Started cri-containerd-da2e3cf038fcaece30e37c680e1c9c4235fb650778641c14f1cb769e597dc7e2.scope - libcontainer container da2e3cf038fcaece30e37c680e1c9c4235fb650778641c14f1cb769e597dc7e2. Jan 13 20:08:24.502408 kubelet[2829]: E0113 20:08:24.502342 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-169?timeout=10s\": dial tcp 172.31.28.169:6443: connect: connection refused" interval="1.6s" Jan 13 20:08:24.567616 containerd[1942]: time="2025-01-13T20:08:24.567545044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-169,Uid:62ab4774bc52552ee02c4ec6812cf60a,Namespace:kube-system,Attempt:0,} returns sandbox id \"da2e3cf038fcaece30e37c680e1c9c4235fb650778641c14f1cb769e597dc7e2\"" Jan 13 20:08:24.579701 containerd[1942]: time="2025-01-13T20:08:24.578236939Z" level=info msg="CreateContainer within sandbox \"da2e3cf038fcaece30e37c680e1c9c4235fb650778641c14f1cb769e597dc7e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:08:24.599575 kubelet[2829]: W0113 20:08:24.598950 2829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.169:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.169:6443: connect: connection refused Jan 13 20:08:24.599575 kubelet[2829]: E0113 20:08:24.599047 2829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.169:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.169:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:08:24.616238 containerd[1942]: time="2025-01-13T20:08:24.615473512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-169,Uid:ffb2f22824058abeb25188caa1acd9ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a6a4416bd2c10551b2309cdb9e560890d7d7485948cb3bf9a2098a4ebd8b4cc\"" Jan 13 20:08:24.618244 kubelet[2829]: W0113 20:08:24.618181 2829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.169:6443: connect: connection refused Jan 13 20:08:24.618728 kubelet[2829]: E0113 20:08:24.618688 2829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.169:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:08:24.619739 containerd[1942]: time="2025-01-13T20:08:24.619678165Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-169,Uid:d0d6c1c5df3fc0a08c430afbff421b86,Namespace:kube-system,Attempt:0,} returns sandbox id \"6156609e2abfa901516d940f4d94f90a634763a734213340eee0246ccc330f6b\"" Jan 13 20:08:24.629889 containerd[1942]: time="2025-01-13T20:08:24.629352117Z" level=info msg="CreateContainer within sandbox \"2a6a4416bd2c10551b2309cdb9e560890d7d7485948cb3bf9a2098a4ebd8b4cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:08:24.638007 containerd[1942]: time="2025-01-13T20:08:24.637449315Z" level=info msg="CreateContainer within sandbox \"6156609e2abfa901516d940f4d94f90a634763a734213340eee0246ccc330f6b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:08:24.672783 containerd[1942]: time="2025-01-13T20:08:24.672691249Z" level=info msg="CreateContainer within sandbox \"da2e3cf038fcaece30e37c680e1c9c4235fb650778641c14f1cb769e597dc7e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447\"" Jan 13 20:08:24.673740 containerd[1942]: time="2025-01-13T20:08:24.673688574Z" level=info msg="StartContainer for \"c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447\"" Jan 13 20:08:24.685353 containerd[1942]: time="2025-01-13T20:08:24.685156757Z" level=info msg="CreateContainer within sandbox \"6156609e2abfa901516d940f4d94f90a634763a734213340eee0246ccc330f6b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846\"" Jan 13 20:08:24.686360 containerd[1942]: time="2025-01-13T20:08:24.686077764Z" level=info msg="StartContainer for \"7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846\"" Jan 13 20:08:24.686360 containerd[1942]: time="2025-01-13T20:08:24.686238940Z" level=info msg="CreateContainer within sandbox \"2a6a4416bd2c10551b2309cdb9e560890d7d7485948cb3bf9a2098a4ebd8b4cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"260649d68e7b63667ff8508ac81c6b40f507bd72f771c0bca2386033471a742e\"" Jan 13 20:08:24.687115 containerd[1942]: time="2025-01-13T20:08:24.686887551Z" level=info msg="StartContainer for \"260649d68e7b63667ff8508ac81c6b40f507bd72f771c0bca2386033471a742e\"" Jan 13 20:08:24.708310 kubelet[2829]: I0113 20:08:24.708243 2829 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-169" Jan 13 20:08:24.709145 kubelet[2829]: E0113 20:08:24.709073 2829 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.169:6443/api/v1/nodes\": dial tcp 172.31.28.169:6443: connect: connection refused" node="ip-172-31-28-169" Jan 13 20:08:24.753504 systemd[1]: Started cri-containerd-260649d68e7b63667ff8508ac81c6b40f507bd72f771c0bca2386033471a742e.scope - libcontainer container 260649d68e7b63667ff8508ac81c6b40f507bd72f771c0bca2386033471a742e. Jan 13 20:08:24.778172 systemd[1]: Started cri-containerd-7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846.scope - libcontainer container 7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846. Jan 13 20:08:24.782359 systemd[1]: Started cri-containerd-c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447.scope - libcontainer container c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447. 
Jan 13 20:08:24.878006 containerd[1942]: time="2025-01-13T20:08:24.877856104Z" level=info msg="StartContainer for \"260649d68e7b63667ff8508ac81c6b40f507bd72f771c0bca2386033471a742e\" returns successfully" Jan 13 20:08:24.893276 containerd[1942]: time="2025-01-13T20:08:24.893010295Z" level=info msg="StartContainer for \"c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447\" returns successfully" Jan 13 20:08:24.945956 containerd[1942]: time="2025-01-13T20:08:24.945886000Z" level=info msg="StartContainer for \"7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846\" returns successfully" Jan 13 20:08:26.311752 kubelet[2829]: I0113 20:08:26.311701 2829 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-169" Jan 13 20:08:28.257917 kubelet[2829]: E0113 20:08:28.257848 2829 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-169\" not found" node="ip-172-31-28-169" Jan 13 20:08:28.384095 kubelet[2829]: I0113 20:08:28.382466 2829 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-169" Jan 13 20:08:28.384095 kubelet[2829]: E0113 20:08:28.382532 2829 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-169\": node \"ip-172-31-28-169\" not found" Jan 13 20:08:28.508647 kubelet[2829]: E0113 20:08:28.508169 2829 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-28-169\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-169" Jan 13 20:08:29.065146 kubelet[2829]: I0113 20:08:29.064832 2829 apiserver.go:52] "Watching apiserver" Jan 13 20:08:29.098765 kubelet[2829]: I0113 20:08:29.098700 2829 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:08:29.663650 update_engine[1917]: I20250113 20:08:29.663558 1917 update_attempter.cc:509] Updating boot flags... Jan 13 20:08:29.823951 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3120) Jan 13 20:08:30.312861 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3120) Jan 13 20:08:30.761971 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3120) Jan 13 20:08:31.173856 systemd[1]: Reloading requested from client PID 3374 ('systemctl') (unit session-9.scope)... Jan 13 20:08:31.173886 systemd[1]: Reloading... Jan 13 20:08:31.423852 zram_generator::config[3414]: No configuration found. Jan 13 20:08:31.703001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:08:31.902713 systemd[1]: Reloading finished in 728 ms. Jan 13 20:08:31.980460 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:31.994539 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:08:31.995013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:31.995089 systemd[1]: kubelet.service: Consumed 2.621s CPU time, 114.3M memory peak, 0B memory swap peak. Jan 13 20:08:32.004437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:32.319563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
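Most kubelet entries in this transcript use the klog header visible above: a severity letter (I/W/E/F), MMDD, wall-clock time, PID, and source file:line, followed by the message. For anyone post-processing this log, a minimal parser for that header; the regex is mine, the format itself is exactly what the entries show:

```python
import re

# Matches e.g.: I0113 20:08:23.069121 2829 server.go:1269] "Started kubelet"
KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>[\d:.]+)\s+'
    r'(?P<pid>\d+) (?P<src>[\w./_-]+:\d+)\] (?P<msg>.*)'
)

line = 'E0113 20:08:28.257848 2829 nodelease.go:49] "Failed to get node..."'
m = KLOG.match(line)
print(m.group("sev"), m.group("src"), "->", m.group("msg"))
# E nodelease.go:49 -> "Failed to get node..."
```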
Jan 13 20:08:32.334588 (kubelet)[3474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:08:32.417837 kubelet[3474]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:08:32.419840 kubelet[3474]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:08:32.419840 kubelet[3474]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:08:32.419840 kubelet[3474]: I0113 20:08:32.418420 3474 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:08:32.438907 kubelet[3474]: I0113 20:08:32.438864 3474 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:08:32.439106 kubelet[3474]: I0113 20:08:32.439085 3474 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:08:32.439639 kubelet[3474]: I0113 20:08:32.439608 3474 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:08:32.442973 kubelet[3474]: I0113 20:08:32.442936 3474 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:08:32.447310 kubelet[3474]: I0113 20:08:32.447268 3474 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:08:32.454663 kubelet[3474]: E0113 20:08:32.454611 3474 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:08:32.455053 kubelet[3474]: I0113 20:08:32.455004 3474 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:08:32.462197 kubelet[3474]: I0113 20:08:32.462159 3474 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:08:32.462592 kubelet[3474]: I0113 20:08:32.462568 3474 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:08:32.463006 kubelet[3474]: I0113 20:08:32.462954 3474 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:08:32.463679 kubelet[3474]: I0113 20:08:32.463119 3474 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-169","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:08:32.463679 kubelet[3474]: I0113 20:08:32.463428 3474 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:08:32.463679 kubelet[3474]: I0113 20:08:32.463447 3474 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:08:32.463679 kubelet[3474]: I0113 20:08:32.463504 3474 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:08:32.464186 kubelet[3474]: I0113 20:08:32.464128 3474 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:08:32.465370 kubelet[3474]: I0113 20:08:32.464872 3474 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:08:32.465370 kubelet[3474]: I0113 20:08:32.464942 3474 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:08:32.465370 kubelet[3474]: I0113 20:08:32.464973 3474 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:08:32.468281 kubelet[3474]: I0113 20:08:32.468200 3474 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:08:32.469695 kubelet[3474]: I0113 20:08:32.469183 3474 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:08:32.470077 kubelet[3474]: I0113 20:08:32.469948 3474 server.go:1269] "Started kubelet" Jan 13 20:08:32.473478 kubelet[3474]: I0113 20:08:32.473394 3474 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:08:32.485989 kubelet[3474]: I0113 
20:08:32.485903 3474 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:08:32.494788 sudo[3491]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:08:32.495455 sudo[3491]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:08:32.498995 kubelet[3474]: I0113 20:08:32.498938 3474 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:08:32.506279 kubelet[3474]: I0113 20:08:32.506175 3474 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:08:32.508339 kubelet[3474]: I0113 20:08:32.506568 3474 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:08:32.510291 kubelet[3474]: I0113 20:08:32.510238 3474 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:08:32.518275 kubelet[3474]: I0113 20:08:32.516332 3474 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:08:32.518275 kubelet[3474]: E0113 20:08:32.516718 3474 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-169\" not found" Jan 13 20:08:32.522217 kubelet[3474]: I0113 20:08:32.521674 3474 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:08:32.522217 kubelet[3474]: I0113 20:08:32.521979 3474 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:08:32.531860 kubelet[3474]: I0113 20:08:32.530449 3474 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:08:32.532211 kubelet[3474]: I0113 20:08:32.532173 3474 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:08:32.555260 kubelet[3474]: I0113 20:08:32.555214 3474 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:08:32.557579 kubelet[3474]: I0113 20:08:32.557538 3474 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:08:32.557740 kubelet[3474]: I0113 20:08:32.557721 3474 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:08:32.557888 kubelet[3474]: I0113 20:08:32.557871 3474 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:08:32.558087 kubelet[3474]: E0113 20:08:32.558051 3474 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:08:32.580882 kubelet[3474]: I0113 20:08:32.579316 3474 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:08:32.597305 kubelet[3474]: E0113 20:08:32.597248 3474 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:08:32.659106 kubelet[3474]: E0113 20:08:32.659063 3474 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:08:32.725932 kubelet[3474]: I0113 20:08:32.725494 3474 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:08:32.725932 kubelet[3474]: I0113 20:08:32.725524 3474 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:08:32.725932 kubelet[3474]: I0113 20:08:32.725556 3474 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:08:32.727048 kubelet[3474]: I0113 20:08:32.725794 3474 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:08:32.727048 kubelet[3474]: I0113 20:08:32.726428 3474 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:08:32.727048 kubelet[3474]: I0113 20:08:32.726472 3474 policy_none.go:49] "None policy: Start" Jan 13 20:08:32.728184 kubelet[3474]: I0113 20:08:32.728034 3474 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:08:32.728184 kubelet[3474]: I0113 20:08:32.728082 3474 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:08:32.728417 kubelet[3474]: I0113 20:08:32.728384 3474 state_mem.go:75] "Updated machine memory state" Jan 13 20:08:32.744563 kubelet[3474]: I0113 20:08:32.744483 3474 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:08:32.745746 kubelet[3474]: I0113 20:08:32.745541 3474 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:08:32.745746 kubelet[3474]: I0113 20:08:32.745592 3474 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:08:32.747106 kubelet[3474]: I0113 20:08:32.746860 3474 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:08:32.876114 kubelet[3474]: I0113 20:08:32.874139 3474 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-169" Jan 13 20:08:32.888949 kubelet[3474]: I0113 20:08:32.888893 3474 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-28-169" Jan 13 20:08:32.889242 kubelet[3474]: I0113 20:08:32.889034 3474 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-169" Jan 13 20:08:32.924701 kubelet[3474]: I0113 20:08:32.924651 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffb2f22824058abeb25188caa1acd9ce-ca-certs\") pod \"kube-apiserver-ip-172-31-28-169\" (UID: \"ffb2f22824058abeb25188caa1acd9ce\") " pod="kube-system/kube-apiserver-ip-172-31-28-169" Jan 13 20:08:32.925254 kubelet[3474]: I0113 20:08:32.925143 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffb2f22824058abeb25188caa1acd9ce-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-169\" (UID: \"ffb2f22824058abeb25188caa1acd9ce\") " pod="kube-system/kube-apiserver-ip-172-31-28-169" Jan 13 20:08:32.925254 kubelet[3474]: I0113 20:08:32.925213 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " 
pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:32.925669 kubelet[3474]: I0113 20:08:32.925520 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:32.925669 kubelet[3474]: I0113 20:08:32.925614 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0d6c1c5df3fc0a08c430afbff421b86-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-169\" (UID: \"d0d6c1c5df3fc0a08c430afbff421b86\") " pod="kube-system/kube-scheduler-ip-172-31-28-169" Jan 13 20:08:32.926191 kubelet[3474]: I0113 20:08:32.926019 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:32.926191 kubelet[3474]: I0113 20:08:32.926074 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:32.926191 kubelet[3474]: I0113 20:08:32.926139 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62ab4774bc52552ee02c4ec6812cf60a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-169\" (UID: \"62ab4774bc52552ee02c4ec6812cf60a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-169" Jan 13 20:08:32.926513 kubelet[3474]: I0113 20:08:32.926382 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffb2f22824058abeb25188caa1acd9ce-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-169\" (UID: \"ffb2f22824058abeb25188caa1acd9ce\") " pod="kube-system/kube-apiserver-ip-172-31-28-169" Jan 13 20:08:33.351057 sudo[3491]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:33.493064 kubelet[3474]: I0113 20:08:33.491202 3474 apiserver.go:52] "Watching apiserver" Jan 13 20:08:33.522131 kubelet[3474]: I0113 20:08:33.522035 3474 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:08:33.540209 kubelet[3474]: I0113 20:08:33.539207 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-169" podStartSLOduration=1.5388978500000001 podStartE2EDuration="1.53889785s" podCreationTimestamp="2025-01-13 20:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:33.538421378 +0000 UTC m=+1.193217859" watchObservedRunningTime="2025-01-13 20:08:33.53889785 +0000 UTC m=+1.193694331" Jan 13 20:08:33.569705 kubelet[3474]: I0113 
20:08:33.569468 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-169" podStartSLOduration=1.569446454 podStartE2EDuration="1.569446454s" podCreationTimestamp="2025-01-13 20:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:33.555167294 +0000 UTC m=+1.209963763" watchObservedRunningTime="2025-01-13 20:08:33.569446454 +0000 UTC m=+1.224242935" Jan 13 20:08:33.684942 kubelet[3474]: I0113 20:08:33.684482 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-169" podStartSLOduration=1.684460263 podStartE2EDuration="1.684460263s" podCreationTimestamp="2025-01-13 20:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:33.571352738 +0000 UTC m=+1.226149231" watchObservedRunningTime="2025-01-13 20:08:33.684460263 +0000 UTC m=+1.339256744" Jan 13 20:08:36.061151 sudo[2287]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:36.083274 sshd[2286]: Connection closed by 147.75.109.163 port 52590 Jan 13 20:08:36.085169 sshd-session[2284]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:36.092321 systemd[1]: sshd@8-172.31.28.169:22-147.75.109.163:52590.service: Deactivated successfully. Jan 13 20:08:36.096175 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:08:36.097407 systemd[1]: session-9.scope: Consumed 9.385s CPU time, 155.5M memory peak, 0B memory swap peak. Jan 13 20:08:36.098838 systemd-logind[1914]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:08:36.100721 systemd-logind[1914]: Removed session 9. Jan 13 20:08:38.247744 kubelet[3474]: I0113 20:08:38.247420 3474 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:08:38.248324 containerd[1942]: time="2025-01-13T20:08:38.247954638Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:08:38.251704 kubelet[3474]: I0113 20:08:38.250272 3474 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:08:39.208700 systemd[1]: Created slice kubepods-besteffort-podf3e1180c_cb1e_4635_a6ef_70537975ba8d.slice - libcontainer container kubepods-besteffort-podf3e1180c_cb1e_4635_a6ef_70537975ba8d.slice. Jan 13 20:08:39.261655 systemd[1]: Created slice kubepods-burstable-podcb681531_5f52_4368_9118_05e452b2044c.slice - libcontainer container kubepods-burstable-podcb681531_5f52_4368_9118_05e452b2044c.slice. 
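The pod_startup_latency_tracker entries above encode a simple subtraction: podStartSLOduration is the watch-observed running time minus the pod's creation timestamp (the static pods were never pulled, hence the zero-valued pulling timestamps). A worked check against the kube-apiserver entry, with the log's nanosecond value rounded to microseconds for Python's datetime:

```python
from datetime import datetime

created  = datetime.fromisoformat("2025-01-13 20:08:32+00:00")
observed = datetime.fromisoformat("2025-01-13 20:08:33.538898+00:00")

# Reproduces the logged podStartSLOduration=1.53889785s (to rounding).
print((observed - created).total_seconds())  # 1.538898
```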
Jan 13 20:08:39.270383 kubelet[3474]: I0113 20:08:39.269577 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3e1180c-cb1e-4635-a6ef-70537975ba8d-kube-proxy\") pod \"kube-proxy-rftrs\" (UID: \"f3e1180c-cb1e-4635-a6ef-70537975ba8d\") " pod="kube-system/kube-proxy-rftrs" Jan 13 20:08:39.270383 kubelet[3474]: I0113 20:08:39.269644 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-host-proc-sys-kernel\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.270383 kubelet[3474]: I0113 20:08:39.269685 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xll\" (UniqueName: \"kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-kube-api-access-v7xll\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.270383 kubelet[3474]: I0113 20:08:39.269737 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-hubble-tls\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.270383 kubelet[3474]: I0113 20:08:39.269775 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8wlc\" (UniqueName: \"kubernetes.io/projected/f3e1180c-cb1e-4635-a6ef-70537975ba8d-kube-api-access-v8wlc\") pod \"kube-proxy-rftrs\" (UID: \"f3e1180c-cb1e-4635-a6ef-70537975ba8d\") " pod="kube-system/kube-proxy-rftrs" Jan 13 20:08:39.271206 kubelet[3474]: I0113 20:08:39.269835 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cni-path\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271206 kubelet[3474]: I0113 20:08:39.269874 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb681531-5f52-4368-9118-05e452b2044c-cilium-config-path\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271206 kubelet[3474]: I0113 20:08:39.269910 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-hostproc\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271206 kubelet[3474]: I0113 20:08:39.269943 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-host-proc-sys-net\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271206 kubelet[3474]: I0113 20:08:39.269983 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-xtables-lock\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271206 kubelet[3474]: I0113 20:08:39.270016 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3e1180c-cb1e-4635-a6ef-70537975ba8d-lib-modules\") pod \"kube-proxy-rftrs\" (UID: \"f3e1180c-cb1e-4635-a6ef-70537975ba8d\") " pod="kube-system/kube-proxy-rftrs" Jan 13 20:08:39.271542 kubelet[3474]: I0113 20:08:39.270052 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cilium-cgroup\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271542 kubelet[3474]: I0113 20:08:39.270099 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-lib-modules\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271542 kubelet[3474]: I0113 20:08:39.270141 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb681531-5f52-4368-9118-05e452b2044c-clustermesh-secrets\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271542 kubelet[3474]: I0113 20:08:39.270180 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-bpf-maps\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271542 kubelet[3474]: I0113 20:08:39.270219 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3e1180c-cb1e-4635-a6ef-70537975ba8d-xtables-lock\") pod \"kube-proxy-rftrs\" (UID: \"f3e1180c-cb1e-4635-a6ef-70537975ba8d\") " pod="kube-system/kube-proxy-rftrs" Jan 13 20:08:39.271542 kubelet[3474]: I0113 20:08:39.270253 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cilium-run\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.271899 kubelet[3474]: I0113 20:08:39.270289 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-etc-cni-netd\") pod \"cilium-n7clt\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " pod="kube-system/cilium-n7clt" Jan 13 20:08:39.273361 kubelet[3474]: W0113 20:08:39.273298 3474 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-28-169" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ip-172-31-28-169' and this object Jan 13 20:08:39.273540 kubelet[3474]: E0113 20:08:39.273367 3474 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-28-169\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-169' and this object" logger="UnhandledError" Jan 13 20:08:39.274122 kubelet[3474]: W0113 20:08:39.273690 3474 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-28-169" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-169' and this object Jan 13 20:08:39.274122 kubelet[3474]: E0113 20:08:39.273735 3474 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-28-169\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-169' and this object" logger="UnhandledError" Jan 13 20:08:39.274122 kubelet[3474]: W0113 20:08:39.274113 3474 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-28-169" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-169' and this object Jan 13 20:08:39.274355 kubelet[3474]: E0113 20:08:39.274174 3474 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-28-169\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-169' and this object" logger="UnhandledError" Jan 13 20:08:39.474626 kubelet[3474]: I0113 20:08:39.472934 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f9309ca-f664-4445-a2c5-6ed0db002d62-cilium-config-path\") pod \"cilium-operator-5d85765b45-x8mmv\" (UID: \"5f9309ca-f664-4445-a2c5-6ed0db002d62\") " pod="kube-system/cilium-operator-5d85765b45-x8mmv" Jan 13 20:08:39.474626 kubelet[3474]: I0113 20:08:39.473027 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp7pz\" (UniqueName: \"kubernetes.io/projected/5f9309ca-f664-4445-a2c5-6ed0db002d62-kube-api-access-cp7pz\") pod \"cilium-operator-5d85765b45-x8mmv\" (UID: \"5f9309ca-f664-4445-a2c5-6ed0db002d62\") " pod="kube-system/cilium-operator-5d85765b45-x8mmv" Jan 13 20:08:39.490907 systemd[1]: Created slice kubepods-besteffort-pod5f9309ca_f664_4445_a2c5_6ed0db002d62.slice - libcontainer container kubepods-besteffort-pod5f9309ca_f664_4445_a2c5_6ed0db002d62.slice. 
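
Each kubelet message in these journal entries is a klog line: a severity letter (I, W, E, F), an MMDD date, wall-clock time, the emitting thread id (3474, which is also the kubelet PID in brackets), a file:line source, and the message payload. The "no relationship found between node ... and this object" warnings above are transient: the node authorizer only lets a kubelet read a secret or configmap once a pod referencing it is bound to that node, so the reflector's first list fails and is retried. A small sketch that splits one of those warnings into its klog fields; the regex is illustrative, not the canonical klog grammar:

    import re

    KLOG = re.compile(
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4})\s+'      # severity + month/day
        r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'    # wall-clock time
        r'(?P<tid>\d+)\s+'                        # thread id
        r'(?P<src>[\w.]+:\d+)\]\s+'               # source file:line
        r'(?P<msg>.*)'                            # message payload
    )

    line = ('W0113 20:08:39.273298 3474 reflector.go:561] '
            'object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret')
    m = KLOG.match(line)
    print(m.group("sev"), m.group("src"), "-", m.group("msg"))
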
Jan 13 20:08:39.524309 containerd[1942]: time="2025-01-13T20:08:39.524258420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rftrs,Uid:f3e1180c-cb1e-4635-a6ef-70537975ba8d,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:39.575871 containerd[1942]: time="2025-01-13T20:08:39.574540004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:39.575871 containerd[1942]: time="2025-01-13T20:08:39.574735148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:39.575871 containerd[1942]: time="2025-01-13T20:08:39.574797512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:39.575871 containerd[1942]: time="2025-01-13T20:08:39.575036276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:39.622141 systemd[1]: Started cri-containerd-a8658706b8ef104d80090c724dc539d6c1a69de124d2ee6799e65eeb12a1dff1.scope - libcontainer container a8658706b8ef104d80090c724dc539d6c1a69de124d2ee6799e65eeb12a1dff1. Jan 13 20:08:39.664292 containerd[1942]: time="2025-01-13T20:08:39.664237965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rftrs,Uid:f3e1180c-cb1e-4635-a6ef-70537975ba8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8658706b8ef104d80090c724dc539d6c1a69de124d2ee6799e65eeb12a1dff1\"" Jan 13 20:08:39.670355 containerd[1942]: time="2025-01-13T20:08:39.670243329Z" level=info msg="CreateContainer within sandbox \"a8658706b8ef104d80090c724dc539d6c1a69de124d2ee6799e65eeb12a1dff1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:08:39.700597 containerd[1942]: time="2025-01-13T20:08:39.700524333Z" level=info msg="CreateContainer within sandbox \"a8658706b8ef104d80090c724dc539d6c1a69de124d2ee6799e65eeb12a1dff1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dcd265539284748421f6eeb1c7e5d95459cabf99222c4bbf34cfdfc518bbf0f8\"" Jan 13 20:08:39.703872 containerd[1942]: time="2025-01-13T20:08:39.701795037Z" level=info msg="StartContainer for \"dcd265539284748421f6eeb1c7e5d95459cabf99222c4bbf34cfdfc518bbf0f8\"" Jan 13 20:08:39.756173 systemd[1]: Started cri-containerd-dcd265539284748421f6eeb1c7e5d95459cabf99222c4bbf34cfdfc518bbf0f8.scope - libcontainer container dcd265539284748421f6eeb1c7e5d95459cabf99222c4bbf34cfdfc518bbf0f8. Jan 13 20:08:39.817626 containerd[1942]: time="2025-01-13T20:08:39.817554321Z" level=info msg="StartContainer for \"dcd265539284748421f6eeb1c7e5d95459cabf99222c4bbf34cfdfc518bbf0f8\" returns successfully" Jan 13 20:08:40.373523 kubelet[3474]: E0113 20:08:40.373468 3474 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:08:40.374088 kubelet[3474]: E0113 20:08:40.373607 3474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb681531-5f52-4368-9118-05e452b2044c-cilium-config-path podName:cb681531-5f52-4368-9118-05e452b2044c nodeName:}" failed. No retries permitted until 2025-01-13 20:08:40.873572824 +0000 UTC m=+8.528369293 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/cb681531-5f52-4368-9118-05e452b2044c-cilium-config-path") pod "cilium-n7clt" (UID: "cb681531-5f52-4368-9118-05e452b2044c") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:08:40.374088 kubelet[3474]: E0113 20:08:40.373483 3474 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 13 20:08:40.374088 kubelet[3474]: E0113 20:08:40.373926 3474 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-n7clt: failed to sync secret cache: timed out waiting for the condition Jan 13 20:08:40.374088 kubelet[3474]: E0113 20:08:40.373994 3474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-hubble-tls podName:cb681531-5f52-4368-9118-05e452b2044c nodeName:}" failed. No retries permitted until 2025-01-13 20:08:40.873976276 +0000 UTC m=+8.528772757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-hubble-tls") pod "cilium-n7clt" (UID: "cb681531-5f52-4368-9118-05e452b2044c") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:08:40.682077 kubelet[3474]: I0113 20:08:40.680054 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rftrs" podStartSLOduration=1.680030398 podStartE2EDuration="1.680030398s" podCreationTimestamp="2025-01-13 20:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:40.67798885 +0000 UTC m=+8.332785343" watchObservedRunningTime="2025-01-13 20:08:40.680030398 +0000 UTC m=+8.334826879" Jan 13 20:08:40.697281 containerd[1942]: time="2025-01-13T20:08:40.697202866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-x8mmv,Uid:5f9309ca-f664-4445-a2c5-6ed0db002d62,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:40.750963 containerd[1942]: time="2025-01-13T20:08:40.750757234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:40.751223 containerd[1942]: time="2025-01-13T20:08:40.750917602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:40.751546 containerd[1942]: time="2025-01-13T20:08:40.751313110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:40.751730 containerd[1942]: time="2025-01-13T20:08:40.751654570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:40.795170 systemd[1]: Started cri-containerd-d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237.scope - libcontainer container d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237. 
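
The MountVolume.SetUp failures here are expected on first boot rather than fatal: the operation is parked and retried after durationBeforeRetry, which starts at 500ms and grows on repeated failure. A minimal sketch of that per-operation exponential backoff; the 500ms initial delay comes from the log above, while the 2x factor and roughly two-minute cap are assumptions taken from upstream kubelet defaults and may differ by version:

    import itertools

    def backoff_delays(initial=0.5, factor=2.0, cap=122.0):
        """Yield successive retry delays in seconds, doubling up to a cap."""
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    print(list(itertools.islice(backoff_delays(), 6)))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]

In this journal a single 500ms retry appears to suffice: the secret and configmap caches sync once the node authorizer admits the watches, and the cilium-n7clt sandbox is created shortly after.
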
Jan 13 20:08:40.854500 containerd[1942]: time="2025-01-13T20:08:40.854438927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-x8mmv,Uid:5f9309ca-f664-4445-a2c5-6ed0db002d62,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237\"" Jan 13 20:08:40.863093 containerd[1942]: time="2025-01-13T20:08:40.862913339Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:08:41.069225 containerd[1942]: time="2025-01-13T20:08:41.069156800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n7clt,Uid:cb681531-5f52-4368-9118-05e452b2044c,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:41.114701 containerd[1942]: time="2025-01-13T20:08:41.114209492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:41.114701 containerd[1942]: time="2025-01-13T20:08:41.114306224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:41.114701 containerd[1942]: time="2025-01-13T20:08:41.114331508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:41.114701 containerd[1942]: time="2025-01-13T20:08:41.114462068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:41.146125 systemd[1]: Started cri-containerd-e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9.scope - libcontainer container e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9. Jan 13 20:08:41.187791 containerd[1942]: time="2025-01-13T20:08:41.187696808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n7clt,Uid:cb681531-5f52-4368-9118-05e452b2044c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\"" Jan 13 20:08:45.842606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993230081.mount: Deactivated successfully. 
Jan 13 20:08:49.577333 containerd[1942]: time="2025-01-13T20:08:49.577253598Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:49.579079 containerd[1942]: time="2025-01-13T20:08:49.578992086Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137078" Jan 13 20:08:49.581249 containerd[1942]: time="2025-01-13T20:08:49.581174658Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:49.584680 containerd[1942]: time="2025-01-13T20:08:49.584083482Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 8.721093871s" Jan 13 20:08:49.584680 containerd[1942]: time="2025-01-13T20:08:49.584142486Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:08:49.586370 containerd[1942]: time="2025-01-13T20:08:49.586254330Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:08:49.592305 containerd[1942]: time="2025-01-13T20:08:49.591943098Z" level=info msg="CreateContainer within sandbox \"d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:08:49.622102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610483341.mount: Deactivated successfully. Jan 13 20:08:49.627553 containerd[1942]: time="2025-01-13T20:08:49.627496146Z" level=info msg="CreateContainer within sandbox \"d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\"" Jan 13 20:08:49.628596 containerd[1942]: time="2025-01-13T20:08:49.628528254Z" level=info msg="StartContainer for \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\"" Jan 13 20:08:49.682117 systemd[1]: Started cri-containerd-17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7.scope - libcontainer container 17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7. Jan 13 20:08:49.740322 containerd[1942]: time="2025-01-13T20:08:49.740229043Z" level=info msg="StartContainer for \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\" returns successfully" Jan 13 20:09:03.064498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3801031119.mount: Deactivated successfully. 
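
The pull records give both bytes transferred and elapsed time, so the effective pull rate falls out directly (the "size" field, 17128551, is containerd's recorded image size and differs slightly from the 17137078 bytes read off the wire). For illustration, with the larger cilium pull from the next entries included for comparison:

    # Effective pull rates from the two image pulls in this journal.
    pulls = {
        "operator-generic": (17_137_078, 8.721093871),    # bytes read, seconds
        "cilium":           (157_651_478, 15.982451395),
    }
    for name, (nbytes, secs) in pulls.items():
        print(f"{name}: {nbytes / secs / 2**20:.2f} MiB/s")
    # operator-generic: 1.87 MiB/s; cilium: 9.41 MiB/s
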
Jan 13 20:09:05.561240 containerd[1942]: time="2025-01-13T20:09:05.561179469Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:05.563641 containerd[1942]: time="2025-01-13T20:09:05.563559969Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651478" Jan 13 20:09:05.565571 containerd[1942]: time="2025-01-13T20:09:05.565478169Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:05.569318 containerd[1942]: time="2025-01-13T20:09:05.568774269Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 15.982451395s" Jan 13 20:09:05.569318 containerd[1942]: time="2025-01-13T20:09:05.568859457Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:09:05.574119 containerd[1942]: time="2025-01-13T20:09:05.574060425Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:09:05.596376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464599849.mount: Deactivated successfully. Jan 13 20:09:05.600746 containerd[1942]: time="2025-01-13T20:09:05.600654357Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\"" Jan 13 20:09:05.601851 containerd[1942]: time="2025-01-13T20:09:05.601775577Z" level=info msg="StartContainer for \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\"" Jan 13 20:09:05.663121 systemd[1]: Started cri-containerd-a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6.scope - libcontainer container a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6. Jan 13 20:09:05.710989 containerd[1942]: time="2025-01-13T20:09:05.710899030Z" level=info msg="StartContainer for \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\" returns successfully" Jan 13 20:09:05.730027 systemd[1]: cri-containerd-a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6.scope: Deactivated successfully. 
Jan 13 20:09:05.776216 kubelet[3474]: I0113 20:09:05.775615 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-x8mmv" podStartSLOduration=18.049967591 podStartE2EDuration="26.77559523s" podCreationTimestamp="2025-01-13 20:08:39 +0000 UTC" firstStartedPulling="2025-01-13 20:08:40.859933799 +0000 UTC m=+8.514730268" lastFinishedPulling="2025-01-13 20:08:49.585561414 +0000 UTC m=+17.240357907" observedRunningTime="2025-01-13 20:08:50.7770875 +0000 UTC m=+18.431884005" watchObservedRunningTime="2025-01-13 20:09:05.77559523 +0000 UTC m=+33.430391699" Jan 13 20:09:06.589920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6-rootfs.mount: Deactivated successfully. Jan 13 20:09:06.669523 containerd[1942]: time="2025-01-13T20:09:06.669438467Z" level=info msg="shim disconnected" id=a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6 namespace=k8s.io Jan 13 20:09:06.669523 containerd[1942]: time="2025-01-13T20:09:06.669515975Z" level=warning msg="cleaning up after shim disconnected" id=a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6 namespace=k8s.io Jan 13 20:09:06.670492 containerd[1942]: time="2025-01-13T20:09:06.669537599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:06.754928 containerd[1942]: time="2025-01-13T20:09:06.754855367Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:09:06.782859 containerd[1942]: time="2025-01-13T20:09:06.775554227Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\"" Jan 13 20:09:06.782859 containerd[1942]: time="2025-01-13T20:09:06.779311667Z" level=info msg="StartContainer for \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\"" Jan 13 20:09:06.784631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447629880.mount: Deactivated successfully. Jan 13 20:09:06.855114 systemd[1]: Started cri-containerd-98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957.scope - libcontainer container 98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957. Jan 13 20:09:06.904219 containerd[1942]: time="2025-01-13T20:09:06.904148124Z" level=info msg="StartContainer for \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\" returns successfully" Jan 13 20:09:06.926251 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:09:06.927500 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:09:06.927626 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:09:06.935572 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:09:06.936071 systemd[1]: cri-containerd-98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957.scope: Deactivated successfully. Jan 13 20:09:06.980170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
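
The cilium-operator startup record above also shows how the latency tracker discounts image pulls: podStartSLOduration is podStartE2EDuration minus the pulling window bounded by firstStartedPulling and lastFinishedPulling. Checking the arithmetic with the logged values (both pull timestamps fall inside minute 20:08, so plain seconds suffice):

    e2e = 26.77559523                      # podStartE2EDuration, seconds
    pulling = 49.585561414 - 40.859933799  # lastFinishedPulling - firstStartedPulling
    print(e2e - pulling)                   # ~18.0499676, matching podStartSLOduration=18.049967591 up to float noise
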
Jan 13 20:09:06.992720 containerd[1942]: time="2025-01-13T20:09:06.992572512Z" level=info msg="shim disconnected" id=98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957 namespace=k8s.io Jan 13 20:09:06.993106 containerd[1942]: time="2025-01-13T20:09:06.992864484Z" level=warning msg="cleaning up after shim disconnected" id=98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957 namespace=k8s.io Jan 13 20:09:06.993106 containerd[1942]: time="2025-01-13T20:09:06.992890032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:07.589355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957-rootfs.mount: Deactivated successfully. Jan 13 20:09:07.763941 containerd[1942]: time="2025-01-13T20:09:07.763795284Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:09:07.794043 containerd[1942]: time="2025-01-13T20:09:07.793783500Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\"" Jan 13 20:09:07.803263 containerd[1942]: time="2025-01-13T20:09:07.803198760Z" level=info msg="StartContainer for \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\"" Jan 13 20:09:07.867159 systemd[1]: Started cri-containerd-5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9.scope - libcontainer container 5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9. Jan 13 20:09:07.923559 containerd[1942]: time="2025-01-13T20:09:07.923488117Z" level=info msg="StartContainer for \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\" returns successfully" Jan 13 20:09:07.931085 systemd[1]: cri-containerd-5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9.scope: Deactivated successfully. Jan 13 20:09:07.971080 containerd[1942]: time="2025-01-13T20:09:07.970979245Z" level=info msg="shim disconnected" id=5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9 namespace=k8s.io Jan 13 20:09:07.971080 containerd[1942]: time="2025-01-13T20:09:07.971067709Z" level=warning msg="cleaning up after shim disconnected" id=5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9 namespace=k8s.io Jan 13 20:09:07.971461 containerd[1942]: time="2025-01-13T20:09:07.971089177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:08.589339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9-rootfs.mount: Deactivated successfully. 
Jan 13 20:09:08.766159 containerd[1942]: time="2025-01-13T20:09:08.765781681Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:09:08.789530 containerd[1942]: time="2025-01-13T20:09:08.789473389Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\"" Jan 13 20:09:08.796321 containerd[1942]: time="2025-01-13T20:09:08.796257829Z" level=info msg="StartContainer for \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\"" Jan 13 20:09:08.798470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253119093.mount: Deactivated successfully. Jan 13 20:09:08.860106 systemd[1]: Started cri-containerd-9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2.scope - libcontainer container 9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2. Jan 13 20:09:08.936564 containerd[1942]: time="2025-01-13T20:09:08.935847290Z" level=info msg="StartContainer for \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\" returns successfully" Jan 13 20:09:08.936123 systemd[1]: cri-containerd-9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2.scope: Deactivated successfully. Jan 13 20:09:08.986599 containerd[1942]: time="2025-01-13T20:09:08.986519894Z" level=info msg="shim disconnected" id=9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2 namespace=k8s.io Jan 13 20:09:08.986599 containerd[1942]: time="2025-01-13T20:09:08.986594918Z" level=warning msg="cleaning up after shim disconnected" id=9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2 namespace=k8s.io Jan 13 20:09:08.987180 containerd[1942]: time="2025-01-13T20:09:08.986618534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:09.589526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2-rootfs.mount: Deactivated successfully. Jan 13 20:09:09.775925 containerd[1942]: time="2025-01-13T20:09:09.775494362Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:09:09.820466 containerd[1942]: time="2025-01-13T20:09:09.820391186Z" level=info msg="CreateContainer within sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\"" Jan 13 20:09:09.821745 containerd[1942]: time="2025-01-13T20:09:09.821412830Z" level=info msg="StartContainer for \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\"" Jan 13 20:09:09.878138 systemd[1]: Started cri-containerd-1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7.scope - libcontainer container 1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7. 
Jan 13 20:09:09.931551 containerd[1942]: time="2025-01-13T20:09:09.931455807Z" level=info msg="StartContainer for \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\" returns successfully" Jan 13 20:09:10.107562 kubelet[3474]: I0113 20:09:10.107505 3474 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:09:10.179916 systemd[1]: Created slice kubepods-burstable-pod9e25a219_ee90_47b7_a529_3a2e32d4a303.slice - libcontainer container kubepods-burstable-pod9e25a219_ee90_47b7_a529_3a2e32d4a303.slice. Jan 13 20:09:10.195897 systemd[1]: Created slice kubepods-burstable-podfcefe4d3_6ad4_47c4_920c_a00303046cea.slice - libcontainer container kubepods-burstable-podfcefe4d3_6ad4_47c4_920c_a00303046cea.slice. Jan 13 20:09:10.202744 kubelet[3474]: I0113 20:09:10.202704 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcefe4d3-6ad4-47c4-920c-a00303046cea-config-volume\") pod \"coredns-6f6b679f8f-qb9pk\" (UID: \"fcefe4d3-6ad4-47c4-920c-a00303046cea\") " pod="kube-system/coredns-6f6b679f8f-qb9pk" Jan 13 20:09:10.203210 kubelet[3474]: I0113 20:09:10.202998 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e25a219-ee90-47b7-a529-3a2e32d4a303-config-volume\") pod \"coredns-6f6b679f8f-4f86s\" (UID: \"9e25a219-ee90-47b7-a529-3a2e32d4a303\") " pod="kube-system/coredns-6f6b679f8f-4f86s" Jan 13 20:09:10.203210 kubelet[3474]: I0113 20:09:10.203071 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b6r4\" (UniqueName: \"kubernetes.io/projected/fcefe4d3-6ad4-47c4-920c-a00303046cea-kube-api-access-8b6r4\") pod \"coredns-6f6b679f8f-qb9pk\" (UID: \"fcefe4d3-6ad4-47c4-920c-a00303046cea\") " pod="kube-system/coredns-6f6b679f8f-qb9pk" Jan 13 20:09:10.203210 kubelet[3474]: I0113 20:09:10.203142 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld55d\" (UniqueName: \"kubernetes.io/projected/9e25a219-ee90-47b7-a529-3a2e32d4a303-kube-api-access-ld55d\") pod \"coredns-6f6b679f8f-4f86s\" (UID: \"9e25a219-ee90-47b7-a529-3a2e32d4a303\") " pod="kube-system/coredns-6f6b679f8f-4f86s" Jan 13 20:09:10.494620 containerd[1942]: time="2025-01-13T20:09:10.494550818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4f86s,Uid:9e25a219-ee90-47b7-a529-3a2e32d4a303,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:10.505245 containerd[1942]: time="2025-01-13T20:09:10.505169438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qb9pk,Uid:fcefe4d3-6ad4-47c4-920c-a00303046cea,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:10.825026 kubelet[3474]: I0113 20:09:10.824798 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n7clt" podStartSLOduration=7.444773138 podStartE2EDuration="31.824753091s" podCreationTimestamp="2025-01-13 20:08:39 +0000 UTC" firstStartedPulling="2025-01-13 20:08:41.190608728 +0000 UTC m=+8.845405209" lastFinishedPulling="2025-01-13 20:09:05.570588681 +0000 UTC m=+33.225385162" observedRunningTime="2025-01-13 20:09:10.817653975 +0000 UTC m=+38.472450480" watchObservedRunningTime="2025-01-13 20:09:10.824753091 +0000 UTC m=+38.479549560" Jan 13 20:09:12.757325 (udev-worker)[4280]: Network interface NamePolicy= disabled on kernel 
command line. Jan 13 20:09:12.760931 systemd-networkd[1842]: cilium_host: Link UP Jan 13 20:09:12.762365 systemd-networkd[1842]: cilium_net: Link UP Jan 13 20:09:12.762374 systemd-networkd[1842]: cilium_net: Gained carrier Jan 13 20:09:12.762751 systemd-networkd[1842]: cilium_host: Gained carrier Jan 13 20:09:12.770894 (udev-worker)[4315]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:12.958474 systemd-networkd[1842]: cilium_vxlan: Link UP Jan 13 20:09:12.958493 systemd-networkd[1842]: cilium_vxlan: Gained carrier Jan 13 20:09:13.234098 systemd-networkd[1842]: cilium_host: Gained IPv6LL Jan 13 20:09:13.444859 kernel: NET: Registered PF_ALG protocol family Jan 13 20:09:13.627004 systemd-networkd[1842]: cilium_net: Gained IPv6LL Jan 13 20:09:14.586165 systemd-networkd[1842]: cilium_vxlan: Gained IPv6LL Jan 13 20:09:14.747046 systemd-networkd[1842]: lxc_health: Link UP Jan 13 20:09:14.753331 (udev-worker)[4331]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:14.758420 systemd-networkd[1842]: lxc_health: Gained carrier Jan 13 20:09:15.157234 systemd-networkd[1842]: lxcc1207d4e6b2c: Link UP Jan 13 20:09:15.168865 kernel: eth0: renamed from tmpe407f Jan 13 20:09:15.175190 systemd-networkd[1842]: lxcc1207d4e6b2c: Gained carrier Jan 13 20:09:15.211143 systemd-networkd[1842]: lxca8c84271ce95: Link UP Jan 13 20:09:15.221399 kernel: eth0: renamed from tmpb7b65 Jan 13 20:09:15.234118 systemd-networkd[1842]: lxca8c84271ce95: Gained carrier Jan 13 20:09:15.237025 (udev-worker)[4330]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:16.122204 systemd-networkd[1842]: lxc_health: Gained IPv6LL Jan 13 20:09:17.082566 systemd-networkd[1842]: lxca8c84271ce95: Gained IPv6LL Jan 13 20:09:17.146510 systemd-networkd[1842]: lxcc1207d4e6b2c: Gained IPv6LL Jan 13 20:09:18.256407 systemd[1]: Started sshd@9-172.31.28.169:22-147.75.109.163:38790.service - OpenSSH per-connection server daemon (147.75.109.163:38790). Jan 13 20:09:18.453391 sshd[4683]: Accepted publickey for core from 147.75.109.163 port 38790 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:18.456962 sshd-session[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:18.468800 systemd-logind[1914]: New session 10 of user core. Jan 13 20:09:18.477412 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:09:18.809460 sshd[4685]: Connection closed by 147.75.109.163 port 38790 Jan 13 20:09:18.814184 sshd-session[4683]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:18.821480 systemd[1]: sshd@9-172.31.28.169:22-147.75.109.163:38790.service: Deactivated successfully. Jan 13 20:09:18.828366 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:09:18.834038 systemd-logind[1914]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:09:18.838261 systemd-logind[1914]: Removed session 10. 
Jan 13 20:09:19.822782 ntpd[1909]: Listen normally on 8 cilium_host 192.168.0.121:123 Jan 13 20:09:19.823587 ntpd[1909]: Listen normally on 9 cilium_net [fe80::7cbd:abff:fe8b:7544%4]:123 Jan 13 20:09:19.823678 ntpd[1909]: Listen normally on 10 cilium_host [fe80::500f:94ff:fe27:c256%5]:123 Jan 13 20:09:19.823752 ntpd[1909]: Listen normally on 11 cilium_vxlan [fe80::d434:daff:fedf:6414%6]:123 Jan 13 20:09:19.823857 ntpd[1909]: Listen normally on 12 lxc_health [fe80::45e:eaff:fe26:574d%8]:123 Jan 13 20:09:19.823934 ntpd[1909]: Listen normally on 13 lxcc1207d4e6b2c [fe80::2b:a5ff:fe0f:ef42%10]:123 Jan 13 20:09:19.824008 ntpd[1909]: Listen normally on 14 lxca8c84271ce95 [fe80::dc04:8eff:fe8a:9b1e%12]:123 Jan 13 20:09:23.604854 containerd[1942]: time="2025-01-13T20:09:23.602094627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:23.604854 containerd[1942]: time="2025-01-13T20:09:23.602208999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:23.604854 containerd[1942]: time="2025-01-13T20:09:23.602247159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:23.604854 containerd[1942]: time="2025-01-13T20:09:23.602398491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:23.636220 containerd[1942]: time="2025-01-13T20:09:23.628629963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:23.636220 containerd[1942]: time="2025-01-13T20:09:23.628821879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:23.636220 containerd[1942]: time="2025-01-13T20:09:23.628910991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:23.636220 containerd[1942]: time="2025-01-13T20:09:23.629446935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:23.681112 systemd[1]: Started cri-containerd-b7b659f0d7d0e285f87d7a4d665965f3c22c381cd9cc5a10fbae3e1c3aaf3596.scope - libcontainer container b7b659f0d7d0e285f87d7a4d665965f3c22c381cd9cc5a10fbae3e1c3aaf3596.
Jan 13 20:09:23.686582 systemd[1]: Started cri-containerd-e407f637e484e7895fe574c7d3d92d7226a10e3e2b5161639b8b4a3b310b2f17.scope - libcontainer container e407f637e484e7895fe574c7d3d92d7226a10e3e2b5161639b8b4a3b310b2f17. Jan 13 20:09:23.809278 containerd[1942]: time="2025-01-13T20:09:23.809224888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qb9pk,Uid:fcefe4d3-6ad4-47c4-920c-a00303046cea,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7b659f0d7d0e285f87d7a4d665965f3c22c381cd9cc5a10fbae3e1c3aaf3596\"" Jan 13 20:09:23.824117 containerd[1942]: time="2025-01-13T20:09:23.822975916Z" level=info msg="CreateContainer within sandbox \"b7b659f0d7d0e285f87d7a4d665965f3c22c381cd9cc5a10fbae3e1c3aaf3596\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:09:23.855956 systemd[1]: Started sshd@10-172.31.28.169:22-147.75.109.163:38792.service - OpenSSH per-connection server daemon (147.75.109.163:38792). Jan 13 20:09:23.864144 containerd[1942]: time="2025-01-13T20:09:23.864034468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4f86s,Uid:9e25a219-ee90-47b7-a529-3a2e32d4a303,Namespace:kube-system,Attempt:0,} returns sandbox id \"e407f637e484e7895fe574c7d3d92d7226a10e3e2b5161639b8b4a3b310b2f17\"" Jan 13 20:09:23.889835 containerd[1942]: time="2025-01-13T20:09:23.889357600Z" level=info msg="CreateContainer within sandbox \"e407f637e484e7895fe574c7d3d92d7226a10e3e2b5161639b8b4a3b310b2f17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:09:23.908838 containerd[1942]: time="2025-01-13T20:09:23.908746900Z" level=info msg="CreateContainer within sandbox \"b7b659f0d7d0e285f87d7a4d665965f3c22c381cd9cc5a10fbae3e1c3aaf3596\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd7d62cdd1b29baa791925b20f3c6a04a57742b570aa7e17671db3e2d4fecf25\"" Jan 13 20:09:23.911856 containerd[1942]: time="2025-01-13T20:09:23.910376800Z" level=info msg="StartContainer for \"bd7d62cdd1b29baa791925b20f3c6a04a57742b570aa7e17671db3e2d4fecf25\"" Jan 13 20:09:23.967861 containerd[1942]: time="2025-01-13T20:09:23.967769441Z" level=info msg="CreateContainer within sandbox \"e407f637e484e7895fe574c7d3d92d7226a10e3e2b5161639b8b4a3b310b2f17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32f75f2a200d8de0826d84707727110ade0d23ae5ebf7ffccbab751372dbf765\"" Jan 13 20:09:23.972843 containerd[1942]: time="2025-01-13T20:09:23.971419625Z" level=info msg="StartContainer for \"32f75f2a200d8de0826d84707727110ade0d23ae5ebf7ffccbab751372dbf765\"" Jan 13 20:09:23.999294 systemd[1]: Started cri-containerd-bd7d62cdd1b29baa791925b20f3c6a04a57742b570aa7e17671db3e2d4fecf25.scope - libcontainer container bd7d62cdd1b29baa791925b20f3c6a04a57742b570aa7e17671db3e2d4fecf25. Jan 13 20:09:24.058108 systemd[1]: Started cri-containerd-32f75f2a200d8de0826d84707727110ade0d23ae5ebf7ffccbab751372dbf765.scope - libcontainer container 32f75f2a200d8de0826d84707727110ade0d23ae5ebf7ffccbab751372dbf765. 
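
Note how the kernel's earlier "eth0: renamed from tmpe407f" and "eth0: renamed from tmpb7b65" lines tie back to the two sandbox ids returned here: the temporary veth name embeds the leading hex digits of the pod sandbox id, so the interface-to-pod mapping is recoverable from the journal alone. A small sketch of that prefix match; the naming convention is inferred from this log rather than guaranteed by the CNI plugin:

    sandboxes = [
        "e407f637e484e7895fe574c7d3d92d7226a10e3e2b5161639b8b4a3b310b2f17",  # coredns-6f6b679f8f-4f86s
        "b7b659f0d7d0e285f87d7a4d665965f3c22c381cd9cc5a10fbae3e1c3aaf3596",  # coredns-6f6b679f8f-qb9pk
    ]
    for iface in ("tmpe407f", "tmpb7b65"):
        prefix = iface.removeprefix("tmp")                       # Python 3.9+
        owner = next(s for s in sandboxes if s.startswith(prefix))
        print(f"{iface} -> sandbox {owner[:12]}...")
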
Jan 13 20:09:24.134492 sshd[4787]: Accepted publickey for core from 147.75.109.163 port 38792 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:24.141273 containerd[1942]: time="2025-01-13T20:09:24.139774970Z" level=info msg="StartContainer for \"bd7d62cdd1b29baa791925b20f3c6a04a57742b570aa7e17671db3e2d4fecf25\" returns successfully" Jan 13 20:09:24.141718 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:24.152080 containerd[1942]: time="2025-01-13T20:09:24.152010446Z" level=info msg="StartContainer for \"32f75f2a200d8de0826d84707727110ade0d23ae5ebf7ffccbab751372dbf765\" returns successfully" Jan 13 20:09:24.156986 systemd-logind[1914]: New session 11 of user core. Jan 13 20:09:24.165219 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:09:24.426891 sshd[4859]: Connection closed by 147.75.109.163 port 38792 Jan 13 20:09:24.428287 sshd-session[4787]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:24.434894 systemd[1]: sshd@10-172.31.28.169:22-147.75.109.163:38792.service: Deactivated successfully. Jan 13 20:09:24.438091 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:09:24.440022 systemd-logind[1914]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:09:24.442291 systemd-logind[1914]: Removed session 11. Jan 13 20:09:24.856048 kubelet[3474]: I0113 20:09:24.855910 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qb9pk" podStartSLOduration=45.855880937 podStartE2EDuration="45.855880937s" podCreationTimestamp="2025-01-13 20:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:24.854688221 +0000 UTC m=+52.509484714" watchObservedRunningTime="2025-01-13 20:09:24.855880937 +0000 UTC m=+52.510677502" Jan 13 20:09:24.880720 kubelet[3474]: I0113 20:09:24.880205 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4f86s" podStartSLOduration=45.880180097 podStartE2EDuration="45.880180097s" podCreationTimestamp="2025-01-13 20:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:24.878463941 +0000 UTC m=+52.533260434" watchObservedRunningTime="2025-01-13 20:09:24.880180097 +0000 UTC m=+52.534976578" Jan 13 20:09:29.471359 systemd[1]: Started sshd@11-172.31.28.169:22-147.75.109.163:56534.service - OpenSSH per-connection server daemon (147.75.109.163:56534). Jan 13 20:09:29.663743 sshd[4881]: Accepted publickey for core from 147.75.109.163 port 56534 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:29.666372 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:29.674446 systemd-logind[1914]: New session 12 of user core. Jan 13 20:09:29.682096 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:09:29.928191 sshd[4883]: Connection closed by 147.75.109.163 port 56534 Jan 13 20:09:29.929090 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:29.934589 systemd[1]: sshd@11-172.31.28.169:22-147.75.109.163:56534.service: Deactivated successfully. Jan 13 20:09:29.938240 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:09:29.941965 systemd-logind[1914]: Session 12 logged out. 
Waiting for processes to exit. Jan 13 20:09:29.943751 systemd-logind[1914]: Removed session 12. Jan 13 20:09:34.973326 systemd[1]: Started sshd@12-172.31.28.169:22-147.75.109.163:56546.service - OpenSSH per-connection server daemon (147.75.109.163:56546). Jan 13 20:09:35.169409 sshd[4900]: Accepted publickey for core from 147.75.109.163 port 56546 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:35.171940 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:35.179250 systemd-logind[1914]: New session 13 of user core. Jan 13 20:09:35.188061 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:09:35.430601 sshd[4902]: Connection closed by 147.75.109.163 port 56546 Jan 13 20:09:35.432193 sshd-session[4900]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:35.438534 systemd[1]: sshd@12-172.31.28.169:22-147.75.109.163:56546.service: Deactivated successfully. Jan 13 20:09:35.441904 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:09:35.443403 systemd-logind[1914]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:09:35.445395 systemd-logind[1914]: Removed session 13. Jan 13 20:09:40.472346 systemd[1]: Started sshd@13-172.31.28.169:22-147.75.109.163:52498.service - OpenSSH per-connection server daemon (147.75.109.163:52498). Jan 13 20:09:40.665474 sshd[4915]: Accepted publickey for core from 147.75.109.163 port 52498 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:40.667985 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:40.676872 systemd-logind[1914]: New session 14 of user core. Jan 13 20:09:40.682090 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:09:40.939887 sshd[4917]: Connection closed by 147.75.109.163 port 52498 Jan 13 20:09:40.940726 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:40.947362 systemd[1]: sshd@13-172.31.28.169:22-147.75.109.163:52498.service: Deactivated successfully. Jan 13 20:09:40.952223 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:09:40.953629 systemd-logind[1914]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:09:40.955442 systemd-logind[1914]: Removed session 14. Jan 13 20:09:40.978386 systemd[1]: Started sshd@14-172.31.28.169:22-147.75.109.163:52502.service - OpenSSH per-connection server daemon (147.75.109.163:52502). Jan 13 20:09:41.174624 sshd[4929]: Accepted publickey for core from 147.75.109.163 port 52502 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:41.177066 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:41.184341 systemd-logind[1914]: New session 15 of user core. Jan 13 20:09:41.192119 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:09:41.514067 sshd[4931]: Connection closed by 147.75.109.163 port 52502 Jan 13 20:09:41.514367 sshd-session[4929]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:41.522248 systemd[1]: sshd@14-172.31.28.169:22-147.75.109.163:52502.service: Deactivated successfully. Jan 13 20:09:41.527629 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:09:41.536159 systemd-logind[1914]: Session 15 logged out. Waiting for processes to exit. 
Jan 13 20:09:41.568473 systemd[1]: Started sshd@15-172.31.28.169:22-147.75.109.163:52512.service - OpenSSH per-connection server daemon (147.75.109.163:52512). Jan 13 20:09:41.571924 systemd-logind[1914]: Removed session 15. Jan 13 20:09:41.748789 sshd[4940]: Accepted publickey for core from 147.75.109.163 port 52512 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:41.751283 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:41.758845 systemd-logind[1914]: New session 16 of user core. Jan 13 20:09:41.769070 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:09:42.009681 sshd[4942]: Connection closed by 147.75.109.163 port 52512 Jan 13 20:09:42.010583 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:42.017377 systemd[1]: sshd@15-172.31.28.169:22-147.75.109.163:52512.service: Deactivated successfully. Jan 13 20:09:42.023438 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:09:42.026111 systemd-logind[1914]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:09:42.028085 systemd-logind[1914]: Removed session 16. Jan 13 20:09:47.052337 systemd[1]: Started sshd@16-172.31.28.169:22-147.75.109.163:52516.service - OpenSSH per-connection server daemon (147.75.109.163:52516). Jan 13 20:09:47.234151 sshd[4953]: Accepted publickey for core from 147.75.109.163 port 52516 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:47.236910 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:47.245506 systemd-logind[1914]: New session 17 of user core. Jan 13 20:09:47.251065 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:09:47.493670 sshd[4955]: Connection closed by 147.75.109.163 port 52516 Jan 13 20:09:47.494269 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:47.500708 systemd[1]: sshd@16-172.31.28.169:22-147.75.109.163:52516.service: Deactivated successfully. Jan 13 20:09:47.504067 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:09:47.507365 systemd-logind[1914]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:09:47.509442 systemd-logind[1914]: Removed session 17. Jan 13 20:09:52.536335 systemd[1]: Started sshd@17-172.31.28.169:22-147.75.109.163:39804.service - OpenSSH per-connection server daemon (147.75.109.163:39804). Jan 13 20:09:52.724579 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 39804 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:52.727893 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:52.735155 systemd-logind[1914]: New session 18 of user core. Jan 13 20:09:52.745074 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:09:52.994130 sshd[4969]: Connection closed by 147.75.109.163 port 39804 Jan 13 20:09:52.995141 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:53.001057 systemd[1]: sshd@17-172.31.28.169:22-147.75.109.163:39804.service: Deactivated successfully. Jan 13 20:09:53.004981 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:09:53.006731 systemd-logind[1914]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:09:53.009111 systemd-logind[1914]: Removed session 18. 
Jan 13 20:09:58.036354 systemd[1]: Started sshd@18-172.31.28.169:22-147.75.109.163:49946.service - OpenSSH per-connection server daemon (147.75.109.163:49946). Jan 13 20:09:58.231981 sshd[4981]: Accepted publickey for core from 147.75.109.163 port 49946 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:58.234528 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:58.242191 systemd-logind[1914]: New session 19 of user core. Jan 13 20:09:58.255067 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:09:58.495764 sshd[4983]: Connection closed by 147.75.109.163 port 49946 Jan 13 20:09:58.496663 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:58.502920 systemd[1]: sshd@18-172.31.28.169:22-147.75.109.163:49946.service: Deactivated successfully. Jan 13 20:09:58.507785 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:09:58.510757 systemd-logind[1914]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:09:58.512768 systemd-logind[1914]: Removed session 19. Jan 13 20:10:03.536368 systemd[1]: Started sshd@19-172.31.28.169:22-147.75.109.163:49952.service - OpenSSH per-connection server daemon (147.75.109.163:49952). Jan 13 20:10:03.718396 sshd[4994]: Accepted publickey for core from 147.75.109.163 port 49952 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:03.720874 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:03.728299 systemd-logind[1914]: New session 20 of user core. Jan 13 20:10:03.738095 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:10:03.984478 sshd[4996]: Connection closed by 147.75.109.163 port 49952 Jan 13 20:10:03.985357 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:03.991763 systemd[1]: sshd@19-172.31.28.169:22-147.75.109.163:49952.service: Deactivated successfully. Jan 13 20:10:03.996783 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:10:03.998789 systemd-logind[1914]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:10:04.000550 systemd-logind[1914]: Removed session 20. Jan 13 20:10:04.024384 systemd[1]: Started sshd@20-172.31.28.169:22-147.75.109.163:49954.service - OpenSSH per-connection server daemon (147.75.109.163:49954). Jan 13 20:10:04.216988 sshd[5007]: Accepted publickey for core from 147.75.109.163 port 49954 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:04.219394 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:04.228306 systemd-logind[1914]: New session 21 of user core. Jan 13 20:10:04.233263 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:10:04.532249 sshd[5009]: Connection closed by 147.75.109.163 port 49954 Jan 13 20:10:04.533106 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:04.538375 systemd-logind[1914]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:10:04.539328 systemd[1]: sshd@20-172.31.28.169:22-147.75.109.163:49954.service: Deactivated successfully. Jan 13 20:10:04.543331 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:10:04.548691 systemd-logind[1914]: Removed session 21. 
Jan 13 20:10:04.576264 systemd[1]: Started sshd@21-172.31.28.169:22-147.75.109.163:49964.service - OpenSSH per-connection server daemon (147.75.109.163:49964). Jan 13 20:10:04.755238 sshd[5018]: Accepted publickey for core from 147.75.109.163 port 49964 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:04.757692 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:04.765226 systemd-logind[1914]: New session 22 of user core. Jan 13 20:10:04.774083 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:10:07.358398 sshd[5020]: Connection closed by 147.75.109.163 port 49964 Jan 13 20:10:07.359429 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:07.372018 systemd[1]: sshd@21-172.31.28.169:22-147.75.109.163:49964.service: Deactivated successfully. Jan 13 20:10:07.382132 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:10:07.387919 systemd-logind[1914]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:10:07.411311 systemd[1]: Started sshd@22-172.31.28.169:22-147.75.109.163:58178.service - OpenSSH per-connection server daemon (147.75.109.163:58178). Jan 13 20:10:07.414537 systemd-logind[1914]: Removed session 22. Jan 13 20:10:07.602486 sshd[5036]: Accepted publickey for core from 147.75.109.163 port 58178 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:07.605008 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:07.613245 systemd-logind[1914]: New session 23 of user core. Jan 13 20:10:07.620093 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:10:08.111977 sshd[5038]: Connection closed by 147.75.109.163 port 58178 Jan 13 20:10:08.112479 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:08.120585 systemd[1]: sshd@22-172.31.28.169:22-147.75.109.163:58178.service: Deactivated successfully. Jan 13 20:10:08.126205 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:10:08.129742 systemd-logind[1914]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:10:08.131745 systemd-logind[1914]: Removed session 23. Jan 13 20:10:08.145382 systemd[1]: Started sshd@23-172.31.28.169:22-147.75.109.163:58190.service - OpenSSH per-connection server daemon (147.75.109.163:58190). Jan 13 20:10:08.341559 sshd[5047]: Accepted publickey for core from 147.75.109.163 port 58190 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:08.344045 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:08.353082 systemd-logind[1914]: New session 24 of user core. Jan 13 20:10:08.361067 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:10:08.600409 sshd[5049]: Connection closed by 147.75.109.163 port 58190 Jan 13 20:10:08.601315 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:08.606777 systemd-logind[1914]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:10:08.609031 systemd[1]: sshd@23-172.31.28.169:22-147.75.109.163:58190.service: Deactivated successfully. Jan 13 20:10:08.612610 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:10:08.615053 systemd-logind[1914]: Removed session 24. 
Jan 13 20:10:13.639371 systemd[1]: Started sshd@24-172.31.28.169:22-147.75.109.163:58198.service - OpenSSH per-connection server daemon (147.75.109.163:58198). Jan 13 20:10:13.821199 sshd[5063]: Accepted publickey for core from 147.75.109.163 port 58198 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:13.823674 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:13.831150 systemd-logind[1914]: New session 25 of user core. Jan 13 20:10:13.838063 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:10:14.109451 sshd[5065]: Connection closed by 147.75.109.163 port 58198 Jan 13 20:10:14.110342 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:14.118010 systemd-logind[1914]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:10:14.119101 systemd[1]: sshd@24-172.31.28.169:22-147.75.109.163:58198.service: Deactivated successfully. Jan 13 20:10:14.122337 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:10:14.124992 systemd-logind[1914]: Removed session 25. Jan 13 20:10:19.151432 systemd[1]: Started sshd@25-172.31.28.169:22-147.75.109.163:35522.service - OpenSSH per-connection server daemon (147.75.109.163:35522). Jan 13 20:10:19.342536 sshd[5079]: Accepted publickey for core from 147.75.109.163 port 35522 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:19.345078 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:19.353917 systemd-logind[1914]: New session 26 of user core. Jan 13 20:10:19.359098 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:10:19.600864 sshd[5081]: Connection closed by 147.75.109.163 port 35522 Jan 13 20:10:19.600937 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:19.608434 systemd-logind[1914]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:10:19.609775 systemd[1]: sshd@25-172.31.28.169:22-147.75.109.163:35522.service: Deactivated successfully. Jan 13 20:10:19.614329 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:10:19.617139 systemd-logind[1914]: Removed session 26. Jan 13 20:10:24.640360 systemd[1]: Started sshd@26-172.31.28.169:22-147.75.109.163:35526.service - OpenSSH per-connection server daemon (147.75.109.163:35526). Jan 13 20:10:24.832071 sshd[5093]: Accepted publickey for core from 147.75.109.163 port 35526 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:24.834586 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:24.842432 systemd-logind[1914]: New session 27 of user core. Jan 13 20:10:24.855112 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:10:25.094480 sshd[5095]: Connection closed by 147.75.109.163 port 35526 Jan 13 20:10:25.096184 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:25.102744 systemd[1]: sshd@26-172.31.28.169:22-147.75.109.163:35526.service: Deactivated successfully. Jan 13 20:10:25.108911 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:10:25.113594 systemd-logind[1914]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:10:25.116513 systemd-logind[1914]: Removed session 27. 
Jan 13 20:10:30.134353 systemd[1]: Started sshd@27-172.31.28.169:22-147.75.109.163:33122.service - OpenSSH per-connection server daemon (147.75.109.163:33122). Jan 13 20:10:30.329344 sshd[5106]: Accepted publickey for core from 147.75.109.163 port 33122 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:30.331908 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:30.340457 systemd-logind[1914]: New session 28 of user core. Jan 13 20:10:30.350076 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 20:10:30.592329 sshd[5108]: Connection closed by 147.75.109.163 port 33122 Jan 13 20:10:30.593249 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:30.598137 systemd[1]: sshd@27-172.31.28.169:22-147.75.109.163:33122.service: Deactivated successfully. Jan 13 20:10:30.601972 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:10:30.606111 systemd-logind[1914]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:10:30.608476 systemd-logind[1914]: Removed session 28. Jan 13 20:10:30.630312 systemd[1]: Started sshd@28-172.31.28.169:22-147.75.109.163:33132.service - OpenSSH per-connection server daemon (147.75.109.163:33132). Jan 13 20:10:30.814472 sshd[5119]: Accepted publickey for core from 147.75.109.163 port 33132 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:30.817131 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:30.827017 systemd-logind[1914]: New session 29 of user core. Jan 13 20:10:30.835187 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 20:10:33.410549 containerd[1942]: time="2025-01-13T20:10:33.410086798Z" level=info msg="StopContainer for \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\" with timeout 30 (s)" Jan 13 20:10:33.416855 containerd[1942]: time="2025-01-13T20:10:33.413880958Z" level=info msg="Stop container \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\" with signal terminated" Jan 13 20:10:33.427061 systemd[1]: run-containerd-runc-k8s.io-1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7-runc.LAJu52.mount: Deactivated successfully. Jan 13 20:10:33.444387 systemd[1]: cri-containerd-17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7.scope: Deactivated successfully. Jan 13 20:10:33.461722 containerd[1942]: time="2025-01-13T20:10:33.461268214Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:10:33.476910 containerd[1942]: time="2025-01-13T20:10:33.476778514Z" level=info msg="StopContainer for \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\" with timeout 2 (s)" Jan 13 20:10:33.477584 containerd[1942]: time="2025-01-13T20:10:33.477464422Z" level=info msg="Stop container \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\" with signal terminated" Jan 13 20:10:33.498900 systemd-networkd[1842]: lxc_health: Link DOWN Jan 13 20:10:33.501335 systemd-networkd[1842]: lxc_health: Lost carrier Jan 13 20:10:33.519499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7-rootfs.mount: Deactivated successfully. 
Jan 13 20:10:33.535199 systemd[1]: cri-containerd-1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7.scope: Deactivated successfully. Jan 13 20:10:33.537983 systemd[1]: cri-containerd-1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7.scope: Consumed 14.204s CPU time. Jan 13 20:10:33.549825 containerd[1942]: time="2025-01-13T20:10:33.549350986Z" level=info msg="shim disconnected" id=17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7 namespace=k8s.io Jan 13 20:10:33.549825 containerd[1942]: time="2025-01-13T20:10:33.549605374Z" level=warning msg="cleaning up after shim disconnected" id=17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7 namespace=k8s.io Jan 13 20:10:33.549825 containerd[1942]: time="2025-01-13T20:10:33.549677434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:33.584256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7-rootfs.mount: Deactivated successfully. Jan 13 20:10:33.588530 containerd[1942]: time="2025-01-13T20:10:33.588453526Z" level=info msg="StopContainer for \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\" returns successfully" Jan 13 20:10:33.589846 containerd[1942]: time="2025-01-13T20:10:33.589695718Z" level=info msg="StopPodSandbox for \"d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237\"" Jan 13 20:10:33.589846 containerd[1942]: time="2025-01-13T20:10:33.589755454Z" level=info msg="Container to stop \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:10:33.593338 containerd[1942]: time="2025-01-13T20:10:33.592746227Z" level=info msg="shim disconnected" id=1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7 namespace=k8s.io Jan 13 20:10:33.593338 containerd[1942]: time="2025-01-13T20:10:33.592844735Z" level=warning msg="cleaning up after shim disconnected" id=1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7 namespace=k8s.io Jan 13 20:10:33.593338 containerd[1942]: time="2025-01-13T20:10:33.592866191Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:33.594497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237-shm.mount: Deactivated successfully. Jan 13 20:10:33.610331 systemd[1]: cri-containerd-d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237.scope: Deactivated successfully. 
Jan 13 20:10:33.636059 containerd[1942]: time="2025-01-13T20:10:33.635975987Z" level=info msg="StopContainer for \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\" returns successfully" Jan 13 20:10:33.637372 containerd[1942]: time="2025-01-13T20:10:33.637285859Z" level=info msg="StopPodSandbox for \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\"" Jan 13 20:10:33.637489 containerd[1942]: time="2025-01-13T20:10:33.637405415Z" level=info msg="Container to stop \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:10:33.637489 containerd[1942]: time="2025-01-13T20:10:33.637432775Z" level=info msg="Container to stop \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:10:33.637617 containerd[1942]: time="2025-01-13T20:10:33.637488887Z" level=info msg="Container to stop \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:10:33.637617 containerd[1942]: time="2025-01-13T20:10:33.637510955Z" level=info msg="Container to stop \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:10:33.637617 containerd[1942]: time="2025-01-13T20:10:33.637534415Z" level=info msg="Container to stop \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:10:33.652542 systemd[1]: cri-containerd-e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9.scope: Deactivated successfully. 
Jan 13 20:10:33.676233 containerd[1942]: time="2025-01-13T20:10:33.675764783Z" level=info msg="shim disconnected" id=d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237 namespace=k8s.io Jan 13 20:10:33.676233 containerd[1942]: time="2025-01-13T20:10:33.675868475Z" level=warning msg="cleaning up after shim disconnected" id=d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237 namespace=k8s.io Jan 13 20:10:33.676233 containerd[1942]: time="2025-01-13T20:10:33.675889175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:33.708412 containerd[1942]: time="2025-01-13T20:10:33.708064679Z" level=info msg="shim disconnected" id=e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9 namespace=k8s.io Jan 13 20:10:33.708412 containerd[1942]: time="2025-01-13T20:10:33.708178787Z" level=warning msg="cleaning up after shim disconnected" id=e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9 namespace=k8s.io Jan 13 20:10:33.708412 containerd[1942]: time="2025-01-13T20:10:33.708199991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:33.716388 containerd[1942]: time="2025-01-13T20:10:33.716235983Z" level=info msg="TearDown network for sandbox \"d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237\" successfully" Jan 13 20:10:33.716735 containerd[1942]: time="2025-01-13T20:10:33.716590403Z" level=info msg="StopPodSandbox for \"d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237\" returns successfully" Jan 13 20:10:33.741227 containerd[1942]: time="2025-01-13T20:10:33.741090767Z" level=info msg="TearDown network for sandbox \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" successfully" Jan 13 20:10:33.741227 containerd[1942]: time="2025-01-13T20:10:33.741145343Z" level=info msg="StopPodSandbox for \"e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9\" returns successfully" Jan 13 20:10:33.842827 kubelet[3474]: I0113 20:10:33.842740 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb681531-5f52-4368-9118-05e452b2044c-clustermesh-secrets\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843413 kubelet[3474]: I0113 20:10:33.842839 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7xll\" (UniqueName: \"kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-kube-api-access-v7xll\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843413 kubelet[3474]: I0113 20:10:33.842882 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-hubble-tls\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843413 kubelet[3474]: I0113 20:10:33.842919 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cni-path\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843413 kubelet[3474]: I0113 20:10:33.842953 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-xtables-lock\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843413 kubelet[3474]: I0113 20:10:33.842984 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-lib-modules\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843413 kubelet[3474]: I0113 20:10:33.843019 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-bpf-maps\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843739 kubelet[3474]: I0113 20:10:33.843054 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-host-proc-sys-kernel\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843739 kubelet[3474]: I0113 20:10:33.843086 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cilium-run\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843739 kubelet[3474]: I0113 20:10:33.843120 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-hostproc\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843739 kubelet[3474]: I0113 20:10:33.843153 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-host-proc-sys-net\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843739 kubelet[3474]: I0113 20:10:33.843188 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cilium-cgroup\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.843739 kubelet[3474]: I0113 20:10:33.843226 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp7pz\" (UniqueName: \"kubernetes.io/projected/5f9309ca-f664-4445-a2c5-6ed0db002d62-kube-api-access-cp7pz\") pod \"5f9309ca-f664-4445-a2c5-6ed0db002d62\" (UID: \"5f9309ca-f664-4445-a2c5-6ed0db002d62\") " Jan 13 20:10:33.844111 kubelet[3474]: I0113 20:10:33.843266 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f9309ca-f664-4445-a2c5-6ed0db002d62-cilium-config-path\") pod \"5f9309ca-f664-4445-a2c5-6ed0db002d62\" (UID: \"5f9309ca-f664-4445-a2c5-6ed0db002d62\") " Jan 13 20:10:33.844111 kubelet[3474]: I0113 20:10:33.843303 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/cb681531-5f52-4368-9118-05e452b2044c-cilium-config-path\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.844111 kubelet[3474]: I0113 20:10:33.843336 3474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-etc-cni-netd\") pod \"cb681531-5f52-4368-9118-05e452b2044c\" (UID: \"cb681531-5f52-4368-9118-05e452b2044c\") " Jan 13 20:10:33.844111 kubelet[3474]: I0113 20:10:33.843438 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.844936 kubelet[3474]: I0113 20:10:33.844442 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.847035 kubelet[3474]: I0113 20:10:33.846984 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.850836 kubelet[3474]: I0113 20:10:33.847711 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-hostproc" (OuterVolumeSpecName: "hostproc") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.850836 kubelet[3474]: I0113 20:10:33.847774 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.850836 kubelet[3474]: I0113 20:10:33.847882 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.850836 kubelet[3474]: I0113 20:10:33.849446 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cni-path" (OuterVolumeSpecName: "cni-path") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.850836 kubelet[3474]: I0113 20:10:33.849484 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.851187 kubelet[3474]: I0113 20:10:33.849509 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.851187 kubelet[3474]: I0113 20:10:33.849532 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:10:33.852102 kubelet[3474]: I0113 20:10:33.852052 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-kube-api-access-v7xll" (OuterVolumeSpecName: "kube-api-access-v7xll") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "kube-api-access-v7xll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:10:33.855319 kubelet[3474]: I0113 20:10:33.855250 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb681531-5f52-4368-9118-05e452b2044c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:10:33.858392 kubelet[3474]: I0113 20:10:33.858326 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:10:33.859557 kubelet[3474]: I0113 20:10:33.859487 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9309ca-f664-4445-a2c5-6ed0db002d62-kube-api-access-cp7pz" (OuterVolumeSpecName: "kube-api-access-cp7pz") pod "5f9309ca-f664-4445-a2c5-6ed0db002d62" (UID: "5f9309ca-f664-4445-a2c5-6ed0db002d62"). InnerVolumeSpecName "kube-api-access-cp7pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:10:33.861729 kubelet[3474]: I0113 20:10:33.861686 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9309ca-f664-4445-a2c5-6ed0db002d62-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5f9309ca-f664-4445-a2c5-6ed0db002d62" (UID: "5f9309ca-f664-4445-a2c5-6ed0db002d62"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:10:33.863148 kubelet[3474]: I0113 20:10:33.863090 3474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb681531-5f52-4368-9118-05e452b2044c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cb681531-5f52-4368-9118-05e452b2044c" (UID: "cb681531-5f52-4368-9118-05e452b2044c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:10:33.944343 kubelet[3474]: I0113 20:10:33.944186 3474 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb681531-5f52-4368-9118-05e452b2044c-clustermesh-secrets\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.944343 kubelet[3474]: I0113 20:10:33.944244 3474 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-v7xll\" (UniqueName: \"kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-kube-api-access-v7xll\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.944343 kubelet[3474]: I0113 20:10:33.944270 3474 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb681531-5f52-4368-9118-05e452b2044c-hubble-tls\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.944343 kubelet[3474]: I0113 20:10:33.944294 3474 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cni-path\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.944343 kubelet[3474]: I0113 20:10:33.944313 3474 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-xtables-lock\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.945752 kubelet[3474]: I0113 20:10:33.945667 3474 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-lib-modules\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.946331 kubelet[3474]: I0113 20:10:33.946006 3474 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-bpf-maps\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.946331 kubelet[3474]: I0113 20:10:33.946201 3474 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-host-proc-sys-kernel\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.946649 kubelet[3474]: I0113 20:10:33.946483 3474 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cilium-run\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.946649 kubelet[3474]: I0113 20:10:33.946516 3474 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-hostproc\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.946649 kubelet[3474]: I0113 20:10:33.946538 3474 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-host-proc-sys-net\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 
20:10:33.946649 kubelet[3474]: I0113 20:10:33.946586 3474 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-cilium-cgroup\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.946649 kubelet[3474]: I0113 20:10:33.946607 3474 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cp7pz\" (UniqueName: \"kubernetes.io/projected/5f9309ca-f664-4445-a2c5-6ed0db002d62-kube-api-access-cp7pz\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.947051 kubelet[3474]: I0113 20:10:33.946627 3474 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f9309ca-f664-4445-a2c5-6ed0db002d62-cilium-config-path\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.947051 kubelet[3474]: I0113 20:10:33.946984 3474 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb681531-5f52-4368-9118-05e452b2044c-cilium-config-path\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:33.947051 kubelet[3474]: I0113 20:10:33.947005 3474 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb681531-5f52-4368-9118-05e452b2044c-etc-cni-netd\") on node \"ip-172-31-28-169\" DevicePath \"\"" Jan 13 20:10:34.027653 kubelet[3474]: I0113 20:10:34.026453 3474 scope.go:117] "RemoveContainer" containerID="1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7" Jan 13 20:10:34.033061 containerd[1942]: time="2025-01-13T20:10:34.032868225Z" level=info msg="RemoveContainer for \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\"" Jan 13 20:10:34.047411 systemd[1]: Removed slice kubepods-burstable-podcb681531_5f52_4368_9118_05e452b2044c.slice - libcontainer container kubepods-burstable-podcb681531_5f52_4368_9118_05e452b2044c.slice. Jan 13 20:10:34.048111 containerd[1942]: time="2025-01-13T20:10:34.047516049Z" level=info msg="RemoveContainer for \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\" returns successfully" Jan 13 20:10:34.047618 systemd[1]: kubepods-burstable-podcb681531_5f52_4368_9118_05e452b2044c.slice: Consumed 14.352s CPU time. Jan 13 20:10:34.050870 kubelet[3474]: I0113 20:10:34.049310 3474 scope.go:117] "RemoveContainer" containerID="9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2" Jan 13 20:10:34.056211 containerd[1942]: time="2025-01-13T20:10:34.056094717Z" level=info msg="RemoveContainer for \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\"" Jan 13 20:10:34.060242 systemd[1]: Removed slice kubepods-besteffort-pod5f9309ca_f664_4445_a2c5_6ed0db002d62.slice - libcontainer container kubepods-besteffort-pod5f9309ca_f664_4445_a2c5_6ed0db002d62.slice. 
Jan 13 20:10:34.063659 containerd[1942]: time="2025-01-13T20:10:34.063526797Z" level=info msg="RemoveContainer for \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\" returns successfully" Jan 13 20:10:34.064117 kubelet[3474]: I0113 20:10:34.064070 3474 scope.go:117] "RemoveContainer" containerID="5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9" Jan 13 20:10:34.069976 containerd[1942]: time="2025-01-13T20:10:34.068638713Z" level=info msg="RemoveContainer for \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\"" Jan 13 20:10:34.096951 containerd[1942]: time="2025-01-13T20:10:34.096563241Z" level=info msg="RemoveContainer for \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\" returns successfully" Jan 13 20:10:34.098508 kubelet[3474]: I0113 20:10:34.098475 3474 scope.go:117] "RemoveContainer" containerID="98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957" Jan 13 20:10:34.106249 containerd[1942]: time="2025-01-13T20:10:34.106188897Z" level=info msg="RemoveContainer for \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\"" Jan 13 20:10:34.117652 containerd[1942]: time="2025-01-13T20:10:34.117600429Z" level=info msg="RemoveContainer for \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\" returns successfully" Jan 13 20:10:34.118530 kubelet[3474]: I0113 20:10:34.118497 3474 scope.go:117] "RemoveContainer" containerID="a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6" Jan 13 20:10:34.121311 containerd[1942]: time="2025-01-13T20:10:34.120918513Z" level=info msg="RemoveContainer for \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\"" Jan 13 20:10:34.126718 containerd[1942]: time="2025-01-13T20:10:34.126584301Z" level=info msg="RemoveContainer for \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\" returns successfully" Jan 13 20:10:34.126979 kubelet[3474]: I0113 20:10:34.126921 3474 scope.go:117] "RemoveContainer" containerID="1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7" Jan 13 20:10:34.127511 containerd[1942]: time="2025-01-13T20:10:34.127435653Z" level=error msg="ContainerStatus for \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\": not found" Jan 13 20:10:34.127787 kubelet[3474]: E0113 20:10:34.127756 3474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\": not found" containerID="1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7" Jan 13 20:10:34.127963 kubelet[3474]: I0113 20:10:34.127802 3474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7"} err="failed to get container status \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d62aebe297341981257eb570bb8e3fb1d369517806674f6f21d7dbce46dbcb7\": not found" Jan 13 20:10:34.128039 kubelet[3474]: I0113 20:10:34.127963 3474 scope.go:117] "RemoveContainer" containerID="9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2" Jan 13 20:10:34.128340 containerd[1942]: 
time="2025-01-13T20:10:34.128287773Z" level=error msg="ContainerStatus for \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\": not found" Jan 13 20:10:34.128579 kubelet[3474]: E0113 20:10:34.128511 3474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\": not found" containerID="9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2" Jan 13 20:10:34.128579 kubelet[3474]: I0113 20:10:34.128560 3474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2"} err="failed to get container status \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a20951cc62993ba00f345bf58e4c959d975f58d6939b09e3ff2a1a1034308b2\": not found" Jan 13 20:10:34.128710 kubelet[3474]: I0113 20:10:34.128595 3474 scope.go:117] "RemoveContainer" containerID="5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9" Jan 13 20:10:34.129446 containerd[1942]: time="2025-01-13T20:10:34.129224949Z" level=error msg="ContainerStatus for \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\": not found" Jan 13 20:10:34.129754 kubelet[3474]: E0113 20:10:34.129428 3474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\": not found" containerID="5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9" Jan 13 20:10:34.129754 kubelet[3474]: I0113 20:10:34.129465 3474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9"} err="failed to get container status \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d152a82b4b264d482d932647d0a8b3233cffd1c30e307976e7431b726acdbf9\": not found" Jan 13 20:10:34.129754 kubelet[3474]: I0113 20:10:34.129494 3474 scope.go:117] "RemoveContainer" containerID="98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957" Jan 13 20:10:34.130198 containerd[1942]: time="2025-01-13T20:10:34.129974937Z" level=error msg="ContainerStatus for \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\": not found" Jan 13 20:10:34.130556 kubelet[3474]: E0113 20:10:34.130364 3474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\": not found" containerID="98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957" Jan 13 20:10:34.130556 kubelet[3474]: I0113 20:10:34.130408 3474 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957"} err="failed to get container status \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\": rpc error: code = NotFound desc = an error occurred when try to find container \"98347be87a9802a9f3704f23cdcbec2565d83e86ebd8ebb5afa0f11409ff3957\": not found" Jan 13 20:10:34.130556 kubelet[3474]: I0113 20:10:34.130439 3474 scope.go:117] "RemoveContainer" containerID="a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6" Jan 13 20:10:34.130972 containerd[1942]: time="2025-01-13T20:10:34.130900377Z" level=error msg="ContainerStatus for \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\": not found" Jan 13 20:10:34.131324 kubelet[3474]: E0113 20:10:34.131285 3474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\": not found" containerID="a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6" Jan 13 20:10:34.131404 kubelet[3474]: I0113 20:10:34.131332 3474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6"} err="failed to get container status \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a246041033bf3484d36e5c9a5ffe0829298ca72af5b88521258be438f2e16af6\": not found" Jan 13 20:10:34.131404 kubelet[3474]: I0113 20:10:34.131365 3474 scope.go:117] "RemoveContainer" containerID="17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7" Jan 13 20:10:34.133589 containerd[1942]: time="2025-01-13T20:10:34.133538121Z" level=info msg="RemoveContainer for \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\"" Jan 13 20:10:34.139550 containerd[1942]: time="2025-01-13T20:10:34.139493841Z" level=info msg="RemoveContainer for \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\" returns successfully" Jan 13 20:10:34.140230 kubelet[3474]: I0113 20:10:34.140060 3474 scope.go:117] "RemoveContainer" containerID="17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7" Jan 13 20:10:34.140582 containerd[1942]: time="2025-01-13T20:10:34.140525721Z" level=error msg="ContainerStatus for \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\": not found" Jan 13 20:10:34.140889 kubelet[3474]: E0113 20:10:34.140847 3474 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\": not found" containerID="17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7" Jan 13 20:10:34.140994 kubelet[3474]: I0113 20:10:34.140900 3474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7"} err="failed to 
get container status \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"17f1b90f6beb4f1feccc556259ef83eb2f3dccf6e3c37022e7cbb10d677da5f7\": not found" Jan 13 20:10:34.413655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9-rootfs.mount: Deactivated successfully. Jan 13 20:10:34.414127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2543cd8f5d4338006ceaa8500791aa23c26bf0e85d488acc488feacbf8fecf9-shm.mount: Deactivated successfully. Jan 13 20:10:34.414264 systemd[1]: var-lib-kubelet-pods-cb681531\x2d5f52\x2d4368\x2d9118\x2d05e452b2044c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:10:34.414397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3526b62f536fa2a5aae7d03fb8b98dfb099c62345356cd10dfeda631724b237-rootfs.mount: Deactivated successfully. Jan 13 20:10:34.414527 systemd[1]: var-lib-kubelet-pods-cb681531\x2d5f52\x2d4368\x2d9118\x2d05e452b2044c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:10:34.414655 systemd[1]: var-lib-kubelet-pods-5f9309ca\x2df664\x2d4445\x2da2c5\x2d6ed0db002d62-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcp7pz.mount: Deactivated successfully. Jan 13 20:10:34.414785 systemd[1]: var-lib-kubelet-pods-cb681531\x2d5f52\x2d4368\x2d9118\x2d05e452b2044c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7xll.mount: Deactivated successfully. Jan 13 20:10:34.563259 kubelet[3474]: I0113 20:10:34.563190 3474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9309ca-f664-4445-a2c5-6ed0db002d62" path="/var/lib/kubelet/pods/5f9309ca-f664-4445-a2c5-6ed0db002d62/volumes" Jan 13 20:10:34.564356 kubelet[3474]: I0113 20:10:34.564305 3474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb681531-5f52-4368-9118-05e452b2044c" path="/var/lib/kubelet/pods/cb681531-5f52-4368-9118-05e452b2044c/volumes" Jan 13 20:10:35.321014 sshd[5121]: Connection closed by 147.75.109.163 port 33132 Jan 13 20:10:35.320893 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:35.326681 systemd[1]: sshd@28-172.31.28.169:22-147.75.109.163:33132.service: Deactivated successfully. Jan 13 20:10:35.327789 systemd-logind[1914]: Session 29 logged out. Waiting for processes to exit. Jan 13 20:10:35.331609 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 20:10:35.332257 systemd[1]: session-29.scope: Consumed 1.803s CPU time. Jan 13 20:10:35.335754 systemd-logind[1914]: Removed session 29. Jan 13 20:10:35.357363 systemd[1]: Started sshd@29-172.31.28.169:22-147.75.109.163:33136.service - OpenSSH per-connection server daemon (147.75.109.163:33136). Jan 13 20:10:35.549750 sshd[5285]: Accepted publickey for core from 147.75.109.163 port 33136 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:35.552192 sshd-session[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:35.559322 systemd-logind[1914]: New session 30 of user core. Jan 13 20:10:35.567071 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 13 20:10:35.822781 ntpd[1909]: Deleting interface #12 lxc_health, fe80::45e:eaff:fe26:574d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs Jan 13 20:10:35.823341 ntpd[1909]: 13 Jan 20:10:35 ntpd[1909]: Deleting interface #12 lxc_health, fe80::45e:eaff:fe26:574d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs Jan 13 20:10:37.263134 sshd[5287]: Connection closed by 147.75.109.163 port 33136 Jan 13 20:10:37.266219 sshd-session[5285]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:37.274860 systemd[1]: sshd@29-172.31.28.169:22-147.75.109.163:33136.service: Deactivated successfully. Jan 13 20:10:37.282717 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 20:10:37.286479 systemd[1]: session-30.scope: Consumed 1.497s CPU time. Jan 13 20:10:37.290780 kubelet[3474]: E0113 20:10:37.290724 3474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb681531-5f52-4368-9118-05e452b2044c" containerName="cilium-agent" Jan 13 20:10:37.290780 kubelet[3474]: E0113 20:10:37.290772 3474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb681531-5f52-4368-9118-05e452b2044c" containerName="apply-sysctl-overwrites" Jan 13 20:10:37.290780 kubelet[3474]: E0113 20:10:37.290790 3474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb681531-5f52-4368-9118-05e452b2044c" containerName="mount-bpf-fs" Jan 13 20:10:37.292068 kubelet[3474]: E0113 20:10:37.291944 3474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f9309ca-f664-4445-a2c5-6ed0db002d62" containerName="cilium-operator" Jan 13 20:10:37.292068 kubelet[3474]: E0113 20:10:37.292003 3474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb681531-5f52-4368-9118-05e452b2044c" containerName="mount-cgroup" Jan 13 20:10:37.292068 kubelet[3474]: E0113 20:10:37.292031 3474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb681531-5f52-4368-9118-05e452b2044c" containerName="clean-cilium-state" Jan 13 20:10:37.292310 kubelet[3474]: I0113 20:10:37.292099 3474 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9309ca-f664-4445-a2c5-6ed0db002d62" containerName="cilium-operator" Jan 13 20:10:37.292310 kubelet[3474]: I0113 20:10:37.292131 3474 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb681531-5f52-4368-9118-05e452b2044c" containerName="cilium-agent" Jan 13 20:10:37.297884 systemd-logind[1914]: Session 30 logged out. Waiting for processes to exit. Jan 13 20:10:37.329183 systemd[1]: Started sshd@30-172.31.28.169:22-147.75.109.163:33142.service - OpenSSH per-connection server daemon (147.75.109.163:33142). Jan 13 20:10:37.331611 systemd-logind[1914]: Removed session 30. Jan 13 20:10:37.353750 systemd[1]: Created slice kubepods-burstable-pod2eb2e637_1769_455a_aa24_8c1e2a506e2f.slice - libcontainer container kubepods-burstable-pod2eb2e637_1769_455a_aa24_8c1e2a506e2f.slice. 
Jan 13 20:10:37.369285 kubelet[3474]: I0113 20:10:37.369209 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-xtables-lock\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.369285 kubelet[3474]: I0113 20:10:37.369280 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2eb2e637-1769-455a-aa24-8c1e2a506e2f-cilium-config-path\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371034 kubelet[3474]: I0113 20:10:37.369323 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tqsl\" (UniqueName: \"kubernetes.io/projected/2eb2e637-1769-455a-aa24-8c1e2a506e2f-kube-api-access-8tqsl\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371034 kubelet[3474]: I0113 20:10:37.369360 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-lib-modules\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371034 kubelet[3474]: I0113 20:10:37.369393 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-cilium-run\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371034 kubelet[3474]: I0113 20:10:37.369431 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-etc-cni-netd\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371034 kubelet[3474]: I0113 20:10:37.369793 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-cni-path\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371839 kubelet[3474]: I0113 20:10:37.371374 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2eb2e637-1769-455a-aa24-8c1e2a506e2f-cilium-ipsec-secrets\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371839 kubelet[3474]: I0113 20:10:37.371476 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2eb2e637-1769-455a-aa24-8c1e2a506e2f-hubble-tls\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371839 kubelet[3474]: I0113 20:10:37.371526 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-cilium-cgroup\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371839 kubelet[3474]: I0113 20:10:37.371565 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2eb2e637-1769-455a-aa24-8c1e2a506e2f-clustermesh-secrets\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371839 kubelet[3474]: I0113 20:10:37.371605 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-host-proc-sys-net\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.371839 kubelet[3474]: I0113 20:10:37.371643 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-bpf-maps\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.372202 kubelet[3474]: I0113 20:10:37.371679 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-hostproc\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.372202 kubelet[3474]: I0113 20:10:37.371714 3474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2eb2e637-1769-455a-aa24-8c1e2a506e2f-host-proc-sys-kernel\") pod \"cilium-fskjv\" (UID: \"2eb2e637-1769-455a-aa24-8c1e2a506e2f\") " pod="kube-system/cilium-fskjv"
Jan 13 20:10:37.576423 sshd[5296]: Accepted publickey for core from 147.75.109.163 port 33142 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0
Jan 13 20:10:37.579046 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:10:37.587227 systemd-logind[1914]: New session 31 of user core.
Jan 13 20:10:37.595093 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 13 20:10:37.670618 containerd[1942]: time="2025-01-13T20:10:37.670545567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fskjv,Uid:2eb2e637-1769-455a-aa24-8c1e2a506e2f,Namespace:kube-system,Attempt:0,}"
Jan 13 20:10:37.714507 containerd[1942]: time="2025-01-13T20:10:37.714167103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:10:37.714507 containerd[1942]: time="2025-01-13T20:10:37.714274707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:10:37.714507 containerd[1942]: time="2025-01-13T20:10:37.714304539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:10:37.715894 containerd[1942]: time="2025-01-13T20:10:37.715716171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:10:37.725888 sshd[5302]: Connection closed by 147.75.109.163 port 33142
Jan 13 20:10:37.727311 sshd-session[5296]: pam_unix(sshd:session): session closed for user core
Jan 13 20:10:37.737607 systemd[1]: sshd@30-172.31.28.169:22-147.75.109.163:33142.service: Deactivated successfully.
Jan 13 20:10:37.745263 systemd[1]: session-31.scope: Deactivated successfully.
Jan 13 20:10:37.749243 systemd-logind[1914]: Session 31 logged out. Waiting for processes to exit.
Jan 13 20:10:37.777119 systemd[1]: Started cri-containerd-213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32.scope - libcontainer container 213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32.
Jan 13 20:10:37.781357 systemd[1]: Started sshd@31-172.31.28.169:22-147.75.109.163:49958.service - OpenSSH per-connection server daemon (147.75.109.163:49958).
Jan 13 20:10:37.788172 systemd-logind[1914]: Removed session 31.
Jan 13 20:10:37.793302 kubelet[3474]: E0113 20:10:37.793202 3474 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:10:37.842160 containerd[1942]: time="2025-01-13T20:10:37.841555600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fskjv,Uid:2eb2e637-1769-455a-aa24-8c1e2a506e2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\""
Jan 13 20:10:37.850246 containerd[1942]: time="2025-01-13T20:10:37.850179988Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:10:37.873862 containerd[1942]: time="2025-01-13T20:10:37.873733204Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"109f4afc5fc140acc6b18a3184af9f67c893c8b91a6b495d536babc2f1e3c7cf\""
Jan 13 20:10:37.875185 containerd[1942]: time="2025-01-13T20:10:37.875118412Z" level=info msg="StartContainer for \"109f4afc5fc140acc6b18a3184af9f67c893c8b91a6b495d536babc2f1e3c7cf\""
Jan 13 20:10:37.925187 systemd[1]: Started cri-containerd-109f4afc5fc140acc6b18a3184af9f67c893c8b91a6b495d536babc2f1e3c7cf.scope - libcontainer container 109f4afc5fc140acc6b18a3184af9f67c893c8b91a6b495d536babc2f1e3c7cf.
Jan 13 20:10:37.972667 containerd[1942]: time="2025-01-13T20:10:37.972495724Z" level=info msg="StartContainer for \"109f4afc5fc140acc6b18a3184af9f67c893c8b91a6b495d536babc2f1e3c7cf\" returns successfully"
Jan 13 20:10:37.992624 systemd[1]: cri-containerd-109f4afc5fc140acc6b18a3184af9f67c893c8b91a6b495d536babc2f1e3c7cf.scope: Deactivated successfully.
Jan 13 20:10:37.996222 sshd[5336]: Accepted publickey for core from 147.75.109.163 port 49958 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0
Jan 13 20:10:38.001167 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:10:38.012180 systemd-logind[1914]: New session 32 of user core.
Jan 13 20:10:38.021634 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 13 20:10:38.073518 containerd[1942]: time="2025-01-13T20:10:38.073382641Z" level=info msg="shim disconnected" id=109f4afc5fc140acc6b18a3184af9f67c893c8b91a6b495d536babc2f1e3c7cf namespace=k8s.io
Jan 13 20:10:38.073518 containerd[1942]: time="2025-01-13T20:10:38.073467061Z" level=warning msg="cleaning up after shim disconnected" id=109f4afc5fc140acc6b18a3184af9f67c893c8b91a6b495d536babc2f1e3c7cf namespace=k8s.io
Jan 13 20:10:38.073518 containerd[1942]: time="2025-01-13T20:10:38.073487413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:39.078703 containerd[1942]: time="2025-01-13T20:10:39.078633818Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:10:39.108480 containerd[1942]: time="2025-01-13T20:10:39.106954934Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"528b9adf0aff89fc870dc92ac262fd1fb5645841b87add402fd70e78426fc4cb\""
Jan 13 20:10:39.111403 containerd[1942]: time="2025-01-13T20:10:39.109411838Z" level=info msg="StartContainer for \"528b9adf0aff89fc870dc92ac262fd1fb5645841b87add402fd70e78426fc4cb\""
Jan 13 20:10:39.168368 systemd[1]: Started cri-containerd-528b9adf0aff89fc870dc92ac262fd1fb5645841b87add402fd70e78426fc4cb.scope - libcontainer container 528b9adf0aff89fc870dc92ac262fd1fb5645841b87add402fd70e78426fc4cb.
Jan 13 20:10:39.225757 containerd[1942]: time="2025-01-13T20:10:39.225686078Z" level=info msg="StartContainer for \"528b9adf0aff89fc870dc92ac262fd1fb5645841b87add402fd70e78426fc4cb\" returns successfully"
Jan 13 20:10:39.241379 systemd[1]: cri-containerd-528b9adf0aff89fc870dc92ac262fd1fb5645841b87add402fd70e78426fc4cb.scope: Deactivated successfully.
Jan 13 20:10:39.296439 containerd[1942]: time="2025-01-13T20:10:39.296353515Z" level=info msg="shim disconnected" id=528b9adf0aff89fc870dc92ac262fd1fb5645841b87add402fd70e78426fc4cb namespace=k8s.io
Jan 13 20:10:39.296439 containerd[1942]: time="2025-01-13T20:10:39.296429715Z" level=warning msg="cleaning up after shim disconnected" id=528b9adf0aff89fc870dc92ac262fd1fb5645841b87add402fd70e78426fc4cb namespace=k8s.io
Jan 13 20:10:39.296764 containerd[1942]: time="2025-01-13T20:10:39.296451663Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:39.482379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-528b9adf0aff89fc870dc92ac262fd1fb5645841b87add402fd70e78426fc4cb-rootfs.mount: Deactivated successfully.
Jan 13 20:10:40.086508 containerd[1942]: time="2025-01-13T20:10:40.086434047Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:10:40.126851 containerd[1942]: time="2025-01-13T20:10:40.126634731Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"03ea8301ad3e58c239ec4f1cbe142c2dbac429f1edde18b7ab119de5d0df1da0\""
Jan 13 20:10:40.127775 containerd[1942]: time="2025-01-13T20:10:40.127691115Z" level=info msg="StartContainer for \"03ea8301ad3e58c239ec4f1cbe142c2dbac429f1edde18b7ab119de5d0df1da0\""
Jan 13 20:10:40.219121 systemd[1]: Started cri-containerd-03ea8301ad3e58c239ec4f1cbe142c2dbac429f1edde18b7ab119de5d0df1da0.scope - libcontainer container 03ea8301ad3e58c239ec4f1cbe142c2dbac429f1edde18b7ab119de5d0df1da0.
Jan 13 20:10:40.378471 containerd[1942]: time="2025-01-13T20:10:40.378157684Z" level=info msg="StartContainer for \"03ea8301ad3e58c239ec4f1cbe142c2dbac429f1edde18b7ab119de5d0df1da0\" returns successfully"
Jan 13 20:10:40.390585 systemd[1]: cri-containerd-03ea8301ad3e58c239ec4f1cbe142c2dbac429f1edde18b7ab119de5d0df1da0.scope: Deactivated successfully.
Jan 13 20:10:40.439550 containerd[1942]: time="2025-01-13T20:10:40.439454273Z" level=info msg="shim disconnected" id=03ea8301ad3e58c239ec4f1cbe142c2dbac429f1edde18b7ab119de5d0df1da0 namespace=k8s.io
Jan 13 20:10:40.439550 containerd[1942]: time="2025-01-13T20:10:40.439531517Z" level=warning msg="cleaning up after shim disconnected" id=03ea8301ad3e58c239ec4f1cbe142c2dbac429f1edde18b7ab119de5d0df1da0 namespace=k8s.io
Jan 13 20:10:40.439550 containerd[1942]: time="2025-01-13T20:10:40.439553093Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:40.485876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03ea8301ad3e58c239ec4f1cbe142c2dbac429f1edde18b7ab119de5d0df1da0-rootfs.mount: Deactivated successfully.
Jan 13 20:10:41.100250 containerd[1942]: time="2025-01-13T20:10:41.100176484Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:10:41.137982 containerd[1942]: time="2025-01-13T20:10:41.137911564Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52c20508b0faad1fc8dbfe54679cc110221196de26d47bce1bfa001a5b8430e4\""
Jan 13 20:10:41.139135 containerd[1942]: time="2025-01-13T20:10:41.138921736Z" level=info msg="StartContainer for \"52c20508b0faad1fc8dbfe54679cc110221196de26d47bce1bfa001a5b8430e4\""
Jan 13 20:10:41.200224 systemd[1]: Started cri-containerd-52c20508b0faad1fc8dbfe54679cc110221196de26d47bce1bfa001a5b8430e4.scope - libcontainer container 52c20508b0faad1fc8dbfe54679cc110221196de26d47bce1bfa001a5b8430e4.
Jan 13 20:10:41.245065 systemd[1]: cri-containerd-52c20508b0faad1fc8dbfe54679cc110221196de26d47bce1bfa001a5b8430e4.scope: Deactivated successfully.
Jan 13 20:10:41.251552 containerd[1942]: time="2025-01-13T20:10:41.251376497Z" level=info msg="StartContainer for \"52c20508b0faad1fc8dbfe54679cc110221196de26d47bce1bfa001a5b8430e4\" returns successfully"
Jan 13 20:10:41.296270 containerd[1942]: time="2025-01-13T20:10:41.296144717Z" level=info msg="shim disconnected" id=52c20508b0faad1fc8dbfe54679cc110221196de26d47bce1bfa001a5b8430e4 namespace=k8s.io
Jan 13 20:10:41.296585 containerd[1942]: time="2025-01-13T20:10:41.296277293Z" level=warning msg="cleaning up after shim disconnected" id=52c20508b0faad1fc8dbfe54679cc110221196de26d47bce1bfa001a5b8430e4 namespace=k8s.io
Jan 13 20:10:41.296585 containerd[1942]: time="2025-01-13T20:10:41.296302949Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:41.482694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52c20508b0faad1fc8dbfe54679cc110221196de26d47bce1bfa001a5b8430e4-rootfs.mount: Deactivated successfully.
Jan 13 20:10:42.109077 containerd[1942]: time="2025-01-13T20:10:42.107248229Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:10:42.150241 containerd[1942]: time="2025-01-13T20:10:42.149463077Z" level=info msg="CreateContainer within sandbox \"213d0c56d0e1698166f9fedc912f2c6923addee1d90b1ce5aca362366c7f0a32\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43612751c85f0c5bbdecd39d1c56d66d04992d9e9e017b01955c3fea834daa3d\""
Jan 13 20:10:42.154683 containerd[1942]: time="2025-01-13T20:10:42.154468517Z" level=info msg="StartContainer for \"43612751c85f0c5bbdecd39d1c56d66d04992d9e9e017b01955c3fea834daa3d\""
Jan 13 20:10:42.215143 systemd[1]: Started cri-containerd-43612751c85f0c5bbdecd39d1c56d66d04992d9e9e017b01955c3fea834daa3d.scope - libcontainer container 43612751c85f0c5bbdecd39d1c56d66d04992d9e9e017b01955c3fea834daa3d.
Jan 13 20:10:42.279022 containerd[1942]: time="2025-01-13T20:10:42.278945202Z" level=info msg="StartContainer for \"43612751c85f0c5bbdecd39d1c56d66d04992d9e9e017b01955c3fea834daa3d\" returns successfully"
Jan 13 20:10:43.085994 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:10:44.663066 kubelet[3474]: E0113 20:10:44.662971 3474 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51338->127.0.0.1:40147: write tcp 127.0.0.1:51338->127.0.0.1:40147: write: broken pipe
Jan 13 20:10:47.250782 (udev-worker)[6147]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:10:47.255502 (udev-worker)[6148]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:10:47.259895 systemd-networkd[1842]: lxc_health: Link UP
Jan 13 20:10:47.279093 systemd-networkd[1842]: lxc_health: Gained carrier
Jan 13 20:10:47.727213 kubelet[3474]: I0113 20:10:47.727106 3474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fskjv" podStartSLOduration=10.727081945 podStartE2EDuration="10.727081945s" podCreationTimestamp="2025-01-13 20:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:10:43.161167302 +0000 UTC m=+130.815963831" watchObservedRunningTime="2025-01-13 20:10:47.727081945 +0000 UTC m=+135.381878426"
Jan 13 20:10:48.794157 systemd-networkd[1842]: lxc_health: Gained IPv6LL
Jan 13 20:10:50.822882 ntpd[1909]: Listen normally on 15 lxc_health [fe80::50af:7cff:fed3:f842%14]:123
Jan 13 20:10:50.823454 ntpd[1909]: 13 Jan 20:10:50 ntpd[1909]: Listen normally on 15 lxc_health [fe80::50af:7cff:fed3:f842%14]:123
Jan 13 20:10:51.355446 systemd[1]: run-containerd-runc-k8s.io-43612751c85f0c5bbdecd39d1c56d66d04992d9e9e017b01955c3fea834daa3d-runc.kXrsnV.mount: Deactivated successfully.
Jan 13 20:10:53.728249 sshd[5400]: Connection closed by 147.75.109.163 port 49958
Jan 13 20:10:53.728727 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Jan 13 20:10:53.736384 systemd-logind[1914]: Session 32 logged out. Waiting for processes to exit.
Jan 13 20:10:53.739361 systemd[1]: sshd@31-172.31.28.169:22-147.75.109.163:49958.service: Deactivated successfully.
Jan 13 20:10:53.744537 systemd[1]: session-32.scope: Deactivated successfully.
Jan 13 20:10:53.747941 systemd-logind[1914]: Removed session 32.
Jan 13 20:11:08.068753 systemd[1]: cri-containerd-c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447.scope: Deactivated successfully.
Jan 13 20:11:08.069236 systemd[1]: cri-containerd-c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447.scope: Consumed 5.622s CPU time, 18.1M memory peak, 0B memory swap peak.
Jan 13 20:11:08.109416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447-rootfs.mount: Deactivated successfully.
Jan 13 20:11:08.126388 containerd[1942]: time="2025-01-13T20:11:08.126213066Z" level=info msg="shim disconnected" id=c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447 namespace=k8s.io
Jan 13 20:11:08.126388 containerd[1942]: time="2025-01-13T20:11:08.126301350Z" level=warning msg="cleaning up after shim disconnected" id=c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447 namespace=k8s.io
Jan 13 20:11:08.126388 containerd[1942]: time="2025-01-13T20:11:08.126323406Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:11:08.146185 containerd[1942]: time="2025-01-13T20:11:08.146063598Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:11:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:11:08.187045 kubelet[3474]: I0113 20:11:08.186785 3474 scope.go:117] "RemoveContainer" containerID="c0a960f1293cfcb6643a05b1ba6cfa322b96265dda0b03474997246d81962447"
Jan 13 20:11:08.191161 containerd[1942]: time="2025-01-13T20:11:08.191095290Z" level=info msg="CreateContainer within sandbox \"da2e3cf038fcaece30e37c680e1c9c4235fb650778641c14f1cb769e597dc7e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 20:11:08.221672 containerd[1942]: time="2025-01-13T20:11:08.221598799Z" level=info msg="CreateContainer within sandbox \"da2e3cf038fcaece30e37c680e1c9c4235fb650778641c14f1cb769e597dc7e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"10db74f9ef4676bf2e63f89eeaed68040c7a97184f9d60a3be3ec395eddd4f64\""
Jan 13 20:11:08.222740 containerd[1942]: time="2025-01-13T20:11:08.222689263Z" level=info msg="StartContainer for \"10db74f9ef4676bf2e63f89eeaed68040c7a97184f9d60a3be3ec395eddd4f64\""
Jan 13 20:11:08.278128 systemd[1]: Started cri-containerd-10db74f9ef4676bf2e63f89eeaed68040c7a97184f9d60a3be3ec395eddd4f64.scope - libcontainer container 10db74f9ef4676bf2e63f89eeaed68040c7a97184f9d60a3be3ec395eddd4f64.
Jan 13 20:11:08.347408 containerd[1942]: time="2025-01-13T20:11:08.347221543Z" level=info msg="StartContainer for \"10db74f9ef4676bf2e63f89eeaed68040c7a97184f9d60a3be3ec395eddd4f64\" returns successfully"
Jan 13 20:11:13.274498 systemd[1]: cri-containerd-7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846.scope: Deactivated successfully.
Jan 13 20:11:13.277008 systemd[1]: cri-containerd-7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846.scope: Consumed 3.334s CPU time, 15.7M memory peak, 0B memory swap peak.
Jan 13 20:11:13.315510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846-rootfs.mount: Deactivated successfully.
Jan 13 20:11:13.329641 containerd[1942]: time="2025-01-13T20:11:13.329558688Z" level=info msg="shim disconnected" id=7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846 namespace=k8s.io
Jan 13 20:11:13.329641 containerd[1942]: time="2025-01-13T20:11:13.329629560Z" level=warning msg="cleaning up after shim disconnected" id=7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846 namespace=k8s.io
Jan 13 20:11:13.330386 containerd[1942]: time="2025-01-13T20:11:13.329653068Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:11:14.208905 kubelet[3474]: I0113 20:11:14.208674 3474 scope.go:117] "RemoveContainer" containerID="7b8bed738b17ef2759ab8580259a7907ecd06da503fab5a2358106facd6f5846"
Jan 13 20:11:14.212418 containerd[1942]: time="2025-01-13T20:11:14.212332188Z" level=info msg="CreateContainer within sandbox \"6156609e2abfa901516d940f4d94f90a634763a734213340eee0246ccc330f6b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 20:11:14.245016 containerd[1942]: time="2025-01-13T20:11:14.244929780Z" level=info msg="CreateContainer within sandbox \"6156609e2abfa901516d940f4d94f90a634763a734213340eee0246ccc330f6b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d56de7e345610351c48d58767c2e8a80d0c1f618add0e42744a36e72d4e1f4b4\""
Jan 13 20:11:14.246004 containerd[1942]: time="2025-01-13T20:11:14.245960628Z" level=info msg="StartContainer for \"d56de7e345610351c48d58767c2e8a80d0c1f618add0e42744a36e72d4e1f4b4\""
Jan 13 20:11:14.307165 systemd[1]: Started cri-containerd-d56de7e345610351c48d58767c2e8a80d0c1f618add0e42744a36e72d4e1f4b4.scope - libcontainer container d56de7e345610351c48d58767c2e8a80d0c1f618add0e42744a36e72d4e1f4b4.
Jan 13 20:11:14.371921 containerd[1942]: time="2025-01-13T20:11:14.371803369Z" level=info msg="StartContainer for \"d56de7e345610351c48d58767c2e8a80d0c1f618add0e42744a36e72d4e1f4b4\" returns successfully"
Jan 13 20:11:15.302045 kubelet[3474]: E0113 20:11:15.301975 3474 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-169?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:11:25.303288 kubelet[3474]: E0113 20:11:25.302941 3474 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-169?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"