Feb 13 19:03:29.202234 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:03:29.202280 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:03:29.202306 kernel: KASLR disabled due to lack of seed
Feb 13 19:03:29.202322 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:03:29.202338 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Feb 13 19:03:29.202353 kernel: secureboot: Secure boot disabled
Feb 13 19:03:29.202371 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:03:29.202386 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:03:29.202401 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:03:29.202416 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:03:29.202436 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:03:29.202452 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:03:29.202468 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:03:29.202483 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:03:29.202502 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:03:29.202523 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:03:29.202540 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:03:29.202557 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:03:29.202572 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:03:29.202589 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:03:29.202605 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:03:29.202621 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:03:29.202637 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:03:29.202654 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:03:29.202670 kernel: Zone ranges:
Feb 13 19:03:29.202686 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:03:29.202707 kernel: DMA32 empty
Feb 13 19:03:29.202724 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:03:29.202740 kernel: Movable zone start for each node
Feb 13 19:03:29.202756 kernel: Early memory node ranges
Feb 13 19:03:29.202772 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:03:29.202788 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:03:29.202804 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:03:29.202820 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:03:29.202836 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:03:29.202853 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:03:29.202869 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:03:29.202885 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:03:29.202905 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:03:29.202922 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:03:29.205038 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:03:29.205058 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:03:29.205076 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:03:29.205102 kernel: psci: Trusted OS migration not required
Feb 13 19:03:29.205122 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:03:29.205141 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:03:29.205158 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:03:29.205177 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:03:29.205194 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:03:29.205212 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:03:29.205229 kernel: CPU features: detected: Spectre-v2
Feb 13 19:03:29.205246 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:03:29.205263 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:03:29.205280 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:03:29.205298 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:03:29.205321 kernel: alternatives: applying boot alternatives
Feb 13 19:03:29.205340 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:03:29.205359 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:03:29.205377 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:03:29.205395 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:03:29.205412 kernel: Fallback order for Node 0: 0
Feb 13 19:03:29.205429 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:03:29.205446 kernel: Policy zone: Normal
Feb 13 19:03:29.205463 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:03:29.205481 kernel: software IO TLB: area num 2.
Feb 13 19:03:29.205503 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:03:29.205521 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 19:03:29.205539 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:03:29.205556 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:03:29.205575 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:03:29.205592 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:03:29.205610 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:03:29.205628 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:03:29.205645 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:03:29.205662 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:03:29.205680 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:03:29.205703 kernel: GICv3: 96 SPIs implemented
Feb 13 19:03:29.205721 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:03:29.205739 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:03:29.205758 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:03:29.205780 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:03:29.205797 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:03:29.205814 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:03:29.205832 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:03:29.205850 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:03:29.205867 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:03:29.205885 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:03:29.205902 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:03:29.205948 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:03:29.205975 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:03:29.205994 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:03:29.206012 kernel: Console: colour dummy device 80x25
Feb 13 19:03:29.206030 kernel: printk: console [tty1] enabled
Feb 13 19:03:29.206048 kernel: ACPI: Core revision 20230628
Feb 13 19:03:29.206068 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:03:29.206087 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:03:29.206106 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:03:29.206128 kernel: landlock: Up and running.
Feb 13 19:03:29.206156 kernel: SELinux: Initializing.
Feb 13 19:03:29.206174 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:03:29.206192 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:03:29.206210 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:03:29.206227 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:03:29.206245 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:03:29.206264 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:03:29.206281 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:03:29.206304 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:03:29.206322 kernel: Remapping and enabling EFI services.
Feb 13 19:03:29.206343 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:03:29.206361 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:03:29.206379 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:03:29.206397 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:03:29.206415 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:03:29.206433 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:03:29.206450 kernel: SMP: Total of 2 processors activated.
Feb 13 19:03:29.206467 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:03:29.206490 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:03:29.206508 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:03:29.206537 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:03:29.206561 kernel: alternatives: applying system-wide alternatives
Feb 13 19:03:29.206579 kernel: devtmpfs: initialized
Feb 13 19:03:29.206597 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:03:29.206615 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:03:29.206633 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:03:29.206651 kernel: SMBIOS 3.0.0 present.
Feb 13 19:03:29.206674 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:03:29.206692 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:03:29.206710 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:03:29.206729 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:03:29.206747 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:03:29.206765 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:03:29.206784 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1
Feb 13 19:03:29.206807 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:03:29.206825 kernel: cpuidle: using governor menu
Feb 13 19:03:29.206844 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:03:29.206862 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:03:29.206880 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:03:29.206899 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:03:29.206918 kernel: Modules: 17440 pages in range for non-PLT usage
Feb 13 19:03:29.208361 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:03:29.208388 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:03:29.208419 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:03:29.208438 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:03:29.208457 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:03:29.208475 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:03:29.208494 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:03:29.208512 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:03:29.208723 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:03:29.209145 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:03:29.209171 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:03:29.209197 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:03:29.209216 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:03:29.209234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:03:29.209252 kernel: ACPI: Interpreter enabled
Feb 13 19:03:29.209270 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:03:29.209288 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:03:29.209307 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:03:29.209627 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:03:29.209848 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:03:29.212350 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:03:29.212617 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:03:29.212829 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:03:29.212855 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:03:29.212874 kernel: acpiphp: Slot [1] registered
Feb 13 19:03:29.212893 kernel: acpiphp: Slot [2] registered
Feb 13 19:03:29.212911 kernel: acpiphp: Slot [3] registered
Feb 13 19:03:29.212975 kernel: acpiphp: Slot [4] registered
Feb 13 19:03:29.212996 kernel: acpiphp: Slot [5] registered
Feb 13 19:03:29.213015 kernel: acpiphp: Slot [6] registered
Feb 13 19:03:29.213033 kernel: acpiphp: Slot [7] registered
Feb 13 19:03:29.213051 kernel: acpiphp: Slot [8] registered
Feb 13 19:03:29.213069 kernel: acpiphp: Slot [9] registered
Feb 13 19:03:29.213087 kernel: acpiphp: Slot [10] registered
Feb 13 19:03:29.213105 kernel: acpiphp: Slot [11] registered
Feb 13 19:03:29.213123 kernel: acpiphp: Slot [12] registered
Feb 13 19:03:29.213141 kernel: acpiphp: Slot [13] registered
Feb 13 19:03:29.213165 kernel: acpiphp: Slot [14] registered
Feb 13 19:03:29.213183 kernel: acpiphp: Slot [15] registered
Feb 13 19:03:29.213201 kernel: acpiphp: Slot [16] registered
Feb 13 19:03:29.213219 kernel: acpiphp: Slot [17] registered
Feb 13 19:03:29.213237 kernel: acpiphp: Slot [18] registered
Feb 13 19:03:29.213255 kernel: acpiphp: Slot [19] registered
Feb 13 19:03:29.213274 kernel: acpiphp: Slot [20] registered
Feb 13 19:03:29.213292 kernel: acpiphp: Slot [21] registered
Feb 13 19:03:29.213310 kernel: acpiphp: Slot [22] registered
Feb 13 19:03:29.213332 kernel: acpiphp: Slot [23] registered
Feb 13 19:03:29.213351 kernel: acpiphp: Slot [24] registered
Feb 13 19:03:29.213369 kernel: acpiphp: Slot [25] registered
Feb 13 19:03:29.213387 kernel: acpiphp: Slot [26] registered
Feb 13 19:03:29.213405 kernel: acpiphp: Slot [27] registered
Feb 13 19:03:29.213423 kernel: acpiphp: Slot [28] registered
Feb 13 19:03:29.213442 kernel: acpiphp: Slot [29] registered
Feb 13 19:03:29.213460 kernel: acpiphp: Slot [30] registered
Feb 13 19:03:29.213478 kernel: acpiphp: Slot [31] registered
Feb 13 19:03:29.213496 kernel: PCI host bridge to bus 0000:00
Feb 13 19:03:29.213712 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:03:29.213896 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:03:29.214183 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:03:29.214371 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:03:29.214644 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:03:29.214970 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:03:29.215215 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:03:29.215438 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:03:29.215644 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:03:29.215853 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:03:29.216121 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:03:29.216336 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:03:29.216561 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:03:29.216783 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:03:29.217013 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:03:29.217219 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:03:29.217434 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:03:29.217712 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:03:29.218029 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:03:29.218255 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:03:29.218458 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:03:29.218641 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:03:29.218825 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:03:29.218850 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:03:29.218869 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:03:29.218887 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:03:29.218906 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:03:29.219496 kernel: iommu: Default domain type: Translated
Feb 13 19:03:29.219545 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:03:29.219566 kernel: efivars: Registered efivars operations
Feb 13 19:03:29.219585 kernel: vgaarb: loaded
Feb 13 19:03:29.219604 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:03:29.219622 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:03:29.219641 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:03:29.219660 kernel: pnp: PnP ACPI init
Feb 13 19:03:29.219952 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:03:29.219998 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:03:29.220019 kernel: NET: Registered PF_INET protocol family
Feb 13 19:03:29.220039 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:03:29.220059 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:03:29.220079 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:03:29.220098 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:03:29.220116 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:03:29.220135 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:03:29.220153 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:03:29.220177 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:03:29.220195 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:03:29.220213 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:03:29.220232 kernel: kvm [1]: HYP mode not available
Feb 13 19:03:29.220250 kernel: Initialise system trusted keyrings
Feb 13 19:03:29.220268 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:03:29.220286 kernel: Key type asymmetric registered
Feb 13 19:03:29.220304 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:03:29.220322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:03:29.220345 kernel: io scheduler mq-deadline registered
Feb 13 19:03:29.220363 kernel: io scheduler kyber registered
Feb 13 19:03:29.220382 kernel: io scheduler bfq registered
Feb 13 19:03:29.220641 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:03:29.220671 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:03:29.220691 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:03:29.220710 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:03:29.220728 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:03:29.220753 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:03:29.220773 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:03:29.221028 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:03:29.221059 kernel: printk: console [ttyS0] disabled
Feb 13 19:03:29.221078 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:03:29.221097 kernel: printk: console [ttyS0] enabled
Feb 13 19:03:29.221116 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:03:29.221135 kernel: thunder_xcv, ver 1.0
Feb 13 19:03:29.221153 kernel: thunder_bgx, ver 1.0
Feb 13 19:03:29.221172 kernel: nicpf, ver 1.0
Feb 13 19:03:29.221199 kernel: nicvf, ver 1.0
Feb 13 19:03:29.221423 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:03:29.221617 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:03:28 UTC (1739473408)
Feb 13 19:03:29.221643 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:03:29.221663 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:03:29.221682 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:03:29.221700 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:03:29.221724 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:03:29.221744 kernel: Segment Routing with IPv6
Feb 13 19:03:29.221762 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:03:29.221785 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:03:29.221805 kernel: Key type dns_resolver registered
Feb 13 19:03:29.221823 kernel: registered taskstats version 1
Feb 13 19:03:29.221841 kernel: Loading compiled-in X.509 certificates
Feb 13 19:03:29.221860 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:03:29.221878 kernel: Key type .fscrypt registered
Feb 13 19:03:29.221896 kernel: Key type fscrypt-provisioning registered
Feb 13 19:03:29.221919 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:03:29.221992 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:03:29.222011 kernel: ima: No architecture policies found
Feb 13 19:03:29.222030 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:03:29.222048 kernel: clk: Disabling unused clocks
Feb 13 19:03:29.222066 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:03:29.222084 kernel: Run /init as init process
Feb 13 19:03:29.222103 kernel: with arguments:
Feb 13 19:03:29.222121 kernel: /init
Feb 13 19:03:29.222145 kernel: with environment:
Feb 13 19:03:29.222163 kernel: HOME=/
Feb 13 19:03:29.222183 kernel: TERM=linux
Feb 13 19:03:29.222201 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:03:29.222224 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:03:29.222247 systemd[1]: Detected virtualization amazon.
Feb 13 19:03:29.222267 systemd[1]: Detected architecture arm64.
Feb 13 19:03:29.222292 systemd[1]: Running in initrd.
Feb 13 19:03:29.222312 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:03:29.222331 systemd[1]: Hostname set to .
Feb 13 19:03:29.222352 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:03:29.222372 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:03:29.222392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:03:29.222412 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:03:29.222433 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:03:29.222459 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:03:29.222480 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:03:29.222500 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:03:29.222525 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:03:29.222545 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:03:29.222565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:03:29.222585 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:03:29.222609 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:03:29.222630 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:03:29.222650 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:03:29.222670 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:03:29.222690 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:03:29.222710 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:03:29.222732 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:03:29.222752 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:03:29.222772 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:03:29.222798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:03:29.222818 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:03:29.222838 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:03:29.222859 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:03:29.222879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:03:29.222899 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:03:29.222919 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:03:29.222966 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:03:29.223013 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:03:29.223034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:29.223054 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:03:29.223075 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:03:29.223095 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:03:29.223166 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 19:03:29.223215 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:03:29.223236 systemd-journald[252]: Journal started
Feb 13 19:03:29.223278 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2ccdfec55a10171ed1d776a0bd4734) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:03:29.211012 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 19:03:29.226955 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:03:29.231851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:29.248002 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:03:29.250323 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:29.253369 kernel: Bridge firewalling registered
Feb 13 19:03:29.250842 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 19:03:29.263252 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:03:29.264990 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:03:29.273281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:03:29.297045 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:03:29.310232 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:03:29.322953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:03:29.327793 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:29.334858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:03:29.358774 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:03:29.378399 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:03:29.387235 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:03:29.422292 dracut-cmdline[287]: dracut-dracut-053
Feb 13 19:03:29.430000 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:03:29.475073 systemd-resolved[290]: Positive Trust Anchors:
Feb 13 19:03:29.476747 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:03:29.479558 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:03:29.622235 kernel: SCSI subsystem initialized
Feb 13 19:03:29.630067 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:03:29.643065 kernel: iscsi: registered transport (tcp)
Feb 13 19:03:29.665179 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:03:29.665254 kernel: QLogic iSCSI HBA Driver
Feb 13 19:03:29.717095 kernel: random: crng init done
Feb 13 19:03:29.717411 systemd-resolved[290]: Defaulting to hostname 'linux'.
Feb 13 19:03:29.721371 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:03:29.725858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:03:29.759918 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:03:29.779475 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:03:29.813871 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:03:29.813970 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:03:29.814000 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:03:29.882994 kernel: raid6: neonx8 gen() 6662 MB/s
Feb 13 19:03:29.899988 kernel: raid6: neonx4 gen() 6473 MB/s
Feb 13 19:03:29.916984 kernel: raid6: neonx2 gen() 5425 MB/s
Feb 13 19:03:29.933980 kernel: raid6: neonx1 gen() 3936 MB/s
Feb 13 19:03:29.950985 kernel: raid6: int64x8 gen() 3791 MB/s
Feb 13 19:03:29.967982 kernel: raid6: int64x4 gen() 3707 MB/s
Feb 13 19:03:29.984987 kernel: raid6: int64x2 gen() 3587 MB/s
Feb 13 19:03:30.002786 kernel: raid6: int64x1 gen() 2765 MB/s
Feb 13 19:03:30.002866 kernel: raid6: using algorithm neonx8 gen() 6662 MB/s
Feb 13 19:03:30.020771 kernel: raid6: .... xor() 4910 MB/s, rmw enabled
Feb 13 19:03:30.020850 kernel: raid6: using neon recovery algorithm
Feb 13 19:03:30.029746 kernel: xor: measuring software checksum speed
Feb 13 19:03:30.029821 kernel: 8regs : 11023 MB/sec
Feb 13 19:03:30.030966 kernel: 32regs : 11089 MB/sec
Feb 13 19:03:30.032973 kernel: arm64_neon : 8871 MB/sec
Feb 13 19:03:30.033041 kernel: xor: using function: 32regs (11089 MB/sec)
Feb 13 19:03:30.117982 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:03:30.137708 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:03:30.147271 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:03:30.186739 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Feb 13 19:03:30.196440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:03:30.211226 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:03:30.247252 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Feb 13 19:03:30.303351 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:03:30.313264 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:03:30.438571 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:03:30.450176 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:03:30.497378 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:03:30.502677 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:03:30.505964 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:03:30.512711 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:03:30.526698 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:03:30.566366 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:03:30.653000 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:03:30.653072 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:03:30.671222 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:03:30.671487 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:03:30.671720 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:38:35:7d:55:83
Feb 13 19:03:30.672652 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:03:30.675266 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:30.680381 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:30.682508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:03:30.683897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:30.688104 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:03:30.694281 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:30.703493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:30.732023 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:03:30.734159 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:03:30.742072 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:03:30.745950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:30.758566 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:03:30.758640 kernel: GPT:9289727 != 16777215
Feb 13 19:03:30.758666 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:03:30.758691 kernel: GPT:9289727 != 16777215
Feb 13 19:03:30.758716 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:03:30.758740 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:30.761141 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:30.791831 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:30.886002 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Feb 13 19:03:30.895513 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (516)
Feb 13 19:03:30.924969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:03:31.020623 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:03:31.036809 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:03:31.039150 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:03:31.055478 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:03:31.069663 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:03:31.083309 disk-uuid[661]: Primary Header is updated.
Feb 13 19:03:31.083309 disk-uuid[661]: Secondary Entries is updated.
Feb 13 19:03:31.083309 disk-uuid[661]: Secondary Header is updated.
Feb 13 19:03:31.091972 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:32.110973 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:32.112389 disk-uuid[662]: The operation has completed successfully.
Feb 13 19:03:32.304385 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:03:32.305855 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:03:32.357193 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:03:32.365827 sh[922]: Success
Feb 13 19:03:32.384566 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:03:32.499295 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:03:32.513340 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:03:32.520850 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:03:32.561971 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:03:32.562041 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:32.562067 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:03:32.564827 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:03:32.564862 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:03:32.656975 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:03:32.697460 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:03:32.701450 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:03:32.718164 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:03:32.725212 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:03:32.756996 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:32.757067 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:32.758231 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:03:32.766988 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:03:32.784539 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:03:32.787369 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:32.796532 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:03:32.808276 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:03:32.913134 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:03:32.923290 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:03:32.985901 systemd-networkd[1114]: lo: Link UP
Feb 13 19:03:32.985993 systemd-networkd[1114]: lo: Gained carrier
Feb 13 19:03:32.991097 systemd-networkd[1114]: Enumeration completed
Feb 13 19:03:32.991865 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:32.991872 systemd-networkd[1114]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:03:32.992026 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:03:32.994267 systemd[1]: Reached target network.target - Network.
Feb 13 19:03:33.008366 systemd-networkd[1114]: eth0: Link UP
Feb 13 19:03:33.008384 systemd-networkd[1114]: eth0: Gained carrier
Feb 13 19:03:33.008403 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:33.030054 systemd-networkd[1114]: eth0: DHCPv4 address 172.31.18.68/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:03:33.188075 ignition[1028]: Ignition 2.20.0
Feb 13 19:03:33.188107 ignition[1028]: Stage: fetch-offline
Feb 13 19:03:33.188680 ignition[1028]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:33.189771 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:33.191528 ignition[1028]: Ignition finished successfully
Feb 13 19:03:33.197413 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:03:33.218912 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:03:33.243605 ignition[1124]: Ignition 2.20.0
Feb 13 19:03:33.243644 ignition[1124]: Stage: fetch
Feb 13 19:03:33.244498 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:33.244611 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:33.244814 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:33.257085 ignition[1124]: PUT result: OK
Feb 13 19:03:33.259664 ignition[1124]: parsed url from cmdline: ""
Feb 13 19:03:33.259690 ignition[1124]: no config URL provided
Feb 13 19:03:33.259707 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:03:33.259735 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:03:33.259772 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:33.261432 ignition[1124]: PUT result: OK
Feb 13 19:03:33.261531 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:03:33.266780 ignition[1124]: GET result: OK
Feb 13 19:03:33.268267 ignition[1124]: parsing config with SHA512: 719983811b2367b6372659026d16460d965d0b2e954c64ad20fc16e7d1dd5a00e061e6534e8c122d065fa74bb495d3a9707b5305d72d89bdadf12cbc3ecf2a95
Feb 13 19:03:33.285388 unknown[1124]: fetched base config from "system"
Feb 13 19:03:33.285413 unknown[1124]: fetched base config from "system"
Feb 13 19:03:33.286495 ignition[1124]: fetch: fetch complete
Feb 13 19:03:33.285427 unknown[1124]: fetched user config from "aws"
Feb 13 19:03:33.286508 ignition[1124]: fetch: fetch passed
Feb 13 19:03:33.286609 ignition[1124]: Ignition finished successfully
Feb 13 19:03:33.299010 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:03:33.308271 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:03:33.338721 ignition[1130]: Ignition 2.20.0
Feb 13 19:03:33.338751 ignition[1130]: Stage: kargs
Feb 13 19:03:33.339435 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:33.339463 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:33.339638 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:33.344674 ignition[1130]: PUT result: OK
Feb 13 19:03:33.354119 ignition[1130]: kargs: kargs passed
Feb 13 19:03:33.354517 ignition[1130]: Ignition finished successfully
Feb 13 19:03:33.360990 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:03:33.375788 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:03:33.398191 ignition[1136]: Ignition 2.20.0
Feb 13 19:03:33.398212 ignition[1136]: Stage: disks
Feb 13 19:03:33.398861 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:33.398886 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:33.399089 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:33.400363 ignition[1136]: PUT result: OK
Feb 13 19:03:33.406559 ignition[1136]: disks: disks passed
Feb 13 19:03:33.406651 ignition[1136]: Ignition finished successfully
Feb 13 19:03:33.417007 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:03:33.419668 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:03:33.421897 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:03:33.424207 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:03:33.426131 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:03:33.428132 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:03:33.439294 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:03:33.497702 systemd-fsck[1145]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:03:33.504006 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:03:33.520164 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:03:33.595971 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:03:33.597253 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:03:33.601524 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:03:33.622095 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:03:33.628475 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:03:33.632119 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:03:33.632214 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:03:33.632333 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:03:33.655980 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1164)
Feb 13 19:03:33.659949 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:33.659987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:33.660024 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:03:33.670220 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:03:33.675046 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:03:33.685985 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:03:33.694175 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:03:34.117826 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:03:34.126894 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:03:34.149294 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:03:34.157098 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:03:34.550101 systemd-networkd[1114]: eth0: Gained IPv6LL
Feb 13 19:03:34.690516 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:03:34.706122 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:03:34.713231 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:03:34.727547 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:03:34.729722 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:34.773910 ignition[1277]: INFO : Ignition 2.20.0
Feb 13 19:03:34.773910 ignition[1277]: INFO : Stage: mount
Feb 13 19:03:34.779156 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:34.779156 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:34.779156 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:34.779156 ignition[1277]: INFO : PUT result: OK
Feb 13 19:03:34.792500 ignition[1277]: INFO : mount: mount passed
Feb 13 19:03:34.792500 ignition[1277]: INFO : Ignition finished successfully
Feb 13 19:03:34.783723 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:03:34.795609 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:03:34.813269 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:03:34.830301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:03:34.860974 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1288)
Feb 13 19:03:34.864613 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:34.864651 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:34.864677 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:03:34.870963 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:03:34.874548 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:03:34.907464 ignition[1304]: INFO : Ignition 2.20.0
Feb 13 19:03:34.907464 ignition[1304]: INFO : Stage: files
Feb 13 19:03:34.911018 ignition[1304]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:34.911018 ignition[1304]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:34.911018 ignition[1304]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:34.917609 ignition[1304]: INFO : PUT result: OK
Feb 13 19:03:34.922482 ignition[1304]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:03:34.927726 ignition[1304]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:03:34.927726 ignition[1304]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:03:34.950434 ignition[1304]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:03:34.955650 ignition[1304]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:03:34.958684 unknown[1304]: wrote ssh authorized keys file for user: core
Feb 13 19:03:34.962685 ignition[1304]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:03:34.965304 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:03:34.969075 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:03:34.969075 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:03:34.969075 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:03:35.117711 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:03:35.552223 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:03:35.552223 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:03:35.552223 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:03:36.076651 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 19:03:36.192637 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:03:36.192637 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:03:36.199179 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:03:36.512885 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Feb 13 19:03:36.834490 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:03:36.834490 ignition[1304]: INFO : files: op(d): [started] processing unit "containerd.service"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: op(d): [finished] processing unit "containerd.service"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:03:36.841858 ignition[1304]: INFO : files: files passed
Feb 13 19:03:36.841858 ignition[1304]: INFO : Ignition finished successfully
Feb 13 19:03:36.883973 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:03:36.901250 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:03:36.909958 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:03:36.912793 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:03:36.917309 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:03:36.969545 initrd-setup-root-after-ignition[1333]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:03:36.973656 initrd-setup-root-after-ignition[1333]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:03:36.976686 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:03:36.980083 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:03:36.983276 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:03:36.996413 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:03:37.043131 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:03:37.043508 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:03:37.051076 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:03:37.053028 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:03:37.055031 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:03:37.069266 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:03:37.105998 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:03:37.117237 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:03:37.143247 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:03:37.145777 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:03:37.150122 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:03:37.155334 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:03:37.155568 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:03:37.158599 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:03:37.164012 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:03:37.166515 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:03:37.170316 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:03:37.172833 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:03:37.182444 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:03:37.184703 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:03:37.190984 systemd[1]: Stopped target sysinit.target - System Initialization.
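The operation trail above (ops (1) through (12)) is Ignition replaying a declarative config: users first, then files and links, then systemd units and presets, then the result file. A rough reconstruction of the shape of a config that would produce this run, emitted as Ignition v3 JSON; the ssh key, unit bodies, and drop-in text are placeholders, since the log records only names, paths, and URLs, not the instance's actual user data:

    import json

    # Skeleton of an Ignition v3 config matching the files-stage ops above.
    # Key material and unit/drop-in contents are placeholders.
    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {"users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf",
                          "contents": "[Service]\n# placeholder\n"}]},
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\n# placeholder\n"},
        ]},
    }
    print(json.dumps(config, indent=2))

The sections map one-to-one onto the logged ops: passwd.users drives ensureUsers, storage.files and storage.links the createFiles ops, and systemd.units the unit, drop-in, and preset ops.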
Feb 13 19:03:37.193517 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:03:37.198677 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:03:37.200442 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:03:37.200779 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:03:37.207848 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:03:37.210199 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:03:37.216347 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:03:37.220034 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:03:37.225072 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:03:37.225806 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:03:37.231334 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:03:37.231561 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:03:37.235920 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:03:37.237089 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:03:37.256135 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:03:37.261763 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:03:37.263425 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:03:37.282428 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:03:37.286034 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:03:37.299087 ignition[1357]: INFO : Ignition 2.20.0 Feb 13 19:03:37.299087 ignition[1357]: INFO : Stage: umount Feb 13 19:03:37.299087 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:03:37.299087 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:03:37.299087 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:03:37.299087 ignition[1357]: INFO : PUT result: OK Feb 13 19:03:37.286339 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:03:37.299124 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:03:37.299407 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:03:37.326103 ignition[1357]: INFO : umount: umount passed Feb 13 19:03:37.326103 ignition[1357]: INFO : Ignition finished successfully Feb 13 19:03:37.339661 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:03:37.345013 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:03:37.345890 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:03:37.353265 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:03:37.353508 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:03:37.363039 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:03:37.363431 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:03:37.371774 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:03:37.371976 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:03:37.374010 systemd[1]: ignition-kargs.service: Deactivated successfully. 
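Each Ignition stage (files earlier, umount here) opens with "PUT http://169.254.169.254/latest/api/token: attempt #1", the IMDSv2 session-token handshake: a short-lived token obtained via PUT must accompany every later metadata GET. A minimal stdlib sketch of that exchange; the TTL value and the instance-id path are illustrative, not what Ignition itself requests:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # IMDSv2 step 1: PUT to /latest/api/token returns a session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # IMDSv2 step 2: every metadata GET presents the token as a header.
    meta = urllib.request.Request(
        f"{IMDS}/latest/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(meta, timeout=2).read().decode())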
Feb 13 19:03:37.374095 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:03:37.376031 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:03:37.376110 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:03:37.379150 systemd[1]: Stopped target network.target - Network. Feb 13 19:03:37.391410 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:03:37.391515 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:03:37.393722 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:03:37.395371 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:03:37.403090 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:03:37.405380 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:03:37.407024 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:03:37.408828 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:03:37.408905 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:03:37.410761 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:03:37.410826 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:03:37.412687 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:03:37.412769 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:03:37.414630 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:03:37.414708 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:03:37.416739 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:03:37.416825 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:03:37.423645 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:03:37.426597 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:03:37.456046 systemd-networkd[1114]: eth0: DHCPv6 lease lost Feb 13 19:03:37.459424 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:03:37.459636 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:03:37.481261 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:03:37.483618 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:03:37.488552 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:03:37.488674 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:03:37.500078 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:03:37.507623 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:03:37.507748 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:03:37.510420 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:03:37.510505 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:37.512866 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:03:37.512965 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:03:37.515551 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:03:37.515627 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 19:03:37.520130 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:03:37.551338 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:03:37.551829 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:03:37.563030 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:03:37.563323 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:03:37.569438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:03:37.569521 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:03:37.572033 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:03:37.573914 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:03:37.581912 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:03:37.582038 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:03:37.584180 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:03:37.584262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:03:37.601343 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:03:37.604857 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:03:37.605001 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:03:37.608050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:03:37.608133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:03:37.612160 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:03:37.612343 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:03:37.645373 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:03:37.645752 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:03:37.653177 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:03:37.666697 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:03:37.698945 systemd[1]: Switching root. Feb 13 19:03:37.744071 systemd-journald[252]: Journal stopped Feb 13 19:03:40.399023 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Feb 13 19:03:40.399164 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:03:40.399210 kernel: SELinux: policy capability open_perms=1 Feb 13 19:03:40.399243 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:03:40.399283 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:03:40.399315 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:03:40.399343 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:03:40.399377 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:03:40.399407 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:03:40.399438 kernel: audit: type=1403 audit(1739473418.582:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:03:40.399477 systemd[1]: Successfully loaded SELinux policy in 73.165ms. Feb 13 19:03:40.399530 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.739ms. 
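The "audit(1739473418.582:2)" stamp in the SELinux policy-load record encodes epoch seconds plus an event serial; decoded, it lands at 19:03:38 UTC, inside the gap while the journal was stopped for the root switch:

    from datetime import datetime, timezone

    # "audit(1739473418.582:2)" = <epoch seconds>.<millis>:<serial>
    ts, serial = "1739473418.582:2".split(":")
    when = datetime.fromtimestamp(float(ts), tz=timezone.utc)
    print(when, "serial", serial)  # 2025-02-13 19:03:38.582000+00:00 serial 2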
Feb 13 19:03:40.399563 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:03:40.399597 systemd[1]: Detected virtualization amazon. Feb 13 19:03:40.399628 systemd[1]: Detected architecture arm64. Feb 13 19:03:40.399658 systemd[1]: Detected first boot. Feb 13 19:03:40.399692 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:03:40.399724 zram_generator::config[1416]: No configuration found. Feb 13 19:03:40.399758 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:03:40.399790 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:03:40.399820 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:03:40.399854 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:03:40.399883 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:03:40.399914 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:03:40.400012 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:03:40.400047 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:03:40.400080 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:03:40.400113 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:03:40.400145 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:03:40.400180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:03:40.400210 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:03:40.400241 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:03:40.400272 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:03:40.400306 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:03:40.400338 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:03:40.400368 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:03:40.400397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:03:40.400427 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:03:40.400457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:03:40.400488 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:03:40.400540 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:03:40.400576 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:03:40.400607 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:03:40.400638 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:03:40.400673 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
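"Detected virtualization amazon", "Detected first boot", and "Initializing machine ID from VM UUID" summarize two early PID 1 probes. Roughly, and only as the gist (systemd's real logic lives in C; this sketch leans on the systemd-detect-virt CLI):

    import subprocess
    from pathlib import Path

    # systemd-detect-virt prints the hypervisor name ("amazon" above) and
    # exits non-zero on bare metal.
    virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
    print("virtualization:", virt.stdout.strip() or "none")

    # First boot is detected from /etc/machine-id being absent, empty, or
    # "uninitialized"; PID 1 then commits a fresh ID, which is the
    # "Initializing machine ID from VM UUID" step above.
    mid = Path("/etc/machine-id")
    text = mid.read_text().strip() if mid.exists() else ""
    print("first boot:", text in ("", "uninitialized"))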
Feb 13 19:03:40.400703 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:03:40.400732 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:03:40.400760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:03:40.400790 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:03:40.400819 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:03:40.400851 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:03:40.400880 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:03:40.400909 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:03:40.400973 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:03:40.401033 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:03:40.401066 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:03:40.401098 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:03:40.401129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:03:40.401161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:03:40.401197 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:03:40.401226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:03:40.401255 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:03:40.401284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:03:40.401315 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:03:40.401343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:03:40.401373 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:03:40.401404 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 19:03:40.401441 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 19:03:40.401470 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:03:40.401500 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:03:40.401528 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:03:40.401558 kernel: fuse: init (API version 7.39) Feb 13 19:03:40.401585 kernel: loop: module loaded Feb 13 19:03:40.401613 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:03:40.401644 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:03:40.401674 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:03:40.401707 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:03:40.401736 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:03:40.401767 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 13 19:03:40.401795 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:03:40.401825 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:03:40.401854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:03:40.401885 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:03:40.401913 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:03:40.402070 systemd-journald[1516]: Collecting audit messages is disabled. Feb 13 19:03:40.402136 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:03:40.402170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:03:40.402200 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:03:40.402228 systemd-journald[1516]: Journal started Feb 13 19:03:40.402274 systemd-journald[1516]: Runtime Journal (/run/log/journal/ec2ccdfec55a10171ed1d776a0bd4734) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:03:40.410949 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:03:40.415489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:03:40.415875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:03:40.420779 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:03:40.421178 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:03:40.425532 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:03:40.425890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:03:40.428867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:03:40.432571 kernel: ACPI: bus type drm_connector registered Feb 13 19:03:40.432823 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:03:40.438188 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:03:40.438600 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:03:40.442999 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:03:40.473199 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:03:40.485273 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:03:40.497903 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:03:40.500098 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:03:40.515870 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:03:40.523271 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:03:40.525706 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:03:40.550179 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:03:40.552591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:03:40.560049 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 19:03:40.572256 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:03:40.587917 systemd-journald[1516]: Time spent on flushing to /var/log/journal/ec2ccdfec55a10171ed1d776a0bd4734 is 55.012ms for 895 entries. Feb 13 19:03:40.587917 systemd-journald[1516]: System Journal (/var/log/journal/ec2ccdfec55a10171ed1d776a0bd4734) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:03:40.681211 systemd-journald[1516]: Received client request to flush runtime journal. Feb 13 19:03:40.588539 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:03:40.591956 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:03:40.609764 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:03:40.612222 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:03:40.686913 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:03:40.696429 systemd-tmpfiles[1569]: ACLs are not supported, ignoring. Feb 13 19:03:40.696473 systemd-tmpfiles[1569]: ACLs are not supported, ignoring. Feb 13 19:03:40.705652 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:03:40.723356 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:03:40.731074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:40.734334 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:03:40.761309 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:03:40.783549 udevadm[1582]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:03:40.832661 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:03:40.850321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:03:40.893778 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Feb 13 19:03:40.893819 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Feb 13 19:03:40.902907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:03:41.559109 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:03:41.571356 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:03:41.627413 systemd-udevd[1597]: Using default interface naming scheme 'v255'. Feb 13 19:03:41.685785 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:03:41.701650 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:03:41.737461 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:03:41.833810 (udev-worker)[1598]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:41.860948 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 13 19:03:41.875177 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Feb 13 19:03:42.041747 systemd-networkd[1601]: lo: Link UP Feb 13 19:03:42.041762 systemd-networkd[1601]: lo: Gained carrier Feb 13 19:03:42.045633 systemd-networkd[1601]: Enumeration completed Feb 13 19:03:42.046071 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:03:42.048295 systemd-networkd[1601]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:03:42.048302 systemd-networkd[1601]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:03:42.051420 systemd-networkd[1601]: eth0: Link UP Feb 13 19:03:42.051726 systemd-networkd[1601]: eth0: Gained carrier Feb 13 19:03:42.051757 systemd-networkd[1601]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:03:42.059337 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:03:42.068116 systemd-networkd[1601]: eth0: DHCPv4 address 172.31.18.68/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:03:42.131984 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1600) Feb 13 19:03:42.162563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:03:42.349716 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:03:42.367712 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:03:42.371114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:03:42.381264 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:03:42.415475 lvm[1726]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:03:42.455417 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:03:42.458395 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:03:42.467231 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:03:42.490380 lvm[1729]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:03:42.531529 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:03:42.534372 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:03:42.537292 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:03:42.537542 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:03:42.539674 systemd[1]: Reached target machines.target - Containers. Feb 13 19:03:42.543675 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:03:42.554229 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:03:42.565224 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:03:42.567862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:03:42.570533 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
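The DHCPv4 lease logged above, 172.31.18.68/20 from gateway 172.31.16.1, is easy to sanity-check with the stdlib: the gateway sits inside the same /20, which spans 4096 addresses.

    import ipaddress

    lease = ipaddress.ip_interface("172.31.18.68/20")
    print(lease.network)                # 172.31.16.0/20
    print(lease.network.num_addresses)  # 4096
    print(ipaddress.ip_address("172.31.16.1") in lease.network)  # True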
Feb 13 19:03:42.589347 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:03:42.595744 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:03:42.602441 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:03:42.632740 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:03:42.634147 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:03:42.644379 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:03:42.654036 kernel: loop0: detected capacity change from 0 to 113536 Feb 13 19:03:42.744313 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:03:42.778009 kernel: loop1: detected capacity change from 0 to 53784 Feb 13 19:03:42.876111 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:03:42.941539 kernel: loop3: detected capacity change from 0 to 116808 Feb 13 19:03:43.040975 kernel: loop4: detected capacity change from 0 to 113536 Feb 13 19:03:43.061985 kernel: loop5: detected capacity change from 0 to 53784 Feb 13 19:03:43.076000 kernel: loop6: detected capacity change from 0 to 194096 Feb 13 19:03:43.103952 kernel: loop7: detected capacity change from 0 to 116808 Feb 13 19:03:43.116570 (sd-merge)[1751]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:03:43.117755 (sd-merge)[1751]: Merged extensions into '/usr'. Feb 13 19:03:43.126181 systemd[1]: Reloading requested from client PID 1737 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:03:43.126216 systemd[1]: Reloading... Feb 13 19:03:43.245984 zram_generator::config[1779]: No configuration found. Feb 13 19:03:43.517803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:03:43.575106 systemd-networkd[1601]: eth0: Gained IPv6LL Feb 13 19:03:43.655919 systemd[1]: Reloading finished in 528 ms. Feb 13 19:03:43.661614 ldconfig[1733]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:03:43.681147 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:03:43.684532 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:03:43.687725 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:03:43.705235 systemd[1]: Starting ensure-sysext.service... Feb 13 19:03:43.718152 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:03:43.739736 systemd[1]: Reloading requested from client PID 1840 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:03:43.739776 systemd[1]: Reloading... Feb 13 19:03:43.768064 systemd-tmpfiles[1841]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:03:43.769561 systemd-tmpfiles[1841]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:03:43.773077 systemd-tmpfiles[1841]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:03:43.773577 systemd-tmpfiles[1841]: ACLs are not supported, ignoring. 
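The (sd-merge) lines are systemd-sysext overlaying the four extension images onto /usr. It discovers images in a small set of fixed directories; /etc/extensions is the one Ignition populated earlier with the kubernetes.raw symlink. A sketch of that discovery step:

    from pathlib import Path

    # systemd-sysext looks for *.raw images (or plain directory trees) in
    # these locations before merging them over /usr and /opt.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        root = Path(d)
        if not root.is_dir():
            continue
        for entry in sorted(root.iterdir()):
            # e.g. kubernetes.raw -> /opt/extensions/kubernetes/...-arm64.raw
            print(entry, "->", entry.resolve())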
Feb 13 19:03:43.773720 systemd-tmpfiles[1841]: ACLs are not supported, ignoring. Feb 13 19:03:43.783467 systemd-tmpfiles[1841]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:03:43.785179 systemd-tmpfiles[1841]: Skipping /boot Feb 13 19:03:43.807592 systemd-tmpfiles[1841]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:03:43.807998 systemd-tmpfiles[1841]: Skipping /boot Feb 13 19:03:43.900974 zram_generator::config[1873]: No configuration found. Feb 13 19:03:44.129883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:03:44.266328 systemd[1]: Reloading finished in 525 ms. Feb 13 19:03:44.299031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:03:44.327240 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:03:44.334212 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:03:44.347745 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:03:44.361678 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:03:44.380223 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:03:44.395358 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:03:44.401460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:03:44.407846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:03:44.427562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:03:44.430912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:03:44.446860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:03:44.450403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:03:44.471730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:03:44.472134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:03:44.481729 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:03:44.486171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:03:44.486514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:03:44.491714 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:03:44.494578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:03:44.531428 systemd[1]: Finished ensure-sysext.service. Feb 13 19:03:44.540065 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:03:44.552833 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:03:44.560448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Feb 13 19:03:44.562814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:03:44.562892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:03:44.563016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:03:44.563150 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:03:44.574181 augenrules[1971]: No rules Feb 13 19:03:44.578215 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:03:44.581084 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:03:44.581578 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:03:44.598539 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:03:44.599483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:03:44.641847 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:03:44.653818 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:03:44.656454 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:03:44.677531 systemd-resolved[1938]: Positive Trust Anchors: Feb 13 19:03:44.677597 systemd-resolved[1938]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:03:44.677660 systemd-resolved[1938]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:03:44.686159 systemd-resolved[1938]: Defaulting to hostname 'linux'. Feb 13 19:03:44.689363 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:03:44.692114 systemd[1]: Reached target network.target - Network. Feb 13 19:03:44.693894 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:03:44.695916 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:03:44.698167 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:03:44.700597 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:03:44.702885 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:03:44.705462 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:03:44.707634 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:03:44.709904 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Feb 13 19:03:44.712206 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:03:44.712258 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:03:44.713908 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:03:44.717173 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:03:44.722312 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:03:44.739370 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:03:44.743694 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:03:44.745991 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:03:44.747909 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:03:44.749946 systemd[1]: System is tainted: cgroupsv1 Feb 13 19:03:44.750022 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:03:44.750067 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:03:44.752885 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:03:44.762304 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:03:44.782212 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:03:44.787799 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:03:44.804185 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:03:44.810363 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:03:44.828773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:44.837779 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:03:44.859410 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:03:44.868337 jq[1991]: false Feb 13 19:03:44.878274 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:03:44.896244 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:03:44.924114 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:03:44.927091 dbus-daemon[1990]: [system] SELinux support is enabled Feb 13 19:03:44.932226 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:03:44.940382 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:03:44.949393 dbus-daemon[1990]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1601 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:03:44.968965 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:03:44.972437 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
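"System is tainted: cgroupsv1" records that this boot pinned the legacy cgroup hierarchy (selected via the /etc/flatcar-cgroupv1 flag file Ignition wrote earlier). The usual runtime test for which hierarchy is active:

    from pathlib import Path

    # On a unified (cgroup v2) host, /sys/fs/cgroup is itself a cgroup2
    # mount and exposes cgroup.controllers; on legacy v1 setups it is a
    # tmpfs of per-controller mounts and that file is absent.
    v2 = Path("/sys/fs/cgroup/cgroup.controllers").is_file()
    print("cgroup v2 (unified)" if v2 else "cgroup v1 (legacy/hybrid)")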
Feb 13 19:03:44.978148 extend-filesystems[1993]: Found loop4
Feb 13 19:03:44.980400 extend-filesystems[1993]: Found loop5
Feb 13 19:03:44.980400 extend-filesystems[1993]: Found loop6
Feb 13 19:03:44.980400 extend-filesystems[1993]: Found loop7
Feb 13 19:03:44.980400 extend-filesystems[1993]: Found nvme0n1
Feb 13 19:03:44.980400 extend-filesystems[1993]: Found nvme0n1p1
Feb 13 19:03:44.980400 extend-filesystems[1993]: Found nvme0n1p2
Feb 13 19:03:44.980400 extend-filesystems[1993]: Found nvme0n1p3
Feb 13 19:03:44.980400 extend-filesystems[1993]: Found usr
Feb 13 19:03:45.010168 extend-filesystems[1993]: Found nvme0n1p4
Feb 13 19:03:45.010168 extend-filesystems[1993]: Found nvme0n1p6
Feb 13 19:03:45.010168 extend-filesystems[1993]: Found nvme0n1p7
Feb 13 19:03:45.010168 extend-filesystems[1993]: Found nvme0n1p9
Feb 13 19:03:45.010168 extend-filesystems[1993]: Checking size of /dev/nvme0n1p9
Feb 13 19:03:44.991505 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:03:45.025101 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:03:45.030774 ntpd[1997]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting
Feb 13 19:03:45.048166 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting
Feb 13 19:03:45.048166 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:03:45.048166 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: ----------------------------------------------------
Feb 13 19:03:45.048166 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:03:45.048166 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:03:45.048166 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: corporation. Support and training for ntp-4 are
Feb 13 19:03:45.048166 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: available at https://www.nwtime.org/support
Feb 13 19:03:45.048166 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: ----------------------------------------------------
Feb 13 19:03:45.033944 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:03:45.030821 ntpd[1997]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:03:45.030841 ntpd[1997]: ----------------------------------------------------
Feb 13 19:03:45.030860 ntpd[1997]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:03:45.030879 ntpd[1997]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:03:45.030897 ntpd[1997]: corporation. Support and training for ntp-4 are
Feb 13 19:03:45.030915 ntpd[1997]: available at https://www.nwtime.org/support
Feb 13 19:03:45.030956 ntpd[1997]: ----------------------------------------------------
Feb 13 19:03:45.051893 ntpd[1997]: proto: precision = 0.108 usec (-23)
Feb 13 19:03:45.057636 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: proto: precision = 0.108 usec (-23)
Feb 13 19:03:45.057636 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: basedate set to 2025-02-01
Feb 13 19:03:45.057636 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:03:45.052836 ntpd[1997]: basedate set to 2025-02-01
Feb 13 19:03:45.052871 ntpd[1997]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:03:45.060772 ntpd[1997]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:03:45.064802 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:03:45.064802 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:03:45.064802 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:03:45.064802 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: Listen normally on 3 eth0 172.31.18.68:123
Feb 13 19:03:45.064802 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: Listen normally on 4 lo [::1]:123
Feb 13 19:03:45.064802 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: Listen normally on 5 eth0 [fe80::438:35ff:fe7d:5583%2]:123
Feb 13 19:03:45.064802 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:03:45.060865 ntpd[1997]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:03:45.064209 ntpd[1997]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:03:45.064282 ntpd[1997]: Listen normally on 3 eth0 172.31.18.68:123
Feb 13 19:03:45.064349 ntpd[1997]: Listen normally on 4 lo [::1]:123
Feb 13 19:03:45.064426 ntpd[1997]: Listen normally on 5 eth0 [fe80::438:35ff:fe7d:5583%2]:123
Feb 13 19:03:45.064511 ntpd[1997]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:03:45.083716 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:03:45.085364 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:03:45.091751 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:03:45.092316 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:03:45.096018 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:03:45.096080 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:03:45.096248 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:03:45.096248 ntpd[1997]: 13 Feb 19:03:45 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:03:45.128803 extend-filesystems[1993]: Resized partition /dev/nvme0n1p9
Feb 13 19:03:45.136606 jq[2021]: true
Feb 13 19:03:45.159142 update_engine[2017]: I20250213 19:03:45.141878 2017 main.cc:92] Flatcar Update Engine starting
Feb 13 19:03:45.159142 update_engine[2017]: I20250213 19:03:45.151001 2017 update_check_scheduler.cc:74] Next update check in 7m22s
Feb 13 19:03:45.210302 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 19:03:45.210348 extend-filesystems[2038]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:03:45.154346 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
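ntpd's "proto: precision = 0.108 usec (-23)" pairs a measured clock-read precision with its power-of-two exponent, and the two readings are numerically consistent:

    import math

    # log2(0.108 microseconds) ~= -23.1, which ntpd rounds to the -23 it
    # reports in parentheses; 2**-23 is ~0.119 usec, the same magnitude.
    print(math.log2(0.108e-6))  # ~ -23.14
    print(2 ** -23)             # ~ 1.19e-07 seconds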
Feb 13 19:03:45.154839 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:03:45.171903 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:03:45.241227 jq[2042]: true Feb 13 19:03:45.279026 (ntainerd)[2049]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:03:45.295017 dbus-daemon[1990]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:03:45.304700 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:03:45.308170 tar[2035]: linux-arm64/helm Feb 13 19:03:45.318407 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:03:45.318478 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:03:45.325777 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:03:45.329157 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:03:45.329214 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:03:45.332880 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:03:45.345250 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:03:45.379097 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:03:45.419103 extend-filesystems[2038]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:03:45.419103 extend-filesystems[2038]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:03:45.419103 extend-filesystems[2038]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:03:45.429896 extend-filesystems[1993]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:03:45.437535 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:03:45.438140 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:03:45.440239 coreos-metadata[1989]: Feb 13 19:03:45.440 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:03:45.462369 coreos-metadata[1989]: Feb 13 19:03:45.455 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:03:45.463719 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:03:45.481741 coreos-metadata[1989]: Feb 13 19:03:45.480 INFO Fetch successful Feb 13 19:03:45.481741 coreos-metadata[1989]: Feb 13 19:03:45.481 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:03:45.485203 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
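The first-boot root grow is now complete: resize2fs took /dev/nvme0n1p9 from 553472 to 1489915 blocks at 4 KiB each ("(4k) blocks" in the log), i.e. roughly 2.1 GiB to 5.7 GiB:

    # ext4 block size here is 4 KiB.
    old = 553_472 * 4096    # 2,267,021,312 bytes
    new = 1_489_915 * 4096  # 6,102,691,840 bytes
    print(f"{old / 2**30:.2f} GiB -> {new / 2**30:.2f} GiB")  # 2.11 -> 5.68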
Feb 13 19:03:45.496167 coreos-metadata[1989]: Feb 13 19:03:45.491 INFO Fetch successful Feb 13 19:03:45.496167 coreos-metadata[1989]: Feb 13 19:03:45.491 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:03:45.496167 coreos-metadata[1989]: Feb 13 19:03:45.493 INFO Fetch successful Feb 13 19:03:45.496167 coreos-metadata[1989]: Feb 13 19:03:45.493 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:03:45.496626 coreos-metadata[1989]: Feb 13 19:03:45.496 INFO Fetch successful Feb 13 19:03:45.496626 coreos-metadata[1989]: Feb 13 19:03:45.496 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.510 INFO Fetch failed with 404: resource not found Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.510 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.510 INFO Fetch successful Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.510 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.510 INFO Fetch successful Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.510 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.510 INFO Fetch successful Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.511 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.511 INFO Fetch successful Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.511 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:03:45.512566 coreos-metadata[1989]: Feb 13 19:03:45.511 INFO Fetch successful Feb 13 19:03:45.611066 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:03:45.615262 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:03:45.634953 bash[2103]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:03:45.640492 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:03:45.673436 systemd[1]: Starting sshkeys.service... Feb 13 19:03:45.694330 systemd-logind[2015]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:03:45.694384 systemd-logind[2015]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:03:45.694730 systemd-logind[2015]: New seat seat0. Feb 13 19:03:45.697058 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:03:45.749688 amazon-ssm-agent[2081]: Initializing new seelog logger Feb 13 19:03:45.749688 amazon-ssm-agent[2081]: New Seelog Logger Creation Complete Feb 13 19:03:45.749688 amazon-ssm-agent[2081]: 2025/02/13 19:03:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:45.749688 amazon-ssm-agent[2081]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:45.749688 amazon-ssm-agent[2081]: 2025/02/13 19:03:45 processing appconfig overrides Feb 13 19:03:45.749688 amazon-ssm-agent[2081]: 2025/02/13 19:03:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:03:45.749688 amazon-ssm-agent[2081]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:45.749688 amazon-ssm-agent[2081]: 2025/02/13 19:03:45 processing appconfig overrides Feb 13 19:03:45.770098 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2110) Feb 13 19:03:45.759381 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:03:45.775958 amazon-ssm-agent[2081]: 2025/02/13 19:03:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:45.775958 amazon-ssm-agent[2081]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:45.775958 amazon-ssm-agent[2081]: 2025/02/13 19:03:45 processing appconfig overrides Feb 13 19:03:45.775958 amazon-ssm-agent[2081]: 2025-02-13 19:03:45 INFO Proxy environment variables: Feb 13 19:03:45.784967 amazon-ssm-agent[2081]: 2025/02/13 19:03:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:45.784967 amazon-ssm-agent[2081]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:45.784967 amazon-ssm-agent[2081]: 2025/02/13 19:03:45 processing appconfig overrides Feb 13 19:03:45.790701 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:03:45.853140 locksmithd[2067]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:03:45.901914 amazon-ssm-agent[2081]: 2025-02-13 19:03:45 INFO no_proxy: Feb 13 19:03:46.006039 amazon-ssm-agent[2081]: 2025-02-13 19:03:45 INFO https_proxy: Feb 13 19:03:46.067191 coreos-metadata[2118]: Feb 13 19:03:46.066 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:03:46.076145 coreos-metadata[2118]: Feb 13 19:03:46.076 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:03:46.080676 coreos-metadata[2118]: Feb 13 19:03:46.080 INFO Fetch successful Feb 13 19:03:46.080676 coreos-metadata[2118]: Feb 13 19:03:46.080 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:03:46.083556 coreos-metadata[2118]: Feb 13 19:03:46.083 INFO Fetch successful Feb 13 19:03:46.090087 unknown[2118]: wrote ssh authorized keys file for user: core Feb 13 19:03:46.111175 amazon-ssm-agent[2081]: 2025-02-13 19:03:45 INFO http_proxy: Feb 13 19:03:46.210257 amazon-ssm-agent[2081]: 2025-02-13 19:03:45 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:03:46.242225 containerd[2049]: time="2025-02-13T19:03:46.242099244Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:03:46.289508 dbus-daemon[1990]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:03:46.298097 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:03:46.307084 dbus-daemon[1990]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2064 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:03:46.311248 amazon-ssm-agent[2081]: 2025-02-13 19:03:45 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:03:46.322632 systemd[1]: Starting polkit.service - Authorization Manager... 
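The slice name system-coreos\x2dmetadata\x2dsshkeys.slice above illustrates systemd unit-name escaping: '-' separates hierarchy levels in slice names, so the literal dashes in "coreos-metadata-sshkeys" must be written as \x2d. A simplified model of systemd-escape(1) that reproduces this mangling; the real tool has more rules (leading dots, allowed punctuation), so treat this as illustrative only.

```python
def systemd_escape(name: str) -> str:
    """Simplified systemd-escape(1): '/' becomes '-', and any other byte
    that is not alphanumeric, '_' or a non-leading '.' is rewritten as
    \\xNN so it cannot be confused with the '-' hierarchy separator."""
    out = []
    for i, ch in enumerate(name):
        if ch == "/":
            out.append("-")                      # path separator -> '-'
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)                       # kept verbatim
        else:
            # everything else, including a literal '-', becomes \xNN
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

# Prints "coreos\x2dmetadata\x2dsshkeys", matching the slice in the log.
print(systemd_escape("coreos-metadata-sshkeys"))
```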
Feb 13 19:03:46.357880 update-ssh-keys[2186]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:03:46.359178 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:03:46.382701 systemd[1]: Finished sshkeys.service. Feb 13 19:03:46.410044 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO Agent will take identity from EC2 Feb 13 19:03:46.412372 polkitd[2206]: Started polkitd version 121 Feb 13 19:03:46.475337 polkitd[2206]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:03:46.475463 polkitd[2206]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:03:46.476455 polkitd[2206]: Finished loading, compiling and executing 2 rules Feb 13 19:03:46.481688 dbus-daemon[1990]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:03:46.482063 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:03:46.489065 polkitd[2206]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:03:46.509082 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:03:46.545371 containerd[2049]: time="2025-02-13T19:03:46.543736777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:46.557963 containerd[2049]: time="2025-02-13T19:03:46.557873821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:46.558132 containerd[2049]: time="2025-02-13T19:03:46.558001105Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:03:46.558132 containerd[2049]: time="2025-02-13T19:03:46.558040945Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:03:46.558385 containerd[2049]: time="2025-02-13T19:03:46.558343297Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:03:46.558446 containerd[2049]: time="2025-02-13T19:03:46.558392641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.558519697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.558558277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.558910477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.558961609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.558998041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.559023049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.559624357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.560330137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.561134617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:46.561612 containerd[2049]: time="2025-02-13T19:03:46.561173221Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:03:46.571980 containerd[2049]: time="2025-02-13T19:03:46.568159765Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:03:46.571980 containerd[2049]: time="2025-02-13T19:03:46.568376593Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:03:46.574561 systemd-hostnamed[2064]: Hostname set to <ip-172-31-18-68> (transient) Feb 13 19:03:46.575704 systemd-resolved[1938]: System hostname changed to 'ip-172-31-18-68'.
Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.586082413Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.586166377Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.586204489Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.586244281Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.586280833Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.586553857Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.587202781Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.587439961Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.587489377Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.587522749Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.587554501Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.587583301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.587612293Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:03:46.593978 containerd[2049]: time="2025-02-13T19:03:46.587642965Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:03:46.594597 containerd[2049]: time="2025-02-13T19:03:46.587675461Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:03:46.594597 containerd[2049]: time="2025-02-13T19:03:46.587710561Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:03:46.594597 containerd[2049]: time="2025-02-13T19:03:46.587740501Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:03:46.594597 containerd[2049]: time="2025-02-13T19:03:46.587769637Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:03:46.594597 containerd[2049]: time="2025-02-13T19:03:46.587810113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.594597 containerd[2049]: time="2025-02-13T19:03:46.587841649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.594597 containerd[2049]: time="2025-02-13T19:03:46.587871745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.594597 containerd[2049]: time="2025-02-13T19:03:46.587902717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.600300793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.600372301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.600404569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.600436825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.600487357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.602537029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.602603545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.602636389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.602666773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.602701117Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.602772073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.602817433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.602849377Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:03:46.603236 containerd[2049]: time="2025-02-13T19:03:46.603027721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:03:46.603875 containerd[2049]: time="2025-02-13T19:03:46.603191461Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:03:46.603875 containerd[2049]: time="2025-02-13T19:03:46.603229909Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:03:46.603875 containerd[2049]: time="2025-02-13T19:03:46.603261925Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:03:46.603875 containerd[2049]: time="2025-02-13T19:03:46.603285517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:03:46.603875 containerd[2049]: time="2025-02-13T19:03:46.603317593Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:03:46.603875 containerd[2049]: time="2025-02-13T19:03:46.603342217Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:03:46.603875 containerd[2049]: time="2025-02-13T19:03:46.603367561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:03:46.606842 containerd[2049]: time="2025-02-13T19:03:46.603875821Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:03:46.606842 containerd[2049]: time="2025-02-13T19:03:46.604238569Z" level=info msg="Connect containerd service" Feb 13 19:03:46.606842 containerd[2049]: time="2025-02-13T19:03:46.604315621Z" level=info msg="using legacy CRI server" Feb 13 19:03:46.606842 containerd[2049]: time="2025-02-13T19:03:46.604333117Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:03:46.606842 containerd[2049]: time="2025-02-13T19:03:46.605713357Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.607626649Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.607841641Z" level=info msg="Start subscribing containerd event" Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.607914313Z" level=info msg="Start recovering state" Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.608048317Z" level=info msg="Start event monitor" Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.608071309Z" level=info msg="Start snapshots syncer" Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.608091505Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.608114149Z" level=info msg="Start streaming server" Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.610274629Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.610385257Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:03:46.614251 containerd[2049]: time="2025-02-13T19:03:46.610495249Z" level=info msg="containerd successfully booted in 0.373095s" Feb 13 19:03:46.610664 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:03:46.614818 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:03:46.710964 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:03:46.808008 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:03:46.910028 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:03:47.009625 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:03:47.109972 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:03:47.215948 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO [Registrar] Starting registrar module Feb 13 19:03:47.315671 amazon-ssm-agent[2081]: 2025-02-13 19:03:46 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:03:47.356996 tar[2035]: linux-arm64/LICENSE Feb 13 19:03:47.356996 tar[2035]: linux-arm64/README.md Feb 13 19:03:47.393680 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:03:47.654570 amazon-ssm-agent[2081]: 2025-02-13 19:03:47 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:03:47.693992 amazon-ssm-agent[2081]: 2025-02-13 19:03:47 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:03:47.693992 amazon-ssm-agent[2081]: 2025-02-13 19:03:47 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:03:47.693992 amazon-ssm-agent[2081]: 2025-02-13 19:03:47 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:03:47.752295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
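At this point containerd is serving on /run/containerd/containerd.sock (plus the ttrpc socket). A hedged liveness check using the ctr CLI that ships with containerd; a production monitor would speak gRPC to the daemon directly rather than shelling out.

```python
import subprocess

def containerd_alive(address: str = "/run/containerd/containerd.sock") -> bool:
    """Return True if containerd answers a version query on its socket."""
    result = subprocess.run(
        ["ctr", "--address", address, "version"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

print(containerd_alive())
```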
Feb 13 19:03:47.754802 amazon-ssm-agent[2081]: 2025-02-13 19:03:47 INFO [CredentialRefresher] Next credential rotation will be in 30.191659206266667 minutes Feb 13 19:03:47.770570 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:03:48.718463 sshd_keygen[2039]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:03:48.741636 amazon-ssm-agent[2081]: 2025-02-13 19:03:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:03:48.809680 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:03:48.826412 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:03:48.842338 amazon-ssm-agent[2081]: 2025-02-13 19:03:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2276) started Feb 13 19:03:48.863619 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:03:48.864197 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:03:48.876892 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:03:48.901094 kubelet[2264]: E0213 19:03:48.899987 2264 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:03:48.909282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:03:48.909672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:03:48.928802 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:03:48.941735 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:03:48.944488 amazon-ssm-agent[2081]: 2025-02-13 19:03:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:03:48.955503 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:03:48.958404 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:03:48.961615 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:03:48.976677 systemd[1]: Startup finished in 10.904s (kernel) + 10.464s (userspace) = 21.369s. Feb 13 19:03:52.354329 systemd-resolved[1938]: Clock change detected. Flushing caches. Feb 13 19:03:53.407003 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:03:53.416121 systemd[1]: Started sshd@0-172.31.18.68:22-147.75.109.163:55280.service - OpenSSH per-connection server daemon (147.75.109.163:55280). Feb 13 19:03:53.608328 sshd[2307]: Accepted publickey for core from 147.75.109.163 port 55280 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:53.612216 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:53.626890 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:03:53.635095 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:03:53.639405 systemd-logind[2015]: New session 1 of user core. Feb 13 19:03:53.663210 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
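The kubelet exit above (run.go:74) is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is only written during kubeadm init/join, so until then the unit fails at startup and systemd keeps rescheduling it (the restart counter appears later in the log). A small sketch of detecting that state; systemctl is-failed is a real verb, the helper name is illustrative.

```python
from pathlib import Path
import subprocess

CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_bootstrap_pending() -> bool:
    """True while the config file kubeadm writes is still absent.

    While this holds, the kubelet exits immediately (the run.go:74
    error above) and systemd keeps restarting the unit."""
    return not CONFIG.exists()

if kubelet_bootstrap_pending():
    state = subprocess.run(
        ["systemctl", "is-failed", "kubelet.service"],
        capture_output=True, text=True,
    ).stdout.strip()
    print(f"kubelet waiting for bootstrap; unit state: {state}")
```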
Feb 13 19:03:53.675396 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:03:53.695001 (systemd)[2313]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:03:53.902750 systemd[2313]: Queued start job for default target default.target. Feb 13 19:03:53.903501 systemd[2313]: Created slice app.slice - User Application Slice. Feb 13 19:03:53.903559 systemd[2313]: Reached target paths.target - Paths. Feb 13 19:03:53.903591 systemd[2313]: Reached target timers.target - Timers. Feb 13 19:03:53.911828 systemd[2313]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:03:53.927808 systemd[2313]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:03:53.927945 systemd[2313]: Reached target sockets.target - Sockets. Feb 13 19:03:53.927978 systemd[2313]: Reached target basic.target - Basic System. Feb 13 19:03:53.928075 systemd[2313]: Reached target default.target - Main User Target. Feb 13 19:03:53.928143 systemd[2313]: Startup finished in 221ms. Feb 13 19:03:53.928403 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:03:53.938376 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:03:54.091015 systemd[1]: Started sshd@1-172.31.18.68:22-147.75.109.163:55286.service - OpenSSH per-connection server daemon (147.75.109.163:55286). Feb 13 19:03:54.279207 sshd[2325]: Accepted publickey for core from 147.75.109.163 port 55286 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:54.281601 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:54.290060 systemd-logind[2015]: New session 2 of user core. Feb 13 19:03:54.299138 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:03:54.427336 sshd[2328]: Connection closed by 147.75.109.163 port 55286 Feb 13 19:03:54.427219 sshd-session[2325]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:54.432074 systemd[1]: sshd@1-172.31.18.68:22-147.75.109.163:55286.service: Deactivated successfully. Feb 13 19:03:54.439300 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:03:54.439525 systemd-logind[2015]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:03:54.442752 systemd-logind[2015]: Removed session 2. Feb 13 19:03:54.456158 systemd[1]: Started sshd@2-172.31.18.68:22-147.75.109.163:55294.service - OpenSSH per-connection server daemon (147.75.109.163:55294). Feb 13 19:03:54.647905 sshd[2333]: Accepted publickey for core from 147.75.109.163 port 55294 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:54.649806 sshd-session[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:54.658995 systemd-logind[2015]: New session 3 of user core. Feb 13 19:03:54.673148 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:03:54.793364 sshd[2336]: Connection closed by 147.75.109.163 port 55294 Feb 13 19:03:54.793979 sshd-session[2333]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:54.801538 systemd[1]: sshd@2-172.31.18.68:22-147.75.109.163:55294.service: Deactivated successfully. Feb 13 19:03:54.806960 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:03:54.808485 systemd-logind[2015]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:03:54.810398 systemd-logind[2015]: Removed session 3. 
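The Accepted publickey / session opened / Connection closed triples repeat for every short-lived session in this stretch of the log, which makes them easy to audit mechanically. A sketch that extracts logins from journal text in exactly the format shown above; the regex is fitted to these lines, not to every sshd build.

```python
import re

LOGIN = re.compile(
    r"sshd\[(?P<pid>\d+)\]: Accepted publickey for (?P<user>\S+) "
    r"from (?P<ip>\S+) port (?P<port>\d+) ssh2: (?P<key>\S+ \S+)"
)

def logins(journal_text: str):
    """Yield (user, source ip, port, key fingerprint) per accepted login."""
    for m in LOGIN.finditer(journal_text):
        yield m["user"], m["ip"], int(m["port"]), m["key"]

sample = ("Feb 13 19:03:53.608328 sshd[2307]: Accepted publickey for core "
          "from 147.75.109.163 port 55280 ssh2: "
          "RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU")
for rec in logins(sample):
    print(rec)
```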
Feb 13 19:03:54.827110 systemd[1]: Started sshd@3-172.31.18.68:22-147.75.109.163:55310.service - OpenSSH per-connection server daemon (147.75.109.163:55310). Feb 13 19:03:55.014933 sshd[2341]: Accepted publickey for core from 147.75.109.163 port 55310 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:55.017322 sshd-session[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:55.025562 systemd-logind[2015]: New session 4 of user core. Feb 13 19:03:55.032137 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:03:55.162149 sshd[2344]: Connection closed by 147.75.109.163 port 55310 Feb 13 19:03:55.162998 sshd-session[2341]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:55.169157 systemd[1]: sshd@3-172.31.18.68:22-147.75.109.163:55310.service: Deactivated successfully. Feb 13 19:03:55.175587 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:03:55.177183 systemd-logind[2015]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:03:55.178893 systemd-logind[2015]: Removed session 4. Feb 13 19:03:55.196085 systemd[1]: Started sshd@4-172.31.18.68:22-147.75.109.163:55320.service - OpenSSH per-connection server daemon (147.75.109.163:55320). Feb 13 19:03:55.372031 sshd[2349]: Accepted publickey for core from 147.75.109.163 port 55320 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:55.374711 sshd-session[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:55.382021 systemd-logind[2015]: New session 5 of user core. Feb 13 19:03:55.399084 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:03:55.519982 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:03:55.520582 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:55.539780 sudo[2353]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:55.564667 sshd[2352]: Connection closed by 147.75.109.163 port 55320 Feb 13 19:03:55.563384 sshd-session[2349]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:55.569687 systemd[1]: sshd@4-172.31.18.68:22-147.75.109.163:55320.service: Deactivated successfully. Feb 13 19:03:55.571026 systemd-logind[2015]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:03:55.577379 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:03:55.579369 systemd-logind[2015]: Removed session 5. Feb 13 19:03:55.596054 systemd[1]: Started sshd@5-172.31.18.68:22-147.75.109.163:55330.service - OpenSSH per-connection server daemon (147.75.109.163:55330). Feb 13 19:03:55.769163 sshd[2358]: Accepted publickey for core from 147.75.109.163 port 55330 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:55.771663 sshd-session[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:55.780055 systemd-logind[2015]: New session 6 of user core. Feb 13 19:03:55.791257 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:03:55.895596 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:03:55.896268 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:55.903114 sudo[2363]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:55.913255 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:03:55.913896 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:55.935267 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:03:55.996783 augenrules[2385]: No rules Feb 13 19:03:55.998874 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:03:55.999428 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:03:56.003321 sudo[2362]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:56.026759 sshd[2361]: Connection closed by 147.75.109.163 port 55330 Feb 13 19:03:56.028525 sshd-session[2358]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:56.035416 systemd[1]: sshd@5-172.31.18.68:22-147.75.109.163:55330.service: Deactivated successfully. Feb 13 19:03:56.040252 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:03:56.041387 systemd-logind[2015]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:03:56.043000 systemd-logind[2015]: Removed session 6. Feb 13 19:03:56.056260 systemd[1]: Started sshd@6-172.31.18.68:22-147.75.109.163:55342.service - OpenSSH per-connection server daemon (147.75.109.163:55342). Feb 13 19:03:56.243005 sshd[2394]: Accepted publickey for core from 147.75.109.163 port 55342 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:56.245325 sshd-session[2394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:56.254009 systemd-logind[2015]: New session 7 of user core. Feb 13 19:03:56.264144 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:03:56.368965 sudo[2398]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:03:56.369594 sudo[2398]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:56.882101 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:03:56.884807 (dockerd)[2416]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:03:57.222056 dockerd[2416]: time="2025-02-13T19:03:57.221908704Z" level=info msg="Starting up" Feb 13 19:03:57.336057 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport583203157-merged.mount: Deactivated successfully. Feb 13 19:03:57.571066 dockerd[2416]: time="2025-02-13T19:03:57.570918277Z" level=info msg="Loading containers: start." Feb 13 19:03:57.815691 kernel: Initializing XFRM netlink socket Feb 13 19:03:57.849203 (udev-worker)[2438]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:57.945450 systemd-networkd[1601]: docker0: Link UP Feb 13 19:03:57.979037 dockerd[2416]: time="2025-02-13T19:03:57.978981051Z" level=info msg="Loading containers: done." 
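The augenrules "No rules" result follows directly from the sudo rm a few entries earlier: augenrules assembles the active ruleset by concatenating /etc/audit/rules.d/*.rules in lexical order, and that directory has just been emptied. A simplified model of the assembly step (the real script also treats options such as -D and -b specially):

```python
from pathlib import Path

def merged_audit_rules(rules_d: str = "/etc/audit/rules.d") -> str:
    """Approximate augenrules' merge: concatenate *.rules lexically."""
    parts = [f.read_text() for f in sorted(Path(rules_d).glob("*.rules"))]
    return "\n".join(parts)

rules = merged_audit_rules()
# With 80-selinux.rules and 99-default.rules removed, this prints the
# same "No rules" outcome the log records.
print(rules if rules.strip() else "No rules")
```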
Feb 13 19:03:58.005021 dockerd[2416]: time="2025-02-13T19:03:58.004846848Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:03:58.005240 dockerd[2416]: time="2025-02-13T19:03:58.005025888Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:03:58.005240 dockerd[2416]: time="2025-02-13T19:03:58.005217768Z" level=info msg="Daemon has completed initialization" Feb 13 19:03:58.058158 dockerd[2416]: time="2025-02-13T19:03:58.057943632Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:03:58.058773 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:03:59.261029 containerd[2049]: time="2025-02-13T19:03:59.260610062Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:03:59.450240 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:03:59.460986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:59.768079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:59.787329 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:03:59.882885 kubelet[2620]: E0213 19:03:59.882800 2620 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:03:59.891529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:03:59.892120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:03:59.905188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573728491.mount: Deactivated successfully. 
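"API listen on /run/docker.sock" means the Engine API is reachable over HTTP on a Unix socket. A stdlib-only sketch of querying its /version endpoint over that socket; in practice the docker CLI or SDK does this for you.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that dials a Unix socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
info = json.loads(conn.getresponse().read())
print(info["Version"], info["ApiVersion"])  # e.g. 27.2.1, per the log
```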
Feb 13 19:04:01.347307 containerd[2049]: time="2025-02-13T19:04:01.347241652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:01.348946 containerd[2049]: time="2025-02-13T19:04:01.348819064Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 19:04:01.351222 containerd[2049]: time="2025-02-13T19:04:01.351135952Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:01.359248 containerd[2049]: time="2025-02-13T19:04:01.359167492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:01.361540 containerd[2049]: time="2025-02-13T19:04:01.361267900Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.100555574s" Feb 13 19:04:01.361540 containerd[2049]: time="2025-02-13T19:04:01.361329088Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:04:01.400206 containerd[2049]: time="2025-02-13T19:04:01.399910756Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 19:04:02.941921 containerd[2049]: time="2025-02-13T19:04:02.941021588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:02.943211 containerd[2049]: time="2025-02-13T19:04:02.943108784Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 19:04:02.944746 containerd[2049]: time="2025-02-13T19:04:02.944676200Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:02.950591 containerd[2049]: time="2025-02-13T19:04:02.950494208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:02.953053 containerd[2049]: time="2025-02-13T19:04:02.952740740Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.5527768s" Feb 13 19:04:02.953053 containerd[2049]: time="2025-02-13T19:04:02.952798412Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:04:02.995318 containerd[2049]: time="2025-02-13T19:04:02.995248676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 19:04:04.190692 containerd[2049]: time="2025-02-13T19:04:04.190088466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:04.192220 containerd[2049]: time="2025-02-13T19:04:04.192135966Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 19:04:04.193886 containerd[2049]: time="2025-02-13T19:04:04.193814346Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:04.199250 containerd[2049]: time="2025-02-13T19:04:04.199199802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:04.201673 containerd[2049]: time="2025-02-13T19:04:04.201479442Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.206166878s" Feb 13 19:04:04.201673 containerd[2049]: time="2025-02-13T19:04:04.201529098Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:04:04.240559 containerd[2049]: time="2025-02-13T19:04:04.240491707Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:04:05.468604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2898273764.mount: Deactivated successfully.
Feb 13 19:04:06.009683 containerd[2049]: time="2025-02-13T19:04:06.008944423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:06.011537 containerd[2049]: time="2025-02-13T19:04:06.011453659Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 19:04:06.012854 containerd[2049]: time="2025-02-13T19:04:06.012767287Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:06.016238 containerd[2049]: time="2025-02-13T19:04:06.016109935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:06.017951 containerd[2049]: time="2025-02-13T19:04:06.017735623Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.77717296s" Feb 13 19:04:06.017951 containerd[2049]: time="2025-02-13T19:04:06.017789827Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:04:06.062983 containerd[2049]: time="2025-02-13T19:04:06.062917928Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:04:06.583275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118177249.mount: Deactivated successfully. 
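Each pull above pairs a byte count (from the "stop pulling" entries) with a wall-clock duration, so the effective registry throughput is a one-line division; the kube-proxy figures work out to roughly 13.8 MiB/s. The numbers below are copied from the log:

```python
# Effective pull throughput from the figures containerd logged above.
pulls = {
    "kube-apiserver:v1.30.10": (29865207, 2.100555574),
    "kube-controller-manager:v1.30.10": (26898594, 1.5527768),
    "kube-scheduler:v1.30.10": (16164934, 1.206166878),
    "kube-proxy:v1.30.10": (25663370, 1.77717296),
}
for image, (nbytes, seconds) in pulls.items():
    mib_per_s = nbytes / seconds / (1024 * 1024)
    print(f"{image}: {mib_per_s:.1f} MiB/s")
```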
Feb 13 19:04:08.089643 containerd[2049]: time="2025-02-13T19:04:08.089557654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:08.091768 containerd[2049]: time="2025-02-13T19:04:08.091698394Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:04:08.093060 containerd[2049]: time="2025-02-13T19:04:08.092971354Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:08.100306 containerd[2049]: time="2025-02-13T19:04:08.100226038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:08.102705 containerd[2049]: time="2025-02-13T19:04:08.102405742Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.039421406s" Feb 13 19:04:08.102705 containerd[2049]: time="2025-02-13T19:04:08.102464914Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:04:08.141296 containerd[2049]: time="2025-02-13T19:04:08.141251614Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:04:08.690315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451845596.mount: Deactivated successfully. 
Feb 13 19:04:08.701735 containerd[2049]: time="2025-02-13T19:04:08.701377765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:08.703013 containerd[2049]: time="2025-02-13T19:04:08.702937681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 19:04:08.704704 containerd[2049]: time="2025-02-13T19:04:08.704604601Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:08.714693 containerd[2049]: time="2025-02-13T19:04:08.712703089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:08.715746 containerd[2049]: time="2025-02-13T19:04:08.715689433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 574.119951ms" Feb 13 19:04:08.716968 containerd[2049]: time="2025-02-13T19:04:08.716062225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:04:08.755485 containerd[2049]: time="2025-02-13T19:04:08.755420425Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:04:09.292753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379931106.mount: Deactivated successfully. Feb 13 19:04:09.953050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:04:09.965297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:11.622882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:11.636442 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:04:11.736248 kubelet[2811]: E0213 19:04:11.736089 2811 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:04:11.744978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:04:11.745420 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:04:12.890191 containerd[2049]: time="2025-02-13T19:04:12.890132934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:12.892234 containerd[2049]: time="2025-02-13T19:04:12.892135938Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 19:04:12.893662 containerd[2049]: time="2025-02-13T19:04:12.892966734Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:12.899093 containerd[2049]: time="2025-02-13T19:04:12.899002350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:12.901662 containerd[2049]: time="2025-02-13T19:04:12.901522134Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.146041097s" Feb 13 19:04:12.901662 containerd[2049]: time="2025-02-13T19:04:12.901574310Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:04:16.933591 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:04:21.478295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:21.495117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:21.534178 systemd[1]: Reloading requested from client PID 2911 ('systemctl') (unit session-7.scope)... Feb 13 19:04:21.534392 systemd[1]: Reloading... Feb 13 19:04:21.713675 zram_generator::config[2951]: No configuration found. Feb 13 19:04:21.981698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:04:22.141362 systemd[1]: Reloading finished in 606 ms. Feb 13 19:04:22.234390 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:04:22.234610 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:04:22.236331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:22.252264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:22.556068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:22.574397 (kubelet)[3026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:04:22.654772 kubelet[3026]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:04:22.654772 kubelet[3026]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
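Both deprecation warnings later in this restart point at the same fix: move the flag values into the file named by --config. A hedged sketch of that migration for the runtime endpoint; containerRuntimeEndpoint is a kubelet.config.k8s.io/v1beta1 field in recent kubelets, but verify the exact field names against the API docs for the version in use before relying on this.

```python
from pathlib import Path

# Sketch of the config file kubeadm would normally manage; not a
# verified drop-in. The endpoint matches the containerd socket above.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces the deprecated --container-runtime-endpoint flag
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""

Path("/var/lib/kubelet/config.yaml").write_text(KUBELET_CONFIG)
```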
Feb 13 19:04:22.654772 kubelet[3026]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:04:22.656656 kubelet[3026]: I0213 19:04:22.656536 3026 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:04:24.041236 kubelet[3026]: I0213 19:04:24.041173 3026 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:04:24.041236 kubelet[3026]: I0213 19:04:24.041220 3026 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:04:24.042011 kubelet[3026]: I0213 19:04:24.041590 3026 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:04:24.070806 kubelet[3026]: I0213 19:04:24.070567 3026 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:04:24.071969 kubelet[3026]: E0213 19:04:24.071917 3026 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:24.084672 kubelet[3026]: I0213 19:04:24.084377 3026 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:04:24.085170 kubelet[3026]: I0213 19:04:24.085123 3026 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:04:24.085483 kubelet[3026]: I0213 19:04:24.085170 3026 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-68","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:04:24.085673 kubelet[3026]: I0213 19:04:24.085495 3026 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:04:24.085673 kubelet[3026]: I0213 19:04:24.085522 3026 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:04:24.085799 kubelet[3026]: I0213 19:04:24.085789 3026 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:24.087452 kubelet[3026]: I0213 19:04:24.087419 3026 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:04:24.087452 kubelet[3026]: I0213 19:04:24.087456 3026 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:04:24.087615 kubelet[3026]: I0213 19:04:24.087531 3026 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:04:24.087615 kubelet[3026]: I0213 19:04:24.087601 3026 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:04:24.091675 kubelet[3026]: I0213 19:04:24.090696 3026 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:04:24.091675 kubelet[3026]: I0213 19:04:24.091102 3026 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:04:24.091675 kubelet[3026]: W0213 19:04:24.091193 3026 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:04:24.093165 kubelet[3026]: W0213 19:04:24.093080 3026 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-68&limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:24.093345 kubelet[3026]: E0213 19:04:24.093325 3026 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-68&limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:24.093699 kubelet[3026]: W0213 19:04:24.093604 3026 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:24.093829 kubelet[3026]: E0213 19:04:24.093809 3026 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:24.095745 kubelet[3026]: I0213 19:04:24.095711 3026 server.go:1264] "Started kubelet" Feb 13 19:04:24.104405 kubelet[3026]: I0213 19:04:24.104359 3026 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:04:24.108371 kubelet[3026]: I0213 19:04:24.108316 3026 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:04:24.113756 kubelet[3026]: I0213 19:04:24.113716 3026 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:04:24.117098 kubelet[3026]: I0213 19:04:24.115599 3026 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:04:24.117098 kubelet[3026]: I0213 19:04:24.115998 3026 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:04:24.123675 kubelet[3026]: I0213 19:04:24.123621 3026 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 19:04:24.129261 kubelet[3026]: I0213 19:04:24.129224 3026 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:04:24.129545 kubelet[3026]: I0213 19:04:24.129526 3026 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:04:24.137383 kubelet[3026]: I0213 19:04:24.135260 3026 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:04:24.140102 kubelet[3026]: I0213 19:04:24.140062 3026 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:04:24.140536 kubelet[3026]: I0213 19:04:24.140504 3026 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:04:24.142113 kubelet[3026]: E0213 19:04:24.142025 3026 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-68?timeout=10s\": dial tcp 172.31.18.68:6443: connect: connection refused" interval="200ms" Feb 13 19:04:24.143877 kubelet[3026]: W0213 19:04:24.143645 3026 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:24.144116 kubelet[3026]: E0213 19:04:24.144090 3026 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:24.144467 kubelet[3026]: E0213 19:04:24.144287 3026 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.68:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.68:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-68.1823d9e643a41169 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-68,UID:ip-172-31-18-68,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-68,},FirstTimestamp:2025-02-13 19:04:24.095674729 +0000 UTC m=+1.514932136,LastTimestamp:2025-02-13 19:04:24.095674729 +0000 UTC m=+1.514932136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-68,}" Feb 13 19:04:24.145823 kubelet[3026]: I0213 19:04:24.145771 3026 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Feb 13 19:04:24.145985 kubelet[3026]: I0213 19:04:24.145868 3026 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:04:24.145985 kubelet[3026]: I0213 19:04:24.145896 3026 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:04:24.146081 kubelet[3026]: E0213 19:04:24.145964 3026 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:04:24.147882 kubelet[3026]: W0213 19:04:24.147254 3026 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:24.147882 kubelet[3026]: E0213 19:04:24.147314 3026 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:24.148760 kubelet[3026]: I0213 19:04:24.148721 3026 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:04:24.154109 kubelet[3026]: E0213 19:04:24.154043 3026 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:04:24.197378 kubelet[3026]: I0213 19:04:24.197341 3026 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:04:24.197571 kubelet[3026]: I0213 19:04:24.197550 3026 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:04:24.197844 kubelet[3026]: I0213 19:04:24.197825 3026 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:24.199823 kubelet[3026]: I0213 19:04:24.199799 3026 policy_none.go:49] "None policy: Start" Feb 13 19:04:24.201049 kubelet[3026]: I0213 19:04:24.201024 3026 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:04:24.201702 kubelet[3026]: I0213 19:04:24.201252 3026 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:04:24.211071 kubelet[3026]: I0213 19:04:24.211015 3026 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:04:24.211513 kubelet[3026]: I0213 19:04:24.211464 3026 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:04:24.211766 kubelet[3026]: I0213 19:04:24.211746 3026 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:04:24.221620 kubelet[3026]: E0213 19:04:24.221583 3026 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-68\" not found" Feb 13 19:04:24.227623 kubelet[3026]: I0213 19:04:24.227127 3026 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-68" Feb 13 19:04:24.227623 kubelet[3026]: E0213 19:04:24.227567 3026 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.68:6443/api/v1/nodes\": dial tcp 172.31.18.68:6443: connect: connection refused" node="ip-172-31-18-68" Feb 13 19:04:24.247180 kubelet[3026]: I0213 19:04:24.247066 3026 topology_manager.go:215] "Topology Admit Handler" podUID="46fbd11fd210f645b64cdad3bb93a94f" podNamespace="kube-system" 
podName="kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:24.249987 kubelet[3026]: I0213 19:04:24.249615 3026 topology_manager.go:215] "Topology Admit Handler" podUID="a72a4ff42bcb6f09e82b49b26ce5a4f4" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-68" Feb 13 19:04:24.252393 kubelet[3026]: I0213 19:04:24.252353 3026 topology_manager.go:215] "Topology Admit Handler" podUID="85447c4e29c7d21d7ff909a009505487" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-68" Feb 13 19:04:24.343084 kubelet[3026]: E0213 19:04:24.342913 3026 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-68?timeout=10s\": dial tcp 172.31.18.68:6443: connect: connection refused" interval="400ms" Feb 13 19:04:24.430158 kubelet[3026]: I0213 19:04:24.430058 3026 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-68" Feb 13 19:04:24.430757 kubelet[3026]: I0213 19:04:24.430486 3026 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a72a4ff42bcb6f09e82b49b26ce5a4f4-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-68\" (UID: \"a72a4ff42bcb6f09e82b49b26ce5a4f4\") " pod="kube-system/kube-scheduler-ip-172-31-18-68" Feb 13 19:04:24.430757 kubelet[3026]: E0213 19:04:24.430535 3026 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.68:6443/api/v1/nodes\": dial tcp 172.31.18.68:6443: connect: connection refused" node="ip-172-31-18-68" Feb 13 19:04:24.430757 kubelet[3026]: I0213 19:04:24.430545 3026 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85447c4e29c7d21d7ff909a009505487-ca-certs\") pod \"kube-apiserver-ip-172-31-18-68\" (UID: \"85447c4e29c7d21d7ff909a009505487\") " pod="kube-system/kube-apiserver-ip-172-31-18-68" Feb 13 19:04:24.430757 kubelet[3026]: I0213 19:04:24.430604 3026 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85447c4e29c7d21d7ff909a009505487-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-68\" (UID: \"85447c4e29c7d21d7ff909a009505487\") " pod="kube-system/kube-apiserver-ip-172-31-18-68" Feb 13 19:04:24.430757 kubelet[3026]: I0213 19:04:24.430676 3026 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46fbd11fd210f645b64cdad3bb93a94f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:24.432214 kubelet[3026]: I0213 19:04:24.430717 3026 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/46fbd11fd210f645b64cdad3bb93a94f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:24.432214 kubelet[3026]: I0213 19:04:24.431936 3026 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46fbd11fd210f645b64cdad3bb93a94f-kubeconfig\") pod 
\"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:24.432214 kubelet[3026]: I0213 19:04:24.432002 3026 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46fbd11fd210f645b64cdad3bb93a94f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:24.432214 kubelet[3026]: I0213 19:04:24.432058 3026 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46fbd11fd210f645b64cdad3bb93a94f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:24.432214 kubelet[3026]: I0213 19:04:24.432111 3026 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85447c4e29c7d21d7ff909a009505487-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-68\" (UID: \"85447c4e29c7d21d7ff909a009505487\") " pod="kube-system/kube-apiserver-ip-172-31-18-68" Feb 13 19:04:24.561699 containerd[2049]: time="2025-02-13T19:04:24.561316479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-68,Uid:46fbd11fd210f645b64cdad3bb93a94f,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:24.567906 containerd[2049]: time="2025-02-13T19:04:24.567834244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-68,Uid:a72a4ff42bcb6f09e82b49b26ce5a4f4,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:24.572340 containerd[2049]: time="2025-02-13T19:04:24.571978936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-68,Uid:85447c4e29c7d21d7ff909a009505487,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:24.707473 kubelet[3026]: E0213 19:04:24.707281 3026 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.68:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.68:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-68.1823d9e643a41169 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-68,UID:ip-172-31-18-68,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-68,},FirstTimestamp:2025-02-13 19:04:24.095674729 +0000 UTC m=+1.514932136,LastTimestamp:2025-02-13 19:04:24.095674729 +0000 UTC m=+1.514932136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-68,}" Feb 13 19:04:24.743953 kubelet[3026]: E0213 19:04:24.743849 3026 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-68?timeout=10s\": dial tcp 172.31.18.68:6443: connect: connection refused" interval="800ms" Feb 13 19:04:24.833505 kubelet[3026]: I0213 19:04:24.832987 3026 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-68" Feb 
13 19:04:24.833505 kubelet[3026]: E0213 19:04:24.833435 3026 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.68:6443/api/v1/nodes\": dial tcp 172.31.18.68:6443: connect: connection refused" node="ip-172-31-18-68" Feb 13 19:04:25.077752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415824425.mount: Deactivated successfully. Feb 13 19:04:25.085668 containerd[2049]: time="2025-02-13T19:04:25.085584086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:25.100513 containerd[2049]: time="2025-02-13T19:04:25.098028002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:04:25.100513 containerd[2049]: time="2025-02-13T19:04:25.099192122Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:25.102796 containerd[2049]: time="2025-02-13T19:04:25.102120686Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:25.102796 containerd[2049]: time="2025-02-13T19:04:25.102529742Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:04:25.104247 containerd[2049]: time="2025-02-13T19:04:25.104026406Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:04:25.104247 containerd[2049]: time="2025-02-13T19:04:25.104176106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:25.111186 containerd[2049]: time="2025-02-13T19:04:25.111120386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:25.113531 containerd[2049]: time="2025-02-13T19:04:25.113127374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.184314ms" Feb 13 19:04:25.115838 containerd[2049]: time="2025-02-13T19:04:25.115452182Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.022447ms" Feb 13 19:04:25.120257 containerd[2049]: time="2025-02-13T19:04:25.120202250Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.119754ms" Feb 13 19:04:25.224305 kubelet[3026]: W0213 19:04:25.224174 3026 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:25.224305 kubelet[3026]: E0213 19:04:25.224269 3026 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:25.323411 containerd[2049]: time="2025-02-13T19:04:25.322876035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:25.323411 containerd[2049]: time="2025-02-13T19:04:25.322996095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:25.323411 containerd[2049]: time="2025-02-13T19:04:25.323032239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:25.323411 containerd[2049]: time="2025-02-13T19:04:25.323230815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:25.328036 containerd[2049]: time="2025-02-13T19:04:25.326758527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:25.328036 containerd[2049]: time="2025-02-13T19:04:25.327862251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:25.328381 containerd[2049]: time="2025-02-13T19:04:25.327905979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:25.328381 containerd[2049]: time="2025-02-13T19:04:25.328125663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:25.335389 containerd[2049]: time="2025-02-13T19:04:25.334963947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:25.335389 containerd[2049]: time="2025-02-13T19:04:25.335079819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:25.335389 containerd[2049]: time="2025-02-13T19:04:25.335106183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:25.336119 containerd[2049]: time="2025-02-13T19:04:25.335393835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:25.497903 containerd[2049]: time="2025-02-13T19:04:25.497841244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-68,Uid:a72a4ff42bcb6f09e82b49b26ce5a4f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8f05a79e15f7c50b7028e42eee086a07e67a5134c807cf63da7c37369342ddc\"" Feb 13 19:04:25.505837 containerd[2049]: time="2025-02-13T19:04:25.505743004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-68,Uid:46fbd11fd210f645b64cdad3bb93a94f,Namespace:kube-system,Attempt:0,} returns sandbox id \"72c871d664cd1a929042c00bb3aa2d39afe885936555ccf7a615b80a06ac1cb9\"" Feb 13 19:04:25.513562 containerd[2049]: time="2025-02-13T19:04:25.513509632Z" level=info msg="CreateContainer within sandbox \"72c871d664cd1a929042c00bb3aa2d39afe885936555ccf7a615b80a06ac1cb9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:04:25.514437 containerd[2049]: time="2025-02-13T19:04:25.514389184Z" level=info msg="CreateContainer within sandbox \"a8f05a79e15f7c50b7028e42eee086a07e67a5134c807cf63da7c37369342ddc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:04:25.521978 containerd[2049]: time="2025-02-13T19:04:25.521124748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-68,Uid:85447c4e29c7d21d7ff909a009505487,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f5028eb567fa88ebac2f7b400f575e6741724bedf94b1b593cab4cef5065b73\"" Feb 13 19:04:25.529385 containerd[2049]: time="2025-02-13T19:04:25.529329508Z" level=info msg="CreateContainer within sandbox \"2f5028eb567fa88ebac2f7b400f575e6741724bedf94b1b593cab4cef5065b73\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:04:25.545182 kubelet[3026]: E0213 19:04:25.544981 3026 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-68?timeout=10s\": dial tcp 172.31.18.68:6443: connect: connection refused" interval="1.6s" Feb 13 19:04:25.545182 kubelet[3026]: W0213 19:04:25.545044 3026 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-68&limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:25.545182 kubelet[3026]: E0213 19:04:25.545136 3026 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-68&limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:25.552563 containerd[2049]: time="2025-02-13T19:04:25.552498028Z" level=info msg="CreateContainer within sandbox \"a8f05a79e15f7c50b7028e42eee086a07e67a5134c807cf63da7c37369342ddc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4022cd7327f9b64e17484545dbf01e27616a21e8deb6fcb50ac36e57d1f76e9a\"" Feb 13 19:04:25.553593 containerd[2049]: time="2025-02-13T19:04:25.553544272Z" level=info msg="StartContainer for \"4022cd7327f9b64e17484545dbf01e27616a21e8deb6fcb50ac36e57d1f76e9a\"" Feb 13 19:04:25.560391 containerd[2049]: time="2025-02-13T19:04:25.560211184Z" level=info msg="CreateContainer within sandbox 
\"72c871d664cd1a929042c00bb3aa2d39afe885936555ccf7a615b80a06ac1cb9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0d1aea0a916fa020b6727012feaec830ca678b83ce322144186e2ef965c87d9\"" Feb 13 19:04:25.560901 containerd[2049]: time="2025-02-13T19:04:25.560841940Z" level=info msg="CreateContainer within sandbox \"2f5028eb567fa88ebac2f7b400f575e6741724bedf94b1b593cab4cef5065b73\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"49031da61c31c37f306285b65f8cc7f083e9452101c1e51ca1fa0b8f12a84446\"" Feb 13 19:04:25.560991 containerd[2049]: time="2025-02-13T19:04:25.560863864Z" level=info msg="StartContainer for \"f0d1aea0a916fa020b6727012feaec830ca678b83ce322144186e2ef965c87d9\"" Feb 13 19:04:25.561854 containerd[2049]: time="2025-02-13T19:04:25.561683956Z" level=info msg="StartContainer for \"49031da61c31c37f306285b65f8cc7f083e9452101c1e51ca1fa0b8f12a84446\"" Feb 13 19:04:25.613573 kubelet[3026]: W0213 19:04:25.613160 3026 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:25.613573 kubelet[3026]: E0213 19:04:25.613263 3026 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:25.639278 kubelet[3026]: I0213 19:04:25.639118 3026 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-68" Feb 13 19:04:25.640689 kubelet[3026]: E0213 19:04:25.639597 3026 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.68:6443/api/v1/nodes\": dial tcp 172.31.18.68:6443: connect: connection refused" node="ip-172-31-18-68" Feb 13 19:04:25.680322 kubelet[3026]: W0213 19:04:25.680235 3026 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:25.681019 kubelet[3026]: E0213 19:04:25.680331 3026 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.68:6443: connect: connection refused Feb 13 19:04:25.790273 containerd[2049]: time="2025-02-13T19:04:25.789431790Z" level=info msg="StartContainer for \"f0d1aea0a916fa020b6727012feaec830ca678b83ce322144186e2ef965c87d9\" returns successfully" Feb 13 19:04:25.794508 containerd[2049]: time="2025-02-13T19:04:25.793776342Z" level=info msg="StartContainer for \"49031da61c31c37f306285b65f8cc7f083e9452101c1e51ca1fa0b8f12a84446\" returns successfully" Feb 13 19:04:25.839052 containerd[2049]: time="2025-02-13T19:04:25.838966506Z" level=info msg="StartContainer for \"4022cd7327f9b64e17484545dbf01e27616a21e8deb6fcb50ac36e57d1f76e9a\" returns successfully" Feb 13 19:04:27.246698 kubelet[3026]: I0213 19:04:27.245214 3026 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-68" Feb 13 19:04:29.461082 kubelet[3026]: E0213 19:04:29.460966 3026 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-68\" not found" 
node="ip-172-31-18-68" Feb 13 19:04:29.505566 kubelet[3026]: I0213 19:04:29.505458 3026 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-68" Feb 13 19:04:30.110965 kubelet[3026]: I0213 19:04:30.109147 3026 apiserver.go:52] "Watching apiserver" Feb 13 19:04:30.130895 kubelet[3026]: I0213 19:04:30.130847 3026 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:04:30.943905 update_engine[2017]: I20250213 19:04:30.943811 2017 update_attempter.cc:509] Updating boot flags... Feb 13 19:04:31.021754 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3317) Feb 13 19:04:31.431658 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3320) Feb 13 19:04:31.574916 systemd[1]: Reloading requested from client PID 3469 ('systemctl') (unit session-7.scope)... Feb 13 19:04:31.575146 systemd[1]: Reloading... Feb 13 19:04:31.776318 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3320) Feb 13 19:04:31.875719 zram_generator::config[3557]: No configuration found. Feb 13 19:04:32.198771 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:04:32.403260 systemd[1]: Reloading finished in 827 ms. Feb 13 19:04:32.555278 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:32.556257 kubelet[3026]: I0213 19:04:32.555619 3026 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:04:32.584275 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:04:32.585018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:32.603239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:32.913993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:32.925171 (kubelet)[3681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:04:33.020685 kubelet[3681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:04:33.020685 kubelet[3681]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:04:33.020685 kubelet[3681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:04:33.020685 kubelet[3681]: I0213 19:04:33.020608 3681 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:04:33.032585 kubelet[3681]: I0213 19:04:33.028768 3681 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:04:33.032585 kubelet[3681]: I0213 19:04:33.028813 3681 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:04:33.032585 kubelet[3681]: I0213 19:04:33.029121 3681 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:04:33.032585 kubelet[3681]: I0213 19:04:33.031910 3681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:04:33.041458 kubelet[3681]: I0213 19:04:33.041239 3681 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:04:33.059674 kubelet[3681]: I0213 19:04:33.058970 3681 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:04:33.060470 kubelet[3681]: I0213 19:04:33.060369 3681 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:04:33.061082 kubelet[3681]: I0213 19:04:33.060587 3681 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-68","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:04:33.061640 kubelet[3681]: I0213 19:04:33.061595 3681 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:04:33.061814 kubelet[3681]: I0213 19:04:33.061793 3681 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:04:33.061976 kubelet[3681]: I0213 19:04:33.061956 3681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:33.062241 kubelet[3681]: I0213 19:04:33.062221 3681 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:04:33.062352 kubelet[3681]: I0213 19:04:33.062332 3681 kubelet.go:301] "Adding static pod 
path" path="/etc/kubernetes/manifests" Feb 13 19:04:33.062497 kubelet[3681]: I0213 19:04:33.062478 3681 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:04:33.062758 kubelet[3681]: I0213 19:04:33.062736 3681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:04:33.070493 kubelet[3681]: I0213 19:04:33.067378 3681 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:04:33.070493 kubelet[3681]: I0213 19:04:33.067708 3681 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:04:33.070493 kubelet[3681]: I0213 19:04:33.068357 3681 server.go:1264] "Started kubelet" Feb 13 19:04:33.075872 kubelet[3681]: I0213 19:04:33.075821 3681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:04:33.084511 sudo[3694]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:04:33.085762 sudo[3694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:04:33.095028 kubelet[3681]: I0213 19:04:33.094934 3681 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:04:33.098418 kubelet[3681]: I0213 19:04:33.098367 3681 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:04:33.100682 kubelet[3681]: I0213 19:04:33.100095 3681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:04:33.100682 kubelet[3681]: I0213 19:04:33.100468 3681 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:04:33.110675 kubelet[3681]: I0213 19:04:33.106478 3681 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:04:33.110675 kubelet[3681]: I0213 19:04:33.106672 3681 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:04:33.112023 kubelet[3681]: I0213 19:04:33.111996 3681 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:04:33.114942 kubelet[3681]: I0213 19:04:33.114907 3681 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:04:33.117002 kubelet[3681]: I0213 19:04:33.116919 3681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:04:33.144705 kubelet[3681]: I0213 19:04:33.144431 3681 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:04:33.175190 kubelet[3681]: E0213 19:04:33.175053 3681 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:04:33.192680 kubelet[3681]: I0213 19:04:33.191946 3681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:04:33.200546 kubelet[3681]: I0213 19:04:33.198686 3681 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:04:33.200546 kubelet[3681]: I0213 19:04:33.198763 3681 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:04:33.200546 kubelet[3681]: I0213 19:04:33.198796 3681 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:04:33.200546 kubelet[3681]: E0213 19:04:33.198899 3681 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:04:33.251110 kubelet[3681]: E0213 19:04:33.250482 3681 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Feb 13 19:04:33.256512 kubelet[3681]: I0213 19:04:33.256264 3681 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-68" Feb 13 19:04:33.282501 kubelet[3681]: I0213 19:04:33.282450 3681 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-68" Feb 13 19:04:33.282872 kubelet[3681]: I0213 19:04:33.282570 3681 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-68" Feb 13 19:04:33.302380 kubelet[3681]: E0213 19:04:33.302072 3681 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:04:33.423721 kubelet[3681]: I0213 19:04:33.423407 3681 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:04:33.423721 kubelet[3681]: I0213 19:04:33.423444 3681 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:04:33.423721 kubelet[3681]: I0213 19:04:33.423480 3681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:33.424221 kubelet[3681]: I0213 19:04:33.424191 3681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:04:33.424384 kubelet[3681]: I0213 19:04:33.424336 3681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:04:33.424478 kubelet[3681]: I0213 19:04:33.424460 3681 policy_none.go:49] "None policy: Start" Feb 13 19:04:33.430286 kubelet[3681]: I0213 19:04:33.429367 3681 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:04:33.434667 kubelet[3681]: I0213 19:04:33.434308 3681 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:04:33.435042 kubelet[3681]: I0213 19:04:33.435002 3681 state_mem.go:75] "Updated machine memory state" Feb 13 19:04:33.438869 kubelet[3681]: I0213 19:04:33.438816 3681 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:04:33.439685 kubelet[3681]: I0213 19:04:33.439112 3681 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:04:33.446465 kubelet[3681]: I0213 19:04:33.444835 3681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:04:33.502350 kubelet[3681]: I0213 19:04:33.502286 3681 topology_manager.go:215] "Topology Admit Handler" podUID="85447c4e29c7d21d7ff909a009505487" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-68" Feb 13 19:04:33.502916 kubelet[3681]: I0213 19:04:33.502867 3681 topology_manager.go:215] "Topology Admit Handler" podUID="46fbd11fd210f645b64cdad3bb93a94f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:33.504555 kubelet[3681]: I0213 19:04:33.504095 3681 topology_manager.go:215] "Topology Admit Handler" podUID="a72a4ff42bcb6f09e82b49b26ce5a4f4" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-68" Feb 13 
19:04:33.520977 kubelet[3681]: I0213 19:04:33.519868 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85447c4e29c7d21d7ff909a009505487-ca-certs\") pod \"kube-apiserver-ip-172-31-18-68\" (UID: \"85447c4e29c7d21d7ff909a009505487\") " pod="kube-system/kube-apiserver-ip-172-31-18-68" Feb 13 19:04:33.525852 kubelet[3681]: I0213 19:04:33.521812 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85447c4e29c7d21d7ff909a009505487-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-68\" (UID: \"85447c4e29c7d21d7ff909a009505487\") " pod="kube-system/kube-apiserver-ip-172-31-18-68" Feb 13 19:04:33.526988 kubelet[3681]: I0213 19:04:33.526848 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85447c4e29c7d21d7ff909a009505487-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-68\" (UID: \"85447c4e29c7d21d7ff909a009505487\") " pod="kube-system/kube-apiserver-ip-172-31-18-68" Feb 13 19:04:33.528727 kubelet[3681]: I0213 19:04:33.528671 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46fbd11fd210f645b64cdad3bb93a94f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:33.529150 kubelet[3681]: I0213 19:04:33.529112 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a72a4ff42bcb6f09e82b49b26ce5a4f4-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-68\" (UID: \"a72a4ff42bcb6f09e82b49b26ce5a4f4\") " pod="kube-system/kube-scheduler-ip-172-31-18-68" Feb 13 19:04:33.529333 kubelet[3681]: I0213 19:04:33.529307 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46fbd11fd210f645b64cdad3bb93a94f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:33.529475 kubelet[3681]: I0213 19:04:33.529452 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/46fbd11fd210f645b64cdad3bb93a94f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:33.529614 kubelet[3681]: I0213 19:04:33.529591 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46fbd11fd210f645b64cdad3bb93a94f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68"
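Each VerifyControllerAttachedVolume entry above maps to a hostPath volume declared in a static pod manifest under /etc/kubernetes/manifests. As a sketch only, the volumes section of the kube-controller-manager manifest plausibly looks like the following; the volume names come from the log, while every host path is an assumption based on the conventional kubeadm layout and is not shown anywhere in this log:

# hypothetical excerpt from /etc/kubernetes/manifests/kube-controller-manager.yaml
volumes:
- name: ca-certs
  hostPath:
    path: /etc/ssl/certs  # assumed
    type: DirectoryOrCreate
- name: k8s-certs
  hostPath:
    path: /etc/kubernetes/pki  # consistent with the ca.crt path logged at startup
    type: DirectoryOrCreate
- name: kubeconfig
  hostPath:
    path: /etc/kubernetes/controller-manager.conf  # assumed
    type: FileOrCreate
- name: flexvolume-dir
  hostPath:
    path: /opt/libexec/kubernetes/kubelet-plugins/volume/exec  # matches the Flexvolume probe above
    type: DirectoryOrCreate
- name: usr-share-ca-certificates
  hostPath:
    path: /usr/share/ca-certificates  # assumed
    type: DirectoryOrCreate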
\"kube-controller-manager-ip-172-31-18-68\" (UID: \"46fbd11fd210f645b64cdad3bb93a94f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-68" Feb 13 19:04:34.027259 sudo[3694]: pam_unix(sudo:session): session closed for user root Feb 13 19:04:34.064880 kubelet[3681]: I0213 19:04:34.064411 3681 apiserver.go:52] "Watching apiserver" Feb 13 19:04:34.111202 kubelet[3681]: I0213 19:04:34.111118 3681 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:04:34.344602 kubelet[3681]: I0213 19:04:34.342034 3681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-68" podStartSLOduration=1.342014232 podStartE2EDuration="1.342014232s" podCreationTimestamp="2025-02-13 19:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:34.340120368 +0000 UTC m=+1.406877368" watchObservedRunningTime="2025-02-13 19:04:34.342014232 +0000 UTC m=+1.408771232" Feb 13 19:04:34.344602 kubelet[3681]: E0213 19:04:34.344435 3681 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-68\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-68" Feb 13 19:04:34.375307 kubelet[3681]: I0213 19:04:34.373604 3681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-68" podStartSLOduration=1.373428372 podStartE2EDuration="1.373428372s" podCreationTimestamp="2025-02-13 19:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:34.361469628 +0000 UTC m=+1.428226700" watchObservedRunningTime="2025-02-13 19:04:34.373428372 +0000 UTC m=+1.440185372" Feb 13 19:04:34.375307 kubelet[3681]: I0213 19:04:34.373889 3681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-68" podStartSLOduration=1.3738791639999999 podStartE2EDuration="1.373879164s" podCreationTimestamp="2025-02-13 19:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:34.37375062 +0000 UTC m=+1.440507632" watchObservedRunningTime="2025-02-13 19:04:34.373879164 +0000 UTC m=+1.440636164" Feb 13 19:04:37.886538 sudo[2398]: pam_unix(sudo:session): session closed for user root Feb 13 19:04:37.909684 sshd[2397]: Connection closed by 147.75.109.163 port 55342 Feb 13 19:04:37.910533 sshd-session[2394]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:37.919694 systemd[1]: sshd@6-172.31.18.68:22-147.75.109.163:55342.service: Deactivated successfully. Feb 13 19:04:37.928271 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:04:37.929956 systemd-logind[2015]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:04:37.932339 systemd-logind[2015]: Removed session 7. Feb 13 19:04:45.538045 kubelet[3681]: I0213 19:04:45.537955 3681 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:04:45.539684 containerd[2049]: time="2025-02-13T19:04:45.539466864Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:04:45.541000 kubelet[3681]: I0213 19:04:45.539922 3681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 19:04:46.287389 kubelet[3681]: I0213 19:04:46.287123 3681 topology_manager.go:215] "Topology Admit Handler" podUID="f09f0b1c-4974-4a9e-b7c5-019e7026f303" podNamespace="kube-system" podName="kube-proxy-b6sxg"
Feb 13 19:04:46.309652 kubelet[3681]: I0213 19:04:46.303951 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f09f0b1c-4974-4a9e-b7c5-019e7026f303-xtables-lock\") pod \"kube-proxy-b6sxg\" (UID: \"f09f0b1c-4974-4a9e-b7c5-019e7026f303\") " pod="kube-system/kube-proxy-b6sxg"
Feb 13 19:04:46.309652 kubelet[3681]: I0213 19:04:46.304057 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f09f0b1c-4974-4a9e-b7c5-019e7026f303-lib-modules\") pod \"kube-proxy-b6sxg\" (UID: \"f09f0b1c-4974-4a9e-b7c5-019e7026f303\") " pod="kube-system/kube-proxy-b6sxg"
Feb 13 19:04:46.309652 kubelet[3681]: I0213 19:04:46.304149 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f09f0b1c-4974-4a9e-b7c5-019e7026f303-kube-proxy\") pod \"kube-proxy-b6sxg\" (UID: \"f09f0b1c-4974-4a9e-b7c5-019e7026f303\") " pod="kube-system/kube-proxy-b6sxg"
Feb 13 19:04:46.309652 kubelet[3681]: I0213 19:04:46.304233 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl5q7\" (UniqueName: \"kubernetes.io/projected/f09f0b1c-4974-4a9e-b7c5-019e7026f303-kube-api-access-nl5q7\") pod \"kube-proxy-b6sxg\" (UID: \"f09f0b1c-4974-4a9e-b7c5-019e7026f303\") " pod="kube-system/kube-proxy-b6sxg"
Feb 13 19:04:46.318672 kubelet[3681]: I0213 19:04:46.314980 3681 topology_manager.go:215] "Topology Admit Handler" podUID="e905df3a-1cf7-4a84-beef-467b121db14b" podNamespace="kube-system" podName="cilium-9lkmt"
Feb 13 19:04:46.408617 kubelet[3681]: I0213 19:04:46.408471 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-run\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.408886 kubelet[3681]: I0213 19:04:46.408851 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cni-path\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.409169 kubelet[3681]: I0213 19:04:46.409144 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-lib-modules\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.409469 kubelet[3681]: I0213 19:04:46.409424 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-config-path\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.409752 kubelet[3681]: I0213 19:04:46.409616 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-host-proc-sys-kernel\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.411934 kubelet[3681]: I0213 19:04:46.411896 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svtj\" (UniqueName: \"kubernetes.io/projected/e905df3a-1cf7-4a84-beef-467b121db14b-kube-api-access-8svtj\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.412310 kubelet[3681]: I0213 19:04:46.412266 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-bpf-maps\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.412404 kubelet[3681]: I0213 19:04:46.412342 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-hostproc\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.412482 kubelet[3681]: I0213 19:04:46.412422 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-host-proc-sys-net\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.412482 kubelet[3681]: I0213 19:04:46.412459 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e905df3a-1cf7-4a84-beef-467b121db14b-hubble-tls\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.412590 kubelet[3681]: I0213 19:04:46.412498 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e905df3a-1cf7-4a84-beef-467b121db14b-clustermesh-secrets\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.412590 kubelet[3681]: I0213 19:04:46.412535 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-cgroup\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.412741 kubelet[3681]: I0213 19:04:46.412569 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-xtables-lock\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.412741 kubelet[3681]: I0213 19:04:46.412651 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-etc-cni-netd\") pod \"cilium-9lkmt\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") " pod="kube-system/cilium-9lkmt"
Feb 13 19:04:46.641186 containerd[2049]: time="2025-02-13T19:04:46.641005465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6sxg,Uid:f09f0b1c-4974-4a9e-b7c5-019e7026f303,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:46.674179 containerd[2049]: time="2025-02-13T19:04:46.674127325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lkmt,Uid:e905df3a-1cf7-4a84-beef-467b121db14b,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:46.686224 kubelet[3681]: I0213 19:04:46.686002 3681 topology_manager.go:215] "Topology Admit Handler" podUID="1afb24ac-2249-4015-9af4-b2b2b7f7a228" podNamespace="kube-system" podName="cilium-operator-599987898-l6b2l"
Feb 13 19:04:46.715802 kubelet[3681]: I0213 19:04:46.714573 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1afb24ac-2249-4015-9af4-b2b2b7f7a228-cilium-config-path\") pod \"cilium-operator-599987898-l6b2l\" (UID: \"1afb24ac-2249-4015-9af4-b2b2b7f7a228\") " pod="kube-system/cilium-operator-599987898-l6b2l"
Feb 13 19:04:46.715802 kubelet[3681]: I0213 19:04:46.714661 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjknf\" (UniqueName: \"kubernetes.io/projected/1afb24ac-2249-4015-9af4-b2b2b7f7a228-kube-api-access-hjknf\") pod \"cilium-operator-599987898-l6b2l\" (UID: \"1afb24ac-2249-4015-9af4-b2b2b7f7a228\") " pod="kube-system/cilium-operator-599987898-l6b2l"
Feb 13 19:04:46.746583 containerd[2049]: time="2025-02-13T19:04:46.745186910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:46.746583 containerd[2049]: time="2025-02-13T19:04:46.745833638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:46.746583 containerd[2049]: time="2025-02-13T19:04:46.745876190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:46.746583 containerd[2049]: time="2025-02-13T19:04:46.746088854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:46.757858 containerd[2049]: time="2025-02-13T19:04:46.757209494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:46.757858 containerd[2049]: time="2025-02-13T19:04:46.757551542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:46.757858 containerd[2049]: time="2025-02-13T19:04:46.757713242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:46.759240 containerd[2049]: time="2025-02-13T19:04:46.759072842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:46.863808 containerd[2049]: time="2025-02-13T19:04:46.863678426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6sxg,Uid:f09f0b1c-4974-4a9e-b7c5-019e7026f303,Namespace:kube-system,Attempt:0,} returns sandbox id \"9444dc80e5cff831a7ed231e9e4fe2760fe6e3802e4a800b885a1c99aeae181e\""
Feb 13 19:04:46.873118 containerd[2049]: time="2025-02-13T19:04:46.872658938Z" level=info msg="CreateContainer within sandbox \"9444dc80e5cff831a7ed231e9e4fe2760fe6e3802e4a800b885a1c99aeae181e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:04:46.876831 containerd[2049]: time="2025-02-13T19:04:46.876780830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lkmt,Uid:e905df3a-1cf7-4a84-beef-467b121db14b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\""
Feb 13 19:04:46.880691 containerd[2049]: time="2025-02-13T19:04:46.880620134Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 19:04:46.897126 containerd[2049]: time="2025-02-13T19:04:46.896964746Z" level=info msg="CreateContainer within sandbox \"9444dc80e5cff831a7ed231e9e4fe2760fe6e3802e4a800b885a1c99aeae181e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"329627935305dbab6c608791dd32ef31fea486dbbce86b79fc64d4b9fd44da1f\""
Feb 13 19:04:46.899383 containerd[2049]: time="2025-02-13T19:04:46.899314682Z" level=info msg="StartContainer for \"329627935305dbab6c608791dd32ef31fea486dbbce86b79fc64d4b9fd44da1f\""
Feb 13 19:04:47.006389 containerd[2049]: time="2025-02-13T19:04:47.006309515Z" level=info msg="StartContainer for \"329627935305dbab6c608791dd32ef31fea486dbbce86b79fc64d4b9fd44da1f\" returns successfully"
Feb 13 19:04:47.013458 containerd[2049]: time="2025-02-13T19:04:47.013168883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l6b2l,Uid:1afb24ac-2249-4015-9af4-b2b2b7f7a228,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:47.068746 containerd[2049]: time="2025-02-13T19:04:47.067866467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:47.068746 containerd[2049]: time="2025-02-13T19:04:47.067952495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:47.068746 containerd[2049]: time="2025-02-13T19:04:47.067976963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:47.068746 containerd[2049]: time="2025-02-13T19:04:47.068112959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:47.178901 containerd[2049]: time="2025-02-13T19:04:47.178461108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l6b2l,Uid:1afb24ac-2249-4015-9af4-b2b2b7f7a228,Namespace:kube-system,Attempt:0,} returns sandbox id \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\""
Feb 13 19:04:53.225027 kubelet[3681]: I0213 19:04:53.224112 3681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b6sxg" podStartSLOduration=7.224090154 podStartE2EDuration="7.224090154s" podCreationTimestamp="2025-02-13 19:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:47.406896181 +0000 UTC m=+14.473653169" watchObservedRunningTime="2025-02-13 19:04:53.224090154 +0000 UTC m=+20.290847142"
Feb 13 19:04:53.533501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303326836.mount: Deactivated successfully.
Feb 13 19:04:56.033953 containerd[2049]: time="2025-02-13T19:04:56.033875252Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:56.035864 containerd[2049]: time="2025-02-13T19:04:56.035781572Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 19:04:56.038352 containerd[2049]: time="2025-02-13T19:04:56.038275664Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:56.041870 containerd[2049]: time="2025-02-13T19:04:56.041796896Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.160904002s"
Feb 13 19:04:56.041870 containerd[2049]: time="2025-02-13T19:04:56.041864708Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 19:04:56.045002 containerd[2049]: time="2025-02-13T19:04:56.044940080Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 19:04:56.050327 containerd[2049]: time="2025-02-13T19:04:56.050155736Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:04:56.078123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889187644.mount: Deactivated successfully.
Feb 13 19:04:56.087828 containerd[2049]: time="2025-02-13T19:04:56.087765896Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\""
Feb 13 19:04:56.088854 containerd[2049]: time="2025-02-13T19:04:56.088805384Z" level=info msg="StartContainer for \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\""
Feb 13 19:04:56.187236 containerd[2049]: time="2025-02-13T19:04:56.187126749Z" level=info msg="StartContainer for \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\" returns successfully"
Feb 13 19:04:57.065589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e-rootfs.mount: Deactivated successfully.
Feb 13 19:04:57.330541 containerd[2049]: time="2025-02-13T19:04:57.330292882Z" level=info msg="shim disconnected" id=3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e namespace=k8s.io
Feb 13 19:04:57.330541 containerd[2049]: time="2025-02-13T19:04:57.330365074Z" level=warning msg="cleaning up after shim disconnected" id=3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e namespace=k8s.io
Feb 13 19:04:57.330541 containerd[2049]: time="2025-02-13T19:04:57.330384994Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:57.429701 containerd[2049]: time="2025-02-13T19:04:57.429591839Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:04:57.470069 containerd[2049]: time="2025-02-13T19:04:57.470004155Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\""
Feb 13 19:04:57.470801 containerd[2049]: time="2025-02-13T19:04:57.470735639Z" level=info msg="StartContainer for \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\""
Feb 13 19:04:57.589831 containerd[2049]: time="2025-02-13T19:04:57.589668204Z" level=info msg="StartContainer for \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\" returns successfully"
Feb 13 19:04:57.609201 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:04:57.611893 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:04:57.612106 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:04:57.624561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:04:57.665966 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:04:57.682982 containerd[2049]: time="2025-02-13T19:04:57.682770396Z" level=info msg="shim disconnected" id=09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c namespace=k8s.io
Feb 13 19:04:57.683341 containerd[2049]: time="2025-02-13T19:04:57.683065344Z" level=warning msg="cleaning up after shim disconnected" id=09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c namespace=k8s.io
Feb 13 19:04:57.683797 containerd[2049]: time="2025-02-13T19:04:57.683088348Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:58.069537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c-rootfs.mount: Deactivated successfully.
Feb 13 19:04:58.442777 containerd[2049]: time="2025-02-13T19:04:58.440731440Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:04:58.508708 containerd[2049]: time="2025-02-13T19:04:58.508119864Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\""
Feb 13 19:04:58.510740 containerd[2049]: time="2025-02-13T19:04:58.510681288Z" level=info msg="StartContainer for \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\""
Feb 13 19:04:58.655313 containerd[2049]: time="2025-02-13T19:04:58.655008637Z" level=info msg="StartContainer for \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\" returns successfully"
Feb 13 19:04:58.770352 containerd[2049]: time="2025-02-13T19:04:58.770269333Z" level=info msg="shim disconnected" id=4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f namespace=k8s.io
Feb 13 19:04:58.770352 containerd[2049]: time="2025-02-13T19:04:58.770349913Z" level=warning msg="cleaning up after shim disconnected" id=4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f namespace=k8s.io
Feb 13 19:04:58.770902 containerd[2049]: time="2025-02-13T19:04:58.770371273Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:59.003401 containerd[2049]: time="2025-02-13T19:04:59.003080267Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:59.007543 containerd[2049]: time="2025-02-13T19:04:59.007464347Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:59.007760 containerd[2049]: time="2025-02-13T19:04:59.007573283Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 19:04:59.015472 containerd[2049]: time="2025-02-13T19:04:59.015310283Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.970296499s"
Feb 13 19:04:59.015472 containerd[2049]: time="2025-02-13T19:04:59.015376859Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 19:04:59.023438 containerd[2049]: time="2025-02-13T19:04:59.022456259Z" level=info msg="CreateContainer within sandbox \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 19:04:59.035434 containerd[2049]: time="2025-02-13T19:04:59.034978511Z" level=info msg="CreateContainer within sandbox \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\""
Feb 13 19:04:59.036581 containerd[2049]: time="2025-02-13T19:04:59.035921411Z" level=info msg="StartContainer for \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\""
Feb 13 19:04:59.071424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f-rootfs.mount: Deactivated successfully.
Feb 13 19:04:59.150592 containerd[2049]: time="2025-02-13T19:04:59.150538715Z" level=info msg="StartContainer for \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\" returns successfully"
Feb 13 19:04:59.462691 containerd[2049]: time="2025-02-13T19:04:59.462070693Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:04:59.518769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941496028.mount: Deactivated successfully.
Feb 13 19:04:59.521854 containerd[2049]: time="2025-02-13T19:04:59.521779729Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\""
Feb 13 19:04:59.537674 containerd[2049]: time="2025-02-13T19:04:59.534961813Z" level=info msg="StartContainer for \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\""
Feb 13 19:04:59.625663 kubelet[3681]: I0213 19:04:59.622598 3681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-l6b2l" podStartSLOduration=1.7856393910000001 podStartE2EDuration="13.622575326s" podCreationTimestamp="2025-02-13 19:04:46 +0000 UTC" firstStartedPulling="2025-02-13 19:04:47.18123708 +0000 UTC m=+14.247994068" lastFinishedPulling="2025-02-13 19:04:59.018173015 +0000 UTC m=+26.084930003" observedRunningTime="2025-02-13 19:04:59.512331769 +0000 UTC m=+26.579088781" watchObservedRunningTime="2025-02-13 19:04:59.622575326 +0000 UTC m=+26.689332326"
Feb 13 19:04:59.730375 containerd[2049]: time="2025-02-13T19:04:59.729468038Z" level=info msg="StartContainer for \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\" returns successfully"
Feb 13 19:04:59.833970 containerd[2049]: time="2025-02-13T19:04:59.833873007Z" level=info msg="shim disconnected" id=fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6 namespace=k8s.io
Feb 13 19:04:59.834318 containerd[2049]: time="2025-02-13T19:04:59.834042279Z" level=warning msg="cleaning up after shim disconnected" id=fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6 namespace=k8s.io
Feb 13 19:04:59.834318 containerd[2049]: time="2025-02-13T19:04:59.834064419Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:05:00.485814 containerd[2049]: time="2025-02-13T19:05:00.484577402Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:05:00.538006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523837811.mount: Deactivated successfully.
Feb 13 19:05:00.564688 containerd[2049]: time="2025-02-13T19:05:00.562988282Z" level=info msg="CreateContainer within sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\""
Feb 13 19:05:00.571663 containerd[2049]: time="2025-02-13T19:05:00.567982010Z" level=info msg="StartContainer for \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\""
Feb 13 19:05:00.771479 containerd[2049]: time="2025-02-13T19:05:00.771270255Z" level=info msg="StartContainer for \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\" returns successfully"
Feb 13 19:05:00.972738 kubelet[3681]: I0213 19:05:00.972617 3681 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 19:05:01.027246 kubelet[3681]: I0213 19:05:01.026683 3681 topology_manager.go:215] "Topology Admit Handler" podUID="fdec174d-516f-4617-a628-a3e20edf2411" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h745b"
Feb 13 19:05:01.050954 kubelet[3681]: I0213 19:05:01.042395 3681 topology_manager.go:215] "Topology Admit Handler" podUID="af00f5c2-cce8-421e-bd0d-3b3774c78b93" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ppdxg"
Feb 13 19:05:01.117072 kubelet[3681]: I0213 19:05:01.116997 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdec174d-516f-4617-a628-a3e20edf2411-config-volume\") pod \"coredns-7db6d8ff4d-h745b\" (UID: \"fdec174d-516f-4617-a628-a3e20edf2411\") " pod="kube-system/coredns-7db6d8ff4d-h745b"
Feb 13 19:05:01.117249 kubelet[3681]: I0213 19:05:01.117078 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af00f5c2-cce8-421e-bd0d-3b3774c78b93-config-volume\") pod \"coredns-7db6d8ff4d-ppdxg\" (UID: \"af00f5c2-cce8-421e-bd0d-3b3774c78b93\") " pod="kube-system/coredns-7db6d8ff4d-ppdxg"
Feb 13 19:05:01.117249 kubelet[3681]: I0213 19:05:01.117130 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2lbj\" (UniqueName: \"kubernetes.io/projected/af00f5c2-cce8-421e-bd0d-3b3774c78b93-kube-api-access-r2lbj\") pod \"coredns-7db6d8ff4d-ppdxg\" (UID: \"af00f5c2-cce8-421e-bd0d-3b3774c78b93\") " pod="kube-system/coredns-7db6d8ff4d-ppdxg"
Feb 13 19:05:01.117249 kubelet[3681]: I0213 19:05:01.117173 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b75hw\" (UniqueName: \"kubernetes.io/projected/fdec174d-516f-4617-a628-a3e20edf2411-kube-api-access-b75hw\") pod \"coredns-7db6d8ff4d-h745b\" (UID: \"fdec174d-516f-4617-a628-a3e20edf2411\") " pod="kube-system/coredns-7db6d8ff4d-h745b"
Feb 13 19:05:01.354404 containerd[2049]: time="2025-02-13T19:05:01.354241130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h745b,Uid:fdec174d-516f-4617-a628-a3e20edf2411,Namespace:kube-system,Attempt:0,}"
Feb 13 19:05:01.389250 containerd[2049]: time="2025-02-13T19:05:01.388533314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppdxg,Uid:af00f5c2-cce8-421e-bd0d-3b3774c78b93,Namespace:kube-system,Attempt:0,}"
Feb 13 19:05:03.828496 systemd-networkd[1601]: cilium_host: Link UP
Feb 13 19:05:03.830476 systemd-networkd[1601]: cilium_net: Link UP
Feb 13 19:05:03.830902 (udev-worker)[4473]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:05:03.833015 systemd-networkd[1601]: cilium_net: Gained carrier
Feb 13 19:05:03.833427 systemd-networkd[1601]: cilium_host: Gained carrier
Feb 13 19:05:03.834513 (udev-worker)[4471]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:05:03.836410 systemd-networkd[1601]: cilium_net: Gained IPv6LL
Feb 13 19:05:03.837213 systemd-networkd[1601]: cilium_host: Gained IPv6LL
Feb 13 19:05:04.016798 systemd-networkd[1601]: cilium_vxlan: Link UP
Feb 13 19:05:04.018115 systemd-networkd[1601]: cilium_vxlan: Gained carrier
Feb 13 19:05:04.489672 kernel: NET: Registered PF_ALG protocol family
Feb 13 19:05:05.240937 systemd-networkd[1601]: cilium_vxlan: Gained IPv6LL
Feb 13 19:05:05.776947 systemd-networkd[1601]: lxc_health: Link UP
Feb 13 19:05:05.846260 systemd-networkd[1601]: lxc_health: Gained carrier
Feb 13 19:05:05.849904 (udev-worker)[4515]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:05:06.076712 systemd-networkd[1601]: lxccafde66c5e21: Link UP
Feb 13 19:05:06.086804 kernel: eth0: renamed from tmpbd09f
Feb 13 19:05:06.092004 systemd-networkd[1601]: lxccafde66c5e21: Gained carrier
Feb 13 19:05:06.535393 systemd-networkd[1601]: lxcee759a800581: Link UP
Feb 13 19:05:06.546845 kernel: eth0: renamed from tmp2e2db
Feb 13 19:05:06.561954 systemd-networkd[1601]: lxcee759a800581: Gained carrier
Feb 13 19:05:06.715675 kubelet[3681]: I0213 19:05:06.714963 3681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9lkmt" podStartSLOduration=11.549800975 podStartE2EDuration="20.714939153s" podCreationTimestamp="2025-02-13 19:04:46 +0000 UTC" firstStartedPulling="2025-02-13 19:04:46.879479666 +0000 UTC m=+13.946236654" lastFinishedPulling="2025-02-13 19:04:56.04461776 +0000 UTC m=+23.111374832" observedRunningTime="2025-02-13 19:05:01.576514599 +0000 UTC m=+28.643271599" watchObservedRunningTime="2025-02-13 19:05:06.714939153 +0000 UTC m=+33.781696129"
Feb 13 19:05:07.418070 systemd-networkd[1601]: lxccafde66c5e21: Gained IPv6LL
Feb 13 19:05:07.608818 systemd-networkd[1601]: lxc_health: Gained IPv6LL
Feb 13 19:05:07.992858 systemd-networkd[1601]: lxcee759a800581: Gained IPv6LL
Feb 13 19:05:10.354280 ntpd[1997]: Listen normally on 6 cilium_host 192.168.0.228:123
Feb 13 19:05:10.354417 ntpd[1997]: Listen normally on 7 cilium_net [fe80::a070:9dff:fedf:6891%4]:123
Feb 13 19:05:10.354979 ntpd[1997]: 13 Feb 19:05:10 ntpd[1997]: Listen normally on 6 cilium_host 192.168.0.228:123
Feb 13 19:05:10.354979 ntpd[1997]: 13 Feb 19:05:10 ntpd[1997]: Listen normally on 7 cilium_net [fe80::a070:9dff:fedf:6891%4]:123
Feb 13 19:05:10.354979 ntpd[1997]: 13 Feb 19:05:10 ntpd[1997]: Listen normally on 8 cilium_host [fe80::4410:2aff:fe74:ccc5%5]:123
Feb 13 19:05:10.354979 ntpd[1997]: 13 Feb 19:05:10 ntpd[1997]: Listen normally on 9 cilium_vxlan [fe80::ac12:3fff:fefd:bc36%6]:123
Feb 13 19:05:10.354979 ntpd[1997]: 13 Feb 19:05:10 ntpd[1997]: Listen normally on 10 lxc_health [fe80::4bf:98ff:fe59:ac10%8]:123
Feb 13 19:05:10.354979 ntpd[1997]: 13 Feb 19:05:10 ntpd[1997]: Listen normally on 11 lxccafde66c5e21 [fe80::d40f:c5ff:fe90:b458%10]:123
Feb 13 19:05:10.354979 ntpd[1997]: 13 Feb 19:05:10 ntpd[1997]: Listen normally on 12 lxcee759a800581 [fe80::785b:a0ff:fe3c:7af8%12]:123
Feb 13 19:05:10.354497 ntpd[1997]: Listen normally on 8 cilium_host [fe80::4410:2aff:fe74:ccc5%5]:123
Feb 13 19:05:10.354566 ntpd[1997]: Listen normally on 9 cilium_vxlan [fe80::ac12:3fff:fefd:bc36%6]:123
Feb 13 19:05:10.354665 ntpd[1997]: Listen normally on 10 lxc_health [fe80::4bf:98ff:fe59:ac10%8]:123
Feb 13 19:05:10.354746 ntpd[1997]: Listen normally on 11 lxccafde66c5e21 [fe80::d40f:c5ff:fe90:b458%10]:123
Feb 13 19:05:10.354815 ntpd[1997]: Listen normally on 12 lxcee759a800581 [fe80::785b:a0ff:fe3c:7af8%12]:123
Feb 13 19:05:14.497846 containerd[2049]: time="2025-02-13T19:05:14.497052316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:05:14.497846 containerd[2049]: time="2025-02-13T19:05:14.497202976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:05:14.497846 containerd[2049]: time="2025-02-13T19:05:14.497240380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:14.497846 containerd[2049]: time="2025-02-13T19:05:14.497419348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:14.603930 containerd[2049]: time="2025-02-13T19:05:14.603108604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:05:14.607487 containerd[2049]: time="2025-02-13T19:05:14.604458352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:05:14.607487 containerd[2049]: time="2025-02-13T19:05:14.604503988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:14.610453 containerd[2049]: time="2025-02-13T19:05:14.607356124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:14.793379 containerd[2049]: time="2025-02-13T19:05:14.793135241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h745b,Uid:fdec174d-516f-4617-a628-a3e20edf2411,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2dbd0b3a126377402ed94855432690c70df01efce42e55fbcf0619c9747cd3\""
Feb 13 19:05:14.810438 containerd[2049]: time="2025-02-13T19:05:14.810313097Z" level=info msg="CreateContainer within sandbox \"2e2dbd0b3a126377402ed94855432690c70df01efce42e55fbcf0619c9747cd3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:05:14.821352 containerd[2049]: time="2025-02-13T19:05:14.821294453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppdxg,Uid:af00f5c2-cce8-421e-bd0d-3b3774c78b93,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd09f394b6523d8a977d3a32020f4e11ffe8c23da055656450a960f1cb3735dc\""
Feb 13 19:05:14.855115 containerd[2049]: time="2025-02-13T19:05:14.854623913Z" level=info msg="CreateContainer within sandbox \"bd09f394b6523d8a977d3a32020f4e11ffe8c23da055656450a960f1cb3735dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:05:14.898405 containerd[2049]: time="2025-02-13T19:05:14.898006361Z" level=info msg="CreateContainer within sandbox \"2e2dbd0b3a126377402ed94855432690c70df01efce42e55fbcf0619c9747cd3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14692cc16b89748733e718a4809378d4d4509683d23b3f07f2e460213dcd0bbd\""
Feb 13 19:05:14.901807 containerd[2049]: time="2025-02-13T19:05:14.901744650Z" level=info msg="StartContainer for \"14692cc16b89748733e718a4809378d4d4509683d23b3f07f2e460213dcd0bbd\""
Feb 13 19:05:14.922332 containerd[2049]: time="2025-02-13T19:05:14.922122546Z" level=info msg="CreateContainer within sandbox \"bd09f394b6523d8a977d3a32020f4e11ffe8c23da055656450a960f1cb3735dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da03f458a49a147da358e9acfbb8ec3315f882f27e0f6e6f278cf74c72542085\""
Feb 13 19:05:14.924353 containerd[2049]: time="2025-02-13T19:05:14.924279102Z" level=info msg="StartContainer for \"da03f458a49a147da358e9acfbb8ec3315f882f27e0f6e6f278cf74c72542085\""
Feb 13 19:05:15.073069 containerd[2049]: time="2025-02-13T19:05:15.070924142Z" level=info msg="StartContainer for \"14692cc16b89748733e718a4809378d4d4509683d23b3f07f2e460213dcd0bbd\" returns successfully"
Feb 13 19:05:15.099757 containerd[2049]: time="2025-02-13T19:05:15.098521346Z" level=info msg="StartContainer for \"da03f458a49a147da358e9acfbb8ec3315f882f27e0f6e6f278cf74c72542085\" returns successfully"
Feb 13 19:05:15.513132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356959710.mount: Deactivated successfully.
Feb 13 19:05:15.609311 kubelet[3681]: I0213 19:05:15.609215 3681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h745b" podStartSLOduration=29.609192245 podStartE2EDuration="29.609192245s" podCreationTimestamp="2025-02-13 19:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:15.605056937 +0000 UTC m=+42.671813949" watchObservedRunningTime="2025-02-13 19:05:15.609192245 +0000 UTC m=+42.675949233"
Feb 13 19:05:15.612803 kubelet[3681]: I0213 19:05:15.609377 3681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ppdxg" podStartSLOduration=29.609366281 podStartE2EDuration="29.609366281s" podCreationTimestamp="2025-02-13 19:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:15.582956513 +0000 UTC m=+42.649713537" watchObservedRunningTime="2025-02-13 19:05:15.609366281 +0000 UTC m=+42.676123281"
Feb 13 19:05:16.751309 systemd[1]: Started sshd@7-172.31.18.68:22-147.75.109.163:60792.service - OpenSSH per-connection server daemon (147.75.109.163:60792).
Feb 13 19:05:16.943444 sshd[5045]: Accepted publickey for core from 147.75.109.163 port 60792 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:16.946030 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:16.954448 systemd-logind[2015]: New session 8 of user core.
Feb 13 19:05:16.964130 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:05:17.286997 sshd[5048]: Connection closed by 147.75.109.163 port 60792
Feb 13 19:05:17.287605 sshd-session[5045]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:17.296789 systemd[1]: sshd@7-172.31.18.68:22-147.75.109.163:60792.service: Deactivated successfully.
Feb 13 19:05:17.304991 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:05:17.307850 systemd-logind[2015]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:05:17.313366 systemd-logind[2015]: Removed session 8.
Feb 13 19:05:22.319158 systemd[1]: Started sshd@8-172.31.18.68:22-147.75.109.163:34366.service - OpenSSH per-connection server daemon (147.75.109.163:34366).
Feb 13 19:05:22.513621 sshd[5066]: Accepted publickey for core from 147.75.109.163 port 34366 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:22.516071 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:22.523524 systemd-logind[2015]: New session 9 of user core.
Feb 13 19:05:22.532282 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:05:22.781503 sshd[5069]: Connection closed by 147.75.109.163 port 34366
Feb 13 19:05:22.782023 sshd-session[5066]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:22.789957 systemd-logind[2015]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:05:22.791773 systemd[1]: sshd@8-172.31.18.68:22-147.75.109.163:34366.service: Deactivated successfully.
Feb 13 19:05:22.801372 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:05:22.806395 systemd-logind[2015]: Removed session 9.
Feb 13 19:05:27.813135 systemd[1]: Started sshd@9-172.31.18.68:22-147.75.109.163:34380.service - OpenSSH per-connection server daemon (147.75.109.163:34380).
Feb 13 19:05:27.999884 sshd[5082]: Accepted publickey for core from 147.75.109.163 port 34380 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:28.002495 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:28.012394 systemd-logind[2015]: New session 10 of user core.
Feb 13 19:05:28.018469 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:05:28.260774 sshd[5085]: Connection closed by 147.75.109.163 port 34380
Feb 13 19:05:28.261214 sshd-session[5082]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:28.267348 systemd[1]: sshd@9-172.31.18.68:22-147.75.109.163:34380.service: Deactivated successfully.
Feb 13 19:05:28.269340 systemd-logind[2015]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:05:28.277446 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:05:28.282002 systemd-logind[2015]: Removed session 10.
Feb 13 19:05:33.290124 systemd[1]: Started sshd@10-172.31.18.68:22-147.75.109.163:36960.service - OpenSSH per-connection server daemon (147.75.109.163:36960).
Feb 13 19:05:33.484670 sshd[5098]: Accepted publickey for core from 147.75.109.163 port 36960 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:33.487603 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:33.496123 systemd-logind[2015]: New session 11 of user core.
Feb 13 19:05:33.506236 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:05:33.754148 sshd[5101]: Connection closed by 147.75.109.163 port 36960
Feb 13 19:05:33.755420 sshd-session[5098]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:33.761876 systemd[1]: sshd@10-172.31.18.68:22-147.75.109.163:36960.service: Deactivated successfully.
Feb 13 19:05:33.768767 systemd-logind[2015]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:05:33.769867 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:05:33.772527 systemd-logind[2015]: Removed session 11.
Feb 13 19:05:33.788938 systemd[1]: Started sshd@11-172.31.18.68:22-147.75.109.163:36972.service - OpenSSH per-connection server daemon (147.75.109.163:36972).
Feb 13 19:05:33.983925 sshd[5113]: Accepted publickey for core from 147.75.109.163 port 36972 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:33.986363 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:33.994535 systemd-logind[2015]: New session 12 of user core.
Feb 13 19:05:34.003153 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:05:34.324453 sshd[5116]: Connection closed by 147.75.109.163 port 36972
Feb 13 19:05:34.323982 sshd-session[5113]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:34.345282 systemd-logind[2015]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:05:34.345701 systemd[1]: sshd@11-172.31.18.68:22-147.75.109.163:36972.service: Deactivated successfully.
Feb 13 19:05:34.361291 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:05:34.375218 systemd[1]: Started sshd@12-172.31.18.68:22-147.75.109.163:36974.service - OpenSSH per-connection server daemon (147.75.109.163:36974).
Feb 13 19:05:34.376263 systemd-logind[2015]: Removed session 12.
Feb 13 19:05:34.569150 sshd[5124]: Accepted publickey for core from 147.75.109.163 port 36974 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:34.571592 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:34.580619 systemd-logind[2015]: New session 13 of user core.
Feb 13 19:05:34.588247 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:05:34.840605 sshd[5127]: Connection closed by 147.75.109.163 port 36974
Feb 13 19:05:34.841438 sshd-session[5124]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:34.847024 systemd[1]: sshd@12-172.31.18.68:22-147.75.109.163:36974.service: Deactivated successfully.
Feb 13 19:05:34.855157 systemd-logind[2015]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:05:34.856151 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:05:34.859436 systemd-logind[2015]: Removed session 13.
Feb 13 19:05:39.873119 systemd[1]: Started sshd@13-172.31.18.68:22-147.75.109.163:45678.service - OpenSSH per-connection server daemon (147.75.109.163:45678).
Feb 13 19:05:40.053160 sshd[5138]: Accepted publickey for core from 147.75.109.163 port 45678 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:40.055571 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:40.064017 systemd-logind[2015]: New session 14 of user core.
Feb 13 19:05:40.070128 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:05:40.314675 sshd[5141]: Connection closed by 147.75.109.163 port 45678
Feb 13 19:05:40.314972 sshd-session[5138]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:40.322345 systemd-logind[2015]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:05:40.323288 systemd[1]: sshd@13-172.31.18.68:22-147.75.109.163:45678.service: Deactivated successfully.
Feb 13 19:05:40.331398 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:05:40.333472 systemd-logind[2015]: Removed session 14.
Feb 13 19:05:45.348120 systemd[1]: Started sshd@14-172.31.18.68:22-147.75.109.163:45682.service - OpenSSH per-connection server daemon (147.75.109.163:45682).
Feb 13 19:05:45.534479 sshd[5154]: Accepted publickey for core from 147.75.109.163 port 45682 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:45.537005 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:45.545006 systemd-logind[2015]: New session 15 of user core.
Feb 13 19:05:45.557450 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:05:45.811238 sshd[5157]: Connection closed by 147.75.109.163 port 45682
Feb 13 19:05:45.811848 sshd-session[5154]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:45.819885 systemd[1]: sshd@14-172.31.18.68:22-147.75.109.163:45682.service: Deactivated successfully.
Feb 13 19:05:45.829407 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:05:45.830206 systemd-logind[2015]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:05:45.834882 systemd-logind[2015]: Removed session 15.
Feb 13 19:05:50.851242 systemd[1]: Started sshd@15-172.31.18.68:22-147.75.109.163:56178.service - OpenSSH per-connection server daemon (147.75.109.163:56178).
Feb 13 19:05:51.031038 sshd[5170]: Accepted publickey for core from 147.75.109.163 port 56178 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:51.033516 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:51.041750 systemd-logind[2015]: New session 16 of user core.
Feb 13 19:05:51.047169 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:05:51.296751 sshd[5173]: Connection closed by 147.75.109.163 port 56178
Feb 13 19:05:51.297752 sshd-session[5170]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:51.310952 systemd-logind[2015]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:05:51.311500 systemd[1]: sshd@15-172.31.18.68:22-147.75.109.163:56178.service: Deactivated successfully.
Feb 13 19:05:51.317977 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:05:51.320884 systemd-logind[2015]: Removed session 16.
Feb 13 19:05:56.327150 systemd[1]: Started sshd@16-172.31.18.68:22-147.75.109.163:56188.service - OpenSSH per-connection server daemon (147.75.109.163:56188).
Feb 13 19:05:56.513118 sshd[5186]: Accepted publickey for core from 147.75.109.163 port 56188 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:56.515604 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:56.522701 systemd-logind[2015]: New session 17 of user core.
Feb 13 19:05:56.530318 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:05:56.776791 sshd[5189]: Connection closed by 147.75.109.163 port 56188
Feb 13 19:05:56.777766 sshd-session[5186]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:56.786425 systemd[1]: sshd@16-172.31.18.68:22-147.75.109.163:56188.service: Deactivated successfully.
Feb 13 19:05:56.792517 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:05:56.795113 systemd-logind[2015]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:05:56.798223 systemd-logind[2015]: Removed session 17.
Feb 13 19:05:56.812081 systemd[1]: Started sshd@17-172.31.18.68:22-147.75.109.163:56202.service - OpenSSH per-connection server daemon (147.75.109.163:56202).
Feb 13 19:05:56.993086 sshd[5200]: Accepted publickey for core from 147.75.109.163 port 56202 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:56.994265 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:57.002077 systemd-logind[2015]: New session 18 of user core.
Feb 13 19:05:57.011256 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:05:57.303972 sshd[5203]: Connection closed by 147.75.109.163 port 56202
Feb 13 19:05:57.305623 sshd-session[5200]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:57.314410 systemd-logind[2015]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:05:57.315922 systemd[1]: sshd@17-172.31.18.68:22-147.75.109.163:56202.service: Deactivated successfully.
Feb 13 19:05:57.327200 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:05:57.340107 systemd[1]: Started sshd@18-172.31.18.68:22-147.75.109.163:56214.service - OpenSSH per-connection server daemon (147.75.109.163:56214).
Feb 13 19:05:57.341711 systemd-logind[2015]: Removed session 18.
Feb 13 19:05:57.533460 sshd[5211]: Accepted publickey for core from 147.75.109.163 port 56214 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:57.536031 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:57.544786 systemd-logind[2015]: New session 19 of user core.
Feb 13 19:05:57.553270 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:06:00.303664 sshd[5214]: Connection closed by 147.75.109.163 port 56214
Feb 13 19:06:00.302997 sshd-session[5211]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:00.315353 systemd[1]: sshd@18-172.31.18.68:22-147.75.109.163:56214.service: Deactivated successfully.
Feb 13 19:06:00.327342 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:06:00.346112 systemd-logind[2015]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:06:00.355173 systemd[1]: Started sshd@19-172.31.18.68:22-147.75.109.163:52444.service - OpenSSH per-connection server daemon (147.75.109.163:52444).
Feb 13 19:06:00.359962 systemd-logind[2015]: Removed session 19.
Feb 13 19:06:00.542145 sshd[5230]: Accepted publickey for core from 147.75.109.163 port 52444 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:00.544175 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:00.552199 systemd-logind[2015]: New session 20 of user core.
Feb 13 19:06:00.560139 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:06:01.054687 sshd[5233]: Connection closed by 147.75.109.163 port 52444
Feb 13 19:06:01.054535 sshd-session[5230]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:01.061081 systemd-logind[2015]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:06:01.065272 systemd[1]: sshd@19-172.31.18.68:22-147.75.109.163:52444.service: Deactivated successfully.
Feb 13 19:06:01.072562 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:06:01.075840 systemd-logind[2015]: Removed session 20.
Feb 13 19:06:01.086168 systemd[1]: Started sshd@20-172.31.18.68:22-147.75.109.163:52448.service - OpenSSH per-connection server daemon (147.75.109.163:52448).
Feb 13 19:06:01.285905 sshd[5242]: Accepted publickey for core from 147.75.109.163 port 52448 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:01.288521 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:01.296945 systemd-logind[2015]: New session 21 of user core.
Feb 13 19:06:01.306182 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:06:01.576819 sshd[5245]: Connection closed by 147.75.109.163 port 52448
Feb 13 19:06:01.577281 sshd-session[5242]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:01.586273 systemd[1]: sshd@20-172.31.18.68:22-147.75.109.163:52448.service: Deactivated successfully.
Feb 13 19:06:01.594691 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:06:01.598274 systemd-logind[2015]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:06:01.601138 systemd-logind[2015]: Removed session 21.
Feb 13 19:06:06.613089 systemd[1]: Started sshd@21-172.31.18.68:22-147.75.109.163:52454.service - OpenSSH per-connection server daemon (147.75.109.163:52454).
Feb 13 19:06:06.804427 sshd[5256]: Accepted publickey for core from 147.75.109.163 port 52454 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:06.807048 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:06.814973 systemd-logind[2015]: New session 22 of user core.
Feb 13 19:06:06.825125 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:06:07.072533 sshd[5259]: Connection closed by 147.75.109.163 port 52454
Feb 13 19:06:07.073472 sshd-session[5256]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:07.081217 systemd[1]: sshd@21-172.31.18.68:22-147.75.109.163:52454.service: Deactivated successfully.
Feb 13 19:06:07.088202 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:06:07.090327 systemd-logind[2015]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:06:07.092567 systemd-logind[2015]: Removed session 22.
Feb 13 19:06:12.104161 systemd[1]: Started sshd@22-172.31.18.68:22-147.75.109.163:43040.service - OpenSSH per-connection server daemon (147.75.109.163:43040).
Feb 13 19:06:12.291767 sshd[5273]: Accepted publickey for core from 147.75.109.163 port 43040 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:12.294278 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:12.303444 systemd-logind[2015]: New session 23 of user core.
Feb 13 19:06:12.310334 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:06:12.558775 sshd[5276]: Connection closed by 147.75.109.163 port 43040
Feb 13 19:06:12.559705 sshd-session[5273]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:12.565469 systemd[1]: sshd@22-172.31.18.68:22-147.75.109.163:43040.service: Deactivated successfully.
Feb 13 19:06:12.574084 systemd-logind[2015]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:06:12.575489 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:06:12.578598 systemd-logind[2015]: Removed session 23.
Feb 13 19:06:17.589196 systemd[1]: Started sshd@23-172.31.18.68:22-147.75.109.163:43056.service - OpenSSH per-connection server daemon (147.75.109.163:43056).
Feb 13 19:06:17.776361 sshd[5289]: Accepted publickey for core from 147.75.109.163 port 43056 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:17.778866 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:17.788091 systemd-logind[2015]: New session 24 of user core.
Feb 13 19:06:17.796295 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:06:18.039646 sshd[5292]: Connection closed by 147.75.109.163 port 43056
Feb 13 19:06:18.040153 sshd-session[5289]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:18.046510 systemd[1]: sshd@23-172.31.18.68:22-147.75.109.163:43056.service: Deactivated successfully.
Feb 13 19:06:18.052991 systemd-logind[2015]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:06:18.053443 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:06:18.057612 systemd-logind[2015]: Removed session 24.
Feb 13 19:06:23.075073 systemd[1]: Started sshd@24-172.31.18.68:22-147.75.109.163:46290.service - OpenSSH per-connection server daemon (147.75.109.163:46290).
Feb 13 19:06:23.254492 sshd[5302]: Accepted publickey for core from 147.75.109.163 port 46290 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:23.257150 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:23.265852 systemd-logind[2015]: New session 25 of user core.
Feb 13 19:06:23.274671 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:06:23.522590 sshd[5305]: Connection closed by 147.75.109.163 port 46290
Feb 13 19:06:23.523742 sshd-session[5302]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:23.531126 systemd[1]: sshd@24-172.31.18.68:22-147.75.109.163:46290.service: Deactivated successfully.
Feb 13 19:06:23.536829 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:06:23.538805 systemd-logind[2015]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:06:23.540556 systemd-logind[2015]: Removed session 25.
Feb 13 19:06:23.555131 systemd[1]: Started sshd@25-172.31.18.68:22-147.75.109.163:46294.service - OpenSSH per-connection server daemon (147.75.109.163:46294).
Feb 13 19:06:23.741765 sshd[5315]: Accepted publickey for core from 147.75.109.163 port 46294 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:23.745754 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:23.753491 systemd-logind[2015]: New session 26 of user core.
Feb 13 19:06:23.764280 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:06:26.805188 containerd[2049]: time="2025-02-13T19:06:26.802958583Z" level=info msg="StopContainer for \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\" with timeout 30 (s)"
Feb 13 19:06:26.808365 containerd[2049]: time="2025-02-13T19:06:26.808156959Z" level=info msg="Stop container \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\" with signal terminated"
Feb 13 19:06:26.858127 containerd[2049]: time="2025-02-13T19:06:26.858067155Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:06:26.873435 containerd[2049]: time="2025-02-13T19:06:26.873279279Z" level=info msg="StopContainer for \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\" with timeout 2 (s)"
Feb 13 19:06:26.876594 containerd[2049]: time="2025-02-13T19:06:26.875829519Z" level=info msg="Stop container \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\" with signal terminated"
Feb 13 19:06:26.899905 systemd-networkd[1601]: lxc_health: Link DOWN
Feb 13 19:06:26.902168 systemd-networkd[1601]: lxc_health: Lost carrier
Feb 13 19:06:26.912416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b-rootfs.mount: Deactivated successfully.
Feb 13 19:06:26.945570 containerd[2049]: time="2025-02-13T19:06:26.945306243Z" level=info msg="shim disconnected" id=0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b namespace=k8s.io
Feb 13 19:06:26.946001 containerd[2049]: time="2025-02-13T19:06:26.945699855Z" level=warning msg="cleaning up after shim disconnected" id=0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b namespace=k8s.io
Feb 13 19:06:26.946001 containerd[2049]: time="2025-02-13T19:06:26.945729651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:26.987975 containerd[2049]: time="2025-02-13T19:06:26.984925396Z" level=info msg="StopContainer for \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\" returns successfully"
Feb 13 19:06:26.991305 containerd[2049]: time="2025-02-13T19:06:26.990012448Z" level=info msg="StopPodSandbox for \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\""
Feb 13 19:06:26.991784 containerd[2049]: time="2025-02-13T19:06:26.991446040Z" level=info msg="Container to stop \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:06:26.992122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f-rootfs.mount: Deactivated successfully.
Feb 13 19:06:26.998008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd-shm.mount: Deactivated successfully.
Feb 13 19:06:27.005964 containerd[2049]: time="2025-02-13T19:06:27.005696376Z" level=info msg="shim disconnected" id=91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f namespace=k8s.io
Feb 13 19:06:27.005964 containerd[2049]: time="2025-02-13T19:06:27.005884560Z" level=warning msg="cleaning up after shim disconnected" id=91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f namespace=k8s.io
Feb 13 19:06:27.005964 containerd[2049]: time="2025-02-13T19:06:27.005919192Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:27.039750 containerd[2049]: time="2025-02-13T19:06:27.039183144Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:06:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:06:27.045571 containerd[2049]: time="2025-02-13T19:06:27.045519072Z" level=info msg="StopContainer for \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\" returns successfully"
Feb 13 19:06:27.046432 containerd[2049]: time="2025-02-13T19:06:27.046389180Z" level=info msg="StopPodSandbox for \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\""
Feb 13 19:06:27.046943 containerd[2049]: time="2025-02-13T19:06:27.046604184Z" level=info msg="Container to stop \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:06:27.046943 containerd[2049]: time="2025-02-13T19:06:27.046712472Z" level=info msg="Container to stop \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:06:27.046943 containerd[2049]: time="2025-02-13T19:06:27.046737480Z" level=info msg="Container to stop \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:06:27.046943 containerd[2049]: time="2025-02-13T19:06:27.046758372Z" level=info msg="Container to stop \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:06:27.046943 containerd[2049]: time="2025-02-13T19:06:27.046779972Z" level=info msg="Container to stop \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:06:27.086031 containerd[2049]: time="2025-02-13T19:06:27.084941412Z" level=info msg="shim disconnected" id=10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd namespace=k8s.io
Feb 13 19:06:27.086031 containerd[2049]: time="2025-02-13T19:06:27.085017852Z" level=warning msg="cleaning up after shim disconnected" id=10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd namespace=k8s.io
Feb 13 19:06:27.086031 containerd[2049]: time="2025-02-13T19:06:27.085036692Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:27.121227 containerd[2049]: time="2025-02-13T19:06:27.121174920Z" level=info msg="TearDown network for sandbox \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\" successfully"
Feb 13 19:06:27.121624 containerd[2049]: time="2025-02-13T19:06:27.121546548Z" level=info msg="StopPodSandbox for \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\" returns successfully"
Feb 13 19:06:27.123338 containerd[2049]: time="2025-02-13T19:06:27.123252180Z" level=info msg="shim disconnected" id=d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f namespace=k8s.io
Feb 13 19:06:27.123338 containerd[2049]: time="2025-02-13T19:06:27.123327720Z" level=warning msg="cleaning up after shim disconnected" id=d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f namespace=k8s.io
Feb 13 19:06:27.123531 containerd[2049]: time="2025-02-13T19:06:27.123348816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:27.156241 containerd[2049]: time="2025-02-13T19:06:27.156191064Z" level=info msg="TearDown network for sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" successfully"
Feb 13 19:06:27.156432 containerd[2049]: time="2025-02-13T19:06:27.156405276Z" level=info msg="StopPodSandbox for \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" returns successfully"
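The containerd records above (and throughout this capture) carry a logfmt-style key=value payload inside the journal entry. A minimal Go sketch, not part of the log, for pulling out fields such as level, msg and the shim id; the field layout is inferred from the entries themselves rather than from any documented containerd format:

    package main

    import (
        "fmt"
        "regexp"
    )

    // key=value pairs, where the value is either a quoted string (with \" escapes)
    // or a bare token, as in: level=info msg="shim disconnected" id=... namespace=k8s.io
    var field = regexp.MustCompile(`(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))`)

    func parse(entry string) map[string]string {
        out := map[string]string{}
        for _, m := range field.FindAllStringSubmatch(entry, -1) {
            v := m[2]
            if v == "" {
                v = m[3] // bare (unquoted) value
            }
            out[m[1]] = v
        }
        return out
    }

    func main() {
        e := `time="2025-02-13T19:06:26.945306243Z" level=info msg="shim disconnected" id=0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b namespace=k8s.io`
        f := parse(e)
        fmt.Println(f["level"], f["id"], "-", f["msg"])
    }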
Feb 13 19:06:27.199829 kubelet[3681]: I0213 19:06:27.199786 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-cgroup\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.200567 kubelet[3681]: I0213 19:06:27.200498 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-xtables-lock\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.200761 kubelet[3681]: I0213 19:06:27.200737 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cni-path\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.200899 kubelet[3681]: I0213 19:06:27.200877 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-lib-modules\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.201032 kubelet[3681]: I0213 19:06:27.201010 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e905df3a-1cf7-4a84-beef-467b121db14b-hubble-tls\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.201160 kubelet[3681]: I0213 19:06:27.201137 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjknf\" (UniqueName: \"kubernetes.io/projected/1afb24ac-2249-4015-9af4-b2b2b7f7a228-kube-api-access-hjknf\") pod \"1afb24ac-2249-4015-9af4-b2b2b7f7a228\" (UID: \"1afb24ac-2249-4015-9af4-b2b2b7f7a228\") "
Feb 13 19:06:27.201303 kubelet[3681]: I0213 19:06:27.201278 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-config-path\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.201570 kubelet[3681]: I0213 19:06:27.201542 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-host-proc-sys-kernel\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.202466 kubelet[3681]: I0213 19:06:27.202432 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1afb24ac-2249-4015-9af4-b2b2b7f7a228-cilium-config-path\") pod \"1afb24ac-2249-4015-9af4-b2b2b7f7a228\" (UID: \"1afb24ac-2249-4015-9af4-b2b2b7f7a228\") "
Feb 13 19:06:27.202875 kubelet[3681]: I0213 19:06:27.202850 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8svtj\" (UniqueName: \"kubernetes.io/projected/e905df3a-1cf7-4a84-beef-467b121db14b-kube-api-access-8svtj\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.203618 kubelet[3681]: I0213 19:06:27.203017 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e905df3a-1cf7-4a84-beef-467b121db14b-clustermesh-secrets\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.203618 kubelet[3681]: I0213 19:06:27.203065 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-hostproc\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.203618 kubelet[3681]: I0213 19:06:27.203100 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-run\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.203618 kubelet[3681]: I0213 19:06:27.203134 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-etc-cni-netd\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.203618 kubelet[3681]: I0213 19:06:27.203169 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-bpf-maps\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.203618 kubelet[3681]: I0213 19:06:27.203204 3681 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-host-proc-sys-net\") pod \"e905df3a-1cf7-4a84-beef-467b121db14b\" (UID: \"e905df3a-1cf7-4a84-beef-467b121db14b\") "
Feb 13 19:06:27.204227 kubelet[3681]: I0213 19:06:27.199920 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.204289 kubelet[3681]: I0213 19:06:27.201748 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.204349 kubelet[3681]: I0213 19:06:27.203280 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.204415 kubelet[3681]: I0213 19:06:27.204336 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.204469 kubelet[3681]: I0213 19:06:27.204409 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cni-path" (OuterVolumeSpecName: "cni-path") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.204526 kubelet[3681]: I0213 19:06:27.204473 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.213663 kubelet[3681]: I0213 19:06:27.208160 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-hostproc" (OuterVolumeSpecName: "hostproc") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.213663 kubelet[3681]: I0213 19:06:27.212601 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.213663 kubelet[3681]: I0213 19:06:27.212668 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.213663 kubelet[3681]: I0213 19:06:27.212708 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:06:27.213663 kubelet[3681]: I0213 19:06:27.212860 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e905df3a-1cf7-4a84-beef-467b121db14b-kube-api-access-8svtj" (OuterVolumeSpecName: "kube-api-access-8svtj") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "kube-api-access-8svtj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:06:27.220852 kubelet[3681]: I0213 19:06:27.220787 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:06:27.221004 kubelet[3681]: I0213 19:06:27.220784 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afb24ac-2249-4015-9af4-b2b2b7f7a228-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1afb24ac-2249-4015-9af4-b2b2b7f7a228" (UID: "1afb24ac-2249-4015-9af4-b2b2b7f7a228"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:06:27.221004 kubelet[3681]: I0213 19:06:27.220985 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e905df3a-1cf7-4a84-beef-467b121db14b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 19:06:27.222507 kubelet[3681]: I0213 19:06:27.222414 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e905df3a-1cf7-4a84-beef-467b121db14b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e905df3a-1cf7-4a84-beef-467b121db14b" (UID: "e905df3a-1cf7-4a84-beef-467b121db14b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:06:27.223950 kubelet[3681]: I0213 19:06:27.223853 3681 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afb24ac-2249-4015-9af4-b2b2b7f7a228-kube-api-access-hjknf" (OuterVolumeSpecName: "kube-api-access-hjknf") pod "1afb24ac-2249-4015-9af4-b2b2b7f7a228" (UID: "1afb24ac-2249-4015-9af4-b2b2b7f7a228"). InnerVolumeSpecName "kube-api-access-hjknf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:06:27.304654 kubelet[3681]: I0213 19:06:27.303837 3681 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-hostproc\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.304654 kubelet[3681]: I0213 19:06:27.303890 3681 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-etc-cni-netd\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.304654 kubelet[3681]: I0213 19:06:27.303913 3681 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-bpf-maps\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.304654 kubelet[3681]: I0213 19:06:27.303933 3681 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-host-proc-sys-net\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.304654 kubelet[3681]: I0213 19:06:27.303961 3681 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-run\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.304654 kubelet[3681]: I0213 19:06:27.303981 3681 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cni-path\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.304654 kubelet[3681]: I0213 19:06:27.304004 3681 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-lib-modules\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.304654 kubelet[3681]: I0213 19:06:27.304025 3681 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e905df3a-1cf7-4a84-beef-467b121db14b-hubble-tls\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.305160 kubelet[3681]: I0213 19:06:27.304044 3681 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-cgroup\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.305160 kubelet[3681]: I0213 19:06:27.304062 3681 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-xtables-lock\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.305160 kubelet[3681]: I0213 19:06:27.304081 3681 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hjknf\" (UniqueName: \"kubernetes.io/projected/1afb24ac-2249-4015-9af4-b2b2b7f7a228-kube-api-access-hjknf\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.305160 kubelet[3681]: I0213 19:06:27.304101 3681 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e905df3a-1cf7-4a84-beef-467b121db14b-cilium-config-path\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.305160 kubelet[3681]: I0213 19:06:27.304125 3681 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e905df3a-1cf7-4a84-beef-467b121db14b-host-proc-sys-kernel\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.305160 kubelet[3681]: I0213 19:06:27.304145 3681 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8svtj\" (UniqueName: \"kubernetes.io/projected/e905df3a-1cf7-4a84-beef-467b121db14b-kube-api-access-8svtj\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.305160 kubelet[3681]: I0213 19:06:27.304165 3681 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e905df3a-1cf7-4a84-beef-467b121db14b-clustermesh-secrets\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.305160 kubelet[3681]: I0213 19:06:27.304185 3681 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1afb24ac-2249-4015-9af4-b2b2b7f7a228-cilium-config-path\") on node \"ip-172-31-18-68\" DevicePath \"\""
Feb 13 19:06:27.762315 kubelet[3681]: I0213 19:06:27.762270 3681 scope.go:117] "RemoveContainer" containerID="91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f"
Feb 13 19:06:27.767099 containerd[2049]: time="2025-02-13T19:06:27.766976091Z" level=info msg="RemoveContainer for \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\""
Feb 13 19:06:27.781408 containerd[2049]: time="2025-02-13T19:06:27.781017496Z" level=info msg="RemoveContainer for \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\" returns successfully"
Feb 13 19:06:27.782167 kubelet[3681]: I0213 19:06:27.782128 3681 scope.go:117] "RemoveContainer" containerID="fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6"
Feb 13 19:06:27.786453 containerd[2049]: time="2025-02-13T19:06:27.786398164Z" level=info msg="RemoveContainer for \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\""
Feb 13 19:06:27.793796 containerd[2049]: time="2025-02-13T19:06:27.793737640Z" level=info msg="RemoveContainer for \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\" returns successfully"
Feb 13 19:06:27.794265 kubelet[3681]: I0213 19:06:27.794116 3681 scope.go:117] "RemoveContainer" containerID="4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f"
Feb 13 19:06:27.798687 containerd[2049]: time="2025-02-13T19:06:27.798197164Z" level=info msg="RemoveContainer for \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\""
Feb 13 19:06:27.805439 containerd[2049]: time="2025-02-13T19:06:27.805251100Z" level=info msg="RemoveContainer for \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\" returns successfully"
Feb 13 19:06:27.809098 kubelet[3681]: I0213 19:06:27.809023 3681 scope.go:117] "RemoveContainer" containerID="09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c"
Feb 13 19:06:27.815077 containerd[2049]: time="2025-02-13T19:06:27.814916056Z" level=info msg="RemoveContainer for \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\""
Feb 13 19:06:27.818272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd-rootfs.mount: Deactivated successfully.
Feb 13 19:06:27.818592 systemd[1]: var-lib-kubelet-pods-1afb24ac\x2d2249\x2d4015\x2d9af4\x2db2b2b7f7a228-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhjknf.mount: Deactivated successfully.
Feb 13 19:06:27.821054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f-rootfs.mount: Deactivated successfully.
Feb 13 19:06:27.821320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f-shm.mount: Deactivated successfully.
Feb 13 19:06:27.821567 systemd[1]: var-lib-kubelet-pods-e905df3a\x2d1cf7\x2d4a84\x2dbeef\x2d467b121db14b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8svtj.mount: Deactivated successfully.
Feb 13 19:06:27.823165 systemd[1]: var-lib-kubelet-pods-e905df3a\x2d1cf7\x2d4a84\x2dbeef\x2d467b121db14b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:06:27.824077 systemd[1]: var-lib-kubelet-pods-e905df3a\x2d1cf7\x2d4a84\x2dbeef\x2d467b121db14b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:06:27.824503 containerd[2049]: time="2025-02-13T19:06:27.824425540Z" level=info msg="RemoveContainer for \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\" returns successfully"
Feb 13 19:06:27.827087 kubelet[3681]: I0213 19:06:27.825733 3681 scope.go:117] "RemoveContainer" containerID="3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e"
Feb 13 19:06:27.834326 containerd[2049]: time="2025-02-13T19:06:27.833737588Z" level=info msg="RemoveContainer for \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\""
Feb 13 19:06:27.842026 containerd[2049]: time="2025-02-13T19:06:27.841967440Z" level=info msg="RemoveContainer for \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\" returns successfully"
Feb 13 19:06:27.842851 kubelet[3681]: I0213 19:06:27.842799 3681 scope.go:117] "RemoveContainer" containerID="91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f"
Feb 13 19:06:27.843542 containerd[2049]: time="2025-02-13T19:06:27.843450868Z" level=error msg="ContainerStatus for \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\": not found"
Feb 13 19:06:27.843880 kubelet[3681]: E0213 19:06:27.843817 3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\": not found" containerID="91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f"
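The var-lib-kubelet-pods-... mount units deactivated above show systemd's unit-name escaping: "/" becomes "-", while "-" and "~" inside path components come out as \x2d and \x7e. A simplified Go reconstruction of that escaping (roughly what systemd-escape --path --suffix=mount produces; the keep-set below is an approximation of systemd's exact rules):

    package main

    import (
        "fmt"
        "strings"
    )

    func escapePathUnit(path string) string {
        p := strings.Trim(path, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-') // path separators become dashes
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_',
                c == '.' && i > 0: // a leading dot would still be escaped
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // everything else, incl. '-' and '~'
            }
        }
        return b.String() + ".mount"
    }

    func main() {
        // Reproduces the hjknf unit name logged above.
        fmt.Println(escapePathUnit(
            "/var/lib/kubelet/pods/1afb24ac-2249-4015-9af4-b2b2b7f7a228/volumes/kubernetes.io~projected/kube-api-access-hjknf"))
    }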
containerID={"Type":"containerd","ID":"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f"} err="failed to get container status \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\": rpc error: code = NotFound desc = an error occurred when try to find container \"91ca07ab140909acea270d9fafcab4b8619da414586c0d77ede80c97c752914f\": not found" Feb 13 19:06:27.844065 kubelet[3681]: I0213 19:06:27.844020 3681 scope.go:117] "RemoveContainer" containerID="fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6" Feb 13 19:06:27.844484 containerd[2049]: time="2025-02-13T19:06:27.844395268Z" level=error msg="ContainerStatus for \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\": not found" Feb 13 19:06:27.844954 kubelet[3681]: E0213 19:06:27.844896 3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\": not found" containerID="fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6" Feb 13 19:06:27.845137 kubelet[3681]: I0213 19:06:27.844958 3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6"} err="failed to get container status \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"fde13367d65884eaa6d70704a24a95f1708d31d3cae4997ac129eb100d7f44c6\": not found" Feb 13 19:06:27.845137 kubelet[3681]: I0213 19:06:27.845007 3681 scope.go:117] "RemoveContainer" containerID="4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f" Feb 13 19:06:27.845948 containerd[2049]: time="2025-02-13T19:06:27.845853148Z" level=error msg="ContainerStatus for \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\": not found" Feb 13 19:06:27.846563 kubelet[3681]: E0213 19:06:27.846490 3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\": not found" containerID="4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f" Feb 13 19:06:27.847159 kubelet[3681]: I0213 19:06:27.846554 3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f"} err="failed to get container status \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a1dc73adc9508d06feda1bda31ab9e3b26f247d61580cca50e00128ff3e245f\": not found" Feb 13 19:06:27.847159 kubelet[3681]: I0213 19:06:27.846593 3681 scope.go:117] "RemoveContainer" containerID="09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c" Feb 13 19:06:27.847331 containerd[2049]: time="2025-02-13T19:06:27.847009144Z" level=error msg="ContainerStatus for 
\"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\": not found" Feb 13 19:06:27.847797 kubelet[3681]: E0213 19:06:27.847702 3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\": not found" containerID="09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c" Feb 13 19:06:27.847797 kubelet[3681]: I0213 19:06:27.847760 3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c"} err="failed to get container status \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\": rpc error: code = NotFound desc = an error occurred when try to find container \"09b36d51f06070f74d10aa10b1b38f4057a1d16b80c82a5412a2af3bd3f3864c\": not found" Feb 13 19:06:27.848182 kubelet[3681]: I0213 19:06:27.847802 3681 scope.go:117] "RemoveContainer" containerID="3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e" Feb 13 19:06:27.848855 containerd[2049]: time="2025-02-13T19:06:27.848529724Z" level=error msg="ContainerStatus for \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\": not found" Feb 13 19:06:27.848954 kubelet[3681]: E0213 19:06:27.848878 3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\": not found" containerID="3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e" Feb 13 19:06:27.848954 kubelet[3681]: I0213 19:06:27.848927 3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e"} err="failed to get container status \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b48f0b19d53207255fb41dbc6f4245234181d6e58424199ac1a77c66f99073e\": not found" Feb 13 19:06:27.849084 kubelet[3681]: I0213 19:06:27.848965 3681 scope.go:117] "RemoveContainer" containerID="0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b" Feb 13 19:06:27.851960 containerd[2049]: time="2025-02-13T19:06:27.851808340Z" level=info msg="RemoveContainer for \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\"" Feb 13 19:06:27.859249 containerd[2049]: time="2025-02-13T19:06:27.859196356Z" level=info msg="RemoveContainer for \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\" returns successfully" Feb 13 19:06:27.859974 kubelet[3681]: I0213 19:06:27.859916 3681 scope.go:117] "RemoveContainer" containerID="0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b" Feb 13 19:06:27.860600 containerd[2049]: time="2025-02-13T19:06:27.860453608Z" level=error msg="ContainerStatus for \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\": not found" Feb 13 19:06:27.861046 kubelet[3681]: E0213 19:06:27.860996 3681 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\": not found" containerID="0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b" Feb 13 19:06:27.861188 kubelet[3681]: I0213 19:06:27.861053 3681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b"} err="failed to get container status \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d33cbdebfa0bda0578115dbd197b713bc2bf87f54ca2ca240eafd171cc0647b\": not found" Feb 13 19:06:28.481731 kubelet[3681]: E0213 19:06:28.481582 3681 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:06:28.733264 sshd[5318]: Connection closed by 147.75.109.163 port 46294 Feb 13 19:06:28.734343 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:28.742225 systemd[1]: sshd@25-172.31.18.68:22-147.75.109.163:46294.service: Deactivated successfully. Feb 13 19:06:28.748802 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:06:28.750307 systemd-logind[2015]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:06:28.752231 systemd-logind[2015]: Removed session 26. Feb 13 19:06:28.764122 systemd[1]: Started sshd@26-172.31.18.68:22-147.75.109.163:46296.service - OpenSSH per-connection server daemon (147.75.109.163:46296). Feb 13 19:06:28.954851 sshd[5482]: Accepted publickey for core from 147.75.109.163 port 46296 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:06:28.957369 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:06:28.964718 systemd-logind[2015]: New session 27 of user core. Feb 13 19:06:28.969368 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:06:29.210722 kubelet[3681]: I0213 19:06:29.208957 3681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afb24ac-2249-4015-9af4-b2b2b7f7a228" path="/var/lib/kubelet/pods/1afb24ac-2249-4015-9af4-b2b2b7f7a228/volumes" Feb 13 19:06:29.210722 kubelet[3681]: I0213 19:06:29.210034 3681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e905df3a-1cf7-4a84-beef-467b121db14b" path="/var/lib/kubelet/pods/e905df3a-1cf7-4a84-beef-467b121db14b/volumes" Feb 13 19:06:29.354321 ntpd[1997]: Deleting interface #10 lxc_health, fe80::4bf:98ff:fe59:ac10%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Feb 13 19:06:29.354880 ntpd[1997]: 13 Feb 19:06:29 ntpd[1997]: Deleting interface #10 lxc_health, fe80::4bf:98ff:fe59:ac10%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Feb 13 19:06:30.732673 sshd[5485]: Connection closed by 147.75.109.163 port 46296 Feb 13 19:06:30.731094 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Feb 13 19:06:30.744947 systemd[1]: sshd@26-172.31.18.68:22-147.75.109.163:46296.service: Deactivated successfully. 
Feb 13 19:06:28.481731 kubelet[3681]: E0213 19:06:28.481582 3681 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:06:28.733264 sshd[5318]: Connection closed by 147.75.109.163 port 46294
Feb 13 19:06:28.734343 sshd-session[5315]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:28.742225 systemd[1]: sshd@25-172.31.18.68:22-147.75.109.163:46294.service: Deactivated successfully.
Feb 13 19:06:28.748802 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:06:28.750307 systemd-logind[2015]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:06:28.752231 systemd-logind[2015]: Removed session 26.
Feb 13 19:06:28.764122 systemd[1]: Started sshd@26-172.31.18.68:22-147.75.109.163:46296.service - OpenSSH per-connection server daemon (147.75.109.163:46296).
Feb 13 19:06:28.954851 sshd[5482]: Accepted publickey for core from 147.75.109.163 port 46296 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:28.957369 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:28.964718 systemd-logind[2015]: New session 27 of user core.
Feb 13 19:06:28.969368 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:06:29.210722 kubelet[3681]: I0213 19:06:29.208957 3681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afb24ac-2249-4015-9af4-b2b2b7f7a228" path="/var/lib/kubelet/pods/1afb24ac-2249-4015-9af4-b2b2b7f7a228/volumes"
Feb 13 19:06:29.210722 kubelet[3681]: I0213 19:06:29.210034 3681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e905df3a-1cf7-4a84-beef-467b121db14b" path="/var/lib/kubelet/pods/e905df3a-1cf7-4a84-beef-467b121db14b/volumes"
Feb 13 19:06:29.354321 ntpd[1997]: Deleting interface #10 lxc_health, fe80::4bf:98ff:fe59:ac10%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Feb 13 19:06:29.354880 ntpd[1997]: 13 Feb 19:06:29 ntpd[1997]: Deleting interface #10 lxc_health, fe80::4bf:98ff:fe59:ac10%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Feb 13 19:06:30.732673 sshd[5485]: Connection closed by 147.75.109.163 port 46296
Feb 13 19:06:30.731094 sshd-session[5482]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:30.744947 systemd[1]: sshd@26-172.31.18.68:22-147.75.109.163:46296.service: Deactivated successfully.
Feb 13 19:06:30.754047 kubelet[3681]: I0213 19:06:30.750784 3681 topology_manager.go:215] "Topology Admit Handler" podUID="8259fec2-4914-4b35-8ec4-c05e2e72de69" podNamespace="kube-system" podName="cilium-242c7"
Feb 13 19:06:30.754047 kubelet[3681]: E0213 19:06:30.750871 3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1afb24ac-2249-4015-9af4-b2b2b7f7a228" containerName="cilium-operator"
Feb 13 19:06:30.754047 kubelet[3681]: E0213 19:06:30.750891 3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e905df3a-1cf7-4a84-beef-467b121db14b" containerName="cilium-agent"
Feb 13 19:06:30.754047 kubelet[3681]: E0213 19:06:30.750907 3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e905df3a-1cf7-4a84-beef-467b121db14b" containerName="mount-cgroup"
Feb 13 19:06:30.754047 kubelet[3681]: E0213 19:06:30.750923 3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e905df3a-1cf7-4a84-beef-467b121db14b" containerName="mount-bpf-fs"
Feb 13 19:06:30.754047 kubelet[3681]: E0213 19:06:30.750937 3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e905df3a-1cf7-4a84-beef-467b121db14b" containerName="clean-cilium-state"
Feb 13 19:06:30.754047 kubelet[3681]: E0213 19:06:30.750954 3681 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e905df3a-1cf7-4a84-beef-467b121db14b" containerName="apply-sysctl-overwrites"
Feb 13 19:06:30.754047 kubelet[3681]: I0213 19:06:30.750997 3681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e905df3a-1cf7-4a84-beef-467b121db14b" containerName="cilium-agent"
Feb 13 19:06:30.754047 kubelet[3681]: I0213 19:06:30.751013 3681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afb24ac-2249-4015-9af4-b2b2b7f7a228" containerName="cilium-operator"
Feb 13 19:06:30.756479 systemd-logind[2015]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:06:30.771266 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:06:30.788153 systemd[1]: Started sshd@27-172.31.18.68:22-147.75.109.163:48836.service - OpenSSH per-connection server daemon (147.75.109.163:48836).
Feb 13 19:06:30.792843 systemd-logind[2015]: Removed session 27.
Feb 13 19:06:30.830273 kubelet[3681]: I0213 19:06:30.830177 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-cilium-cgroup\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.830878 kubelet[3681]: I0213 19:06:30.830816 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-bpf-maps\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.832894 kubelet[3681]: I0213 19:06:30.832741 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-etc-cni-netd\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.832894 kubelet[3681]: I0213 19:06:30.832812 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8259fec2-4914-4b35-8ec4-c05e2e72de69-cilium-ipsec-secrets\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.832894 kubelet[3681]: I0213 19:06:30.832857 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-cni-path\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.832894 kubelet[3681]: I0213 19:06:30.832894 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8259fec2-4914-4b35-8ec4-c05e2e72de69-clustermesh-secrets\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.833938 kubelet[3681]: I0213 19:06:30.832967 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-hostproc\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.833938 kubelet[3681]: I0213 19:06:30.833014 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8259fec2-4914-4b35-8ec4-c05e2e72de69-cilium-config-path\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.833938 kubelet[3681]: I0213 19:06:30.833050 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-host-proc-sys-net\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.833938 kubelet[3681]: I0213 19:06:30.833103 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-cilium-run\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.833938 kubelet[3681]: I0213 19:06:30.833152 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-xtables-lock\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.833938 kubelet[3681]: I0213 19:06:30.833189 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxwh\" (UniqueName: \"kubernetes.io/projected/8259fec2-4914-4b35-8ec4-c05e2e72de69-kube-api-access-mnxwh\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.834240 kubelet[3681]: I0213 19:06:30.833282 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-lib-modules\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.834240 kubelet[3681]: I0213 19:06:30.833341 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8259fec2-4914-4b35-8ec4-c05e2e72de69-host-proc-sys-kernel\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:30.834240 kubelet[3681]: I0213 19:06:30.833384 3681 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8259fec2-4914-4b35-8ec4-c05e2e72de69-hubble-tls\") pod \"cilium-242c7\" (UID: \"8259fec2-4914-4b35-8ec4-c05e2e72de69\") " pod="kube-system/cilium-242c7"
Feb 13 19:06:31.089301 sshd[5497]: Accepted publickey for core from 147.75.109.163 port 48836 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:31.091882 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:31.100522 systemd-logind[2015]: New session 28 of user core.
Feb 13 19:06:31.103205 containerd[2049]: time="2025-02-13T19:06:31.103129228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-242c7,Uid:8259fec2-4914-4b35-8ec4-c05e2e72de69,Namespace:kube-system,Attempt:0,}"
Feb 13 19:06:31.106976 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:06:31.156789 containerd[2049]: time="2025-02-13T19:06:31.156278956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:06:31.156789 containerd[2049]: time="2025-02-13T19:06:31.156387604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:06:31.156789 containerd[2049]: time="2025-02-13T19:06:31.156429892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:06:31.157938 containerd[2049]: time="2025-02-13T19:06:31.157721140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:06:31.230390 containerd[2049]: time="2025-02-13T19:06:31.230332037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-242c7,Uid:8259fec2-4914-4b35-8ec4-c05e2e72de69,Namespace:kube-system,Attempt:0,} returns sandbox id \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\""
Feb 13 19:06:31.236136 containerd[2049]: time="2025-02-13T19:06:31.236042261Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:06:31.236927 sshd[5504]: Connection closed by 147.75.109.163 port 48836
Feb 13 19:06:31.237805 sshd-session[5497]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:31.249066 systemd[1]: sshd@27-172.31.18.68:22-147.75.109.163:48836.service: Deactivated successfully.
Feb 13 19:06:31.249425 systemd-logind[2015]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:06:31.261299 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:06:31.267716 systemd-logind[2015]: Removed session 28.
Feb 13 19:06:31.272844 containerd[2049]: time="2025-02-13T19:06:31.272309633Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"201a17d11714a0a18c9690ac9c080327d893f4cf6b6c579184c857bbfcdd7705\""
Feb 13 19:06:31.277722 containerd[2049]: time="2025-02-13T19:06:31.275924873Z" level=info msg="StartContainer for \"201a17d11714a0a18c9690ac9c080327d893f4cf6b6c579184c857bbfcdd7705\""
Feb 13 19:06:31.278316 systemd[1]: Started sshd@28-172.31.18.68:22-147.75.109.163:48848.service - OpenSSH per-connection server daemon (147.75.109.163:48848).
Feb 13 19:06:31.385329 containerd[2049]: time="2025-02-13T19:06:31.382327829Z" level=info msg="StartContainer for \"201a17d11714a0a18c9690ac9c080327d893f4cf6b6c579184c857bbfcdd7705\" returns successfully"
Feb 13 19:06:31.451789 containerd[2049]: time="2025-02-13T19:06:31.451526046Z" level=info msg="shim disconnected" id=201a17d11714a0a18c9690ac9c080327d893f4cf6b6c579184c857bbfcdd7705 namespace=k8s.io
Feb 13 19:06:31.452239 containerd[2049]: time="2025-02-13T19:06:31.451782090Z" level=warning msg="cleaning up after shim disconnected" id=201a17d11714a0a18c9690ac9c080327d893f4cf6b6c579184c857bbfcdd7705 namespace=k8s.io
Feb 13 19:06:31.452239 containerd[2049]: time="2025-02-13T19:06:31.451830198Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:31.494213 sshd[5551]: Accepted publickey for core from 147.75.109.163 port 48848 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:31.496786 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:31.505291 systemd-logind[2015]: New session 29 of user core.
Feb 13 19:06:31.511423 systemd[1]: Started session-29.scope - Session 29 of User core.
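The VerifyControllerAttachedVolume entries above enumerate the cilium-242c7 pod's volumes: hostPath mounts plus secret, configMap and projected sources. A loose Go sketch of how a few of them could be declared with the k8s.io/api/core/v1 types; the host paths and the secret/configMap object names below are assumptions for illustration (typical Cilium defaults), since the log records only the volume names:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func hostPath(name, path string) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: path},
            },
        }
    }

    func boolPtr(b bool) *bool { return &b }

    func main() {
        volumes := []corev1.Volume{
            // host-path volumes; the names are from the log, the paths are assumed
            hostPath("bpf-maps", "/sys/fs/bpf"),
            hostPath("cilium-run", "/var/run/cilium"),
            hostPath("xtables-lock", "/run/xtables.lock"),
            {
                // secret volume; the object name "cilium-clustermesh" is assumed
                Name: "clustermesh-secrets",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName: "cilium-clustermesh",
                        Optional:   boolPtr(true),
                    },
                },
            },
            {
                // configMap volume; the object name "cilium-config" is assumed
                Name: "cilium-config-path",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"},
                    },
                },
            },
        }
        for _, v := range volumes {
            fmt.Println(v.Name)
        }
    }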
Feb 13 19:06:31.805609 containerd[2049]: time="2025-02-13T19:06:31.805359920Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:06:31.835712 containerd[2049]: time="2025-02-13T19:06:31.835501268Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b729cae1d348699d1824e6b29b02ed9133ccd7727d2ca472e1a958c89bbd836\""
Feb 13 19:06:31.838215 containerd[2049]: time="2025-02-13T19:06:31.837319148Z" level=info msg="StartContainer for \"8b729cae1d348699d1824e6b29b02ed9133ccd7727d2ca472e1a958c89bbd836\""
Feb 13 19:06:31.931467 containerd[2049]: time="2025-02-13T19:06:31.931282532Z" level=info msg="StartContainer for \"8b729cae1d348699d1824e6b29b02ed9133ccd7727d2ca472e1a958c89bbd836\" returns successfully"
Feb 13 19:06:31.983195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b729cae1d348699d1824e6b29b02ed9133ccd7727d2ca472e1a958c89bbd836-rootfs.mount: Deactivated successfully.
Feb 13 19:06:31.992829 containerd[2049]: time="2025-02-13T19:06:31.992409872Z" level=info msg="shim disconnected" id=8b729cae1d348699d1824e6b29b02ed9133ccd7727d2ca472e1a958c89bbd836 namespace=k8s.io
Feb 13 19:06:31.992829 containerd[2049]: time="2025-02-13T19:06:31.992515004Z" level=warning msg="cleaning up after shim disconnected" id=8b729cae1d348699d1824e6b29b02ed9133ccd7727d2ca472e1a958c89bbd836 namespace=k8s.io
Feb 13 19:06:31.992829 containerd[2049]: time="2025-02-13T19:06:31.992536904Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:32.199854 kubelet[3681]: E0213 19:06:32.199484 3681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-ppdxg" podUID="af00f5c2-cce8-421e-bd0d-3b3774c78b93"
Feb 13 19:06:32.812378 containerd[2049]: time="2025-02-13T19:06:32.811674189Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:06:32.850688 containerd[2049]: time="2025-02-13T19:06:32.850585161Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b7306aa9e877b71b7efa06cc2f8dd551312cc9a2c022910db2b94febbd16558\""
Feb 13 19:06:32.852413 containerd[2049]: time="2025-02-13T19:06:32.852322473Z" level=info msg="StartContainer for \"2b7306aa9e877b71b7efa06cc2f8dd551312cc9a2c022910db2b94febbd16558\""
Feb 13 19:06:32.958792 containerd[2049]: time="2025-02-13T19:06:32.958129161Z" level=info msg="StartContainer for \"2b7306aa9e877b71b7efa06cc2f8dd551312cc9a2c022910db2b94febbd16558\" returns successfully"
Feb 13 19:06:33.001278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b7306aa9e877b71b7efa06cc2f8dd551312cc9a2c022910db2b94febbd16558-rootfs.mount: Deactivated successfully.
Feb 13 19:06:33.007814 containerd[2049]: time="2025-02-13T19:06:33.007608173Z" level=info msg="shim disconnected" id=2b7306aa9e877b71b7efa06cc2f8dd551312cc9a2c022910db2b94febbd16558 namespace=k8s.io
Feb 13 19:06:33.007814 containerd[2049]: time="2025-02-13T19:06:33.007710665Z" level=warning msg="cleaning up after shim disconnected" id=2b7306aa9e877b71b7efa06cc2f8dd551312cc9a2c022910db2b94febbd16558 namespace=k8s.io
Feb 13 19:06:33.007814 containerd[2049]: time="2025-02-13T19:06:33.007731881Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:33.155196 containerd[2049]: time="2025-02-13T19:06:33.154890606Z" level=info msg="StopPodSandbox for \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\""
Feb 13 19:06:33.155196 containerd[2049]: time="2025-02-13T19:06:33.155049402Z" level=info msg="TearDown network for sandbox \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\" successfully"
Feb 13 19:06:33.155196 containerd[2049]: time="2025-02-13T19:06:33.155076318Z" level=info msg="StopPodSandbox for \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\" returns successfully"
Feb 13 19:06:33.156479 containerd[2049]: time="2025-02-13T19:06:33.156324186Z" level=info msg="RemovePodSandbox for \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\""
Feb 13 19:06:33.156479 containerd[2049]: time="2025-02-13T19:06:33.156400578Z" level=info msg="Forcibly stopping sandbox \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\""
Feb 13 19:06:33.156855 containerd[2049]: time="2025-02-13T19:06:33.156508638Z" level=info msg="TearDown network for sandbox \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\" successfully"
Feb 13 19:06:33.166822 containerd[2049]: time="2025-02-13T19:06:33.166722402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:06:33.167079 containerd[2049]: time="2025-02-13T19:06:33.166841418Z" level=info msg="RemovePodSandbox \"10f4adc69cdfe35d6372e8056a956008d6641c96bf40e27c7979b387c0af04bd\" returns successfully"
Feb 13 19:06:33.168034 containerd[2049]: time="2025-02-13T19:06:33.167765706Z" level=info msg="StopPodSandbox for \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\""
Feb 13 19:06:33.168034 containerd[2049]: time="2025-02-13T19:06:33.167907078Z" level=info msg="TearDown network for sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" successfully"
Feb 13 19:06:33.168034 containerd[2049]: time="2025-02-13T19:06:33.167930922Z" level=info msg="StopPodSandbox for \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" returns successfully"
Feb 13 19:06:33.169470 containerd[2049]: time="2025-02-13T19:06:33.169112154Z" level=info msg="RemovePodSandbox for \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\""
Feb 13 19:06:33.169470 containerd[2049]: time="2025-02-13T19:06:33.169161762Z" level=info msg="Forcibly stopping sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\""
Feb 13 19:06:33.169470 containerd[2049]: time="2025-02-13T19:06:33.169316550Z" level=info msg="TearDown network for sandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" successfully"
Feb 13 19:06:33.176043 containerd[2049]: time="2025-02-13T19:06:33.175934082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:06:33.176043 containerd[2049]: time="2025-02-13T19:06:33.176027082Z" level=info msg="RemovePodSandbox \"d47bee43dad207a9df982b68ac9aba321f5383286bdab8d85c5356e858509a3f\" returns successfully"
Feb 13 19:06:33.483140 kubelet[3681]: E0213 19:06:33.483058 3681 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:06:33.820746 containerd[2049]: time="2025-02-13T19:06:33.820358950Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:06:33.854129 containerd[2049]: time="2025-02-13T19:06:33.853986994Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7542eba118620907d5fa9812ba82325c3ece615a204d5274c942385c2e19a16f\""
Feb 13 19:06:33.855192 containerd[2049]: time="2025-02-13T19:06:33.855121990Z" level=info msg="StartContainer for \"7542eba118620907d5fa9812ba82325c3ece615a204d5274c942385c2e19a16f\""
Feb 13 19:06:33.958416 containerd[2049]: time="2025-02-13T19:06:33.958366138Z" level=info msg="StartContainer for \"7542eba118620907d5fa9812ba82325c3ece615a204d5274c942385c2e19a16f\" returns successfully"
Feb 13 19:06:33.997337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7542eba118620907d5fa9812ba82325c3ece615a204d5274c942385c2e19a16f-rootfs.mount: Deactivated successfully.
Feb 13 19:06:34.005847 containerd[2049]: time="2025-02-13T19:06:34.005672166Z" level=info msg="shim disconnected" id=7542eba118620907d5fa9812ba82325c3ece615a204d5274c942385c2e19a16f namespace=k8s.io
Feb 13 19:06:34.005847 containerd[2049]: time="2025-02-13T19:06:34.005750718Z" level=warning msg="cleaning up after shim disconnected" id=7542eba118620907d5fa9812ba82325c3ece615a204d5274c942385c2e19a16f namespace=k8s.io
Feb 13 19:06:34.005847 containerd[2049]: time="2025-02-13T19:06:34.005773662Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:34.200227 kubelet[3681]: E0213 19:06:34.200161 3681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-ppdxg" podUID="af00f5c2-cce8-421e-bd0d-3b3774c78b93"
Feb 13 19:06:34.828174 containerd[2049]: time="2025-02-13T19:06:34.828114827Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:06:34.864233 containerd[2049]: time="2025-02-13T19:06:34.864053927Z" level=info msg="CreateContainer within sandbox \"56e5cca4e6f48d25ddb6b5e45a231e23ac859a9520e628e62299afd932f71ed4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f68802dcf2ac828d8a7c174261f182dcf6de22657d5fddcea3c5a63c5de597bf\""
Feb 13 19:06:34.865035 containerd[2049]: time="2025-02-13T19:06:34.864983027Z" level=info msg="StartContainer for \"f68802dcf2ac828d8a7c174261f182dcf6de22657d5fddcea3c5a63c5de597bf\""
Feb 13 19:06:34.973911 containerd[2049]: time="2025-02-13T19:06:34.973838579Z" level=info msg="StartContainer for \"f68802dcf2ac828d8a7c174261f182dcf6de22657d5fddcea3c5a63c5de597bf\" returns successfully"
Feb 13 19:06:35.853624 kubelet[3681]: I0213 19:06:35.853496 3681 setters.go:580] "Node became not ready" node="ip-172-31-18-68" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:06:35Z","lastTransitionTime":"2025-02-13T19:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:06:35.947724 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:06:36.200467 kubelet[3681]: E0213 19:06:36.199917 3681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-ppdxg" podUID="af00f5c2-cce8-421e-bd0d-3b3774c78b93"
Feb 13 19:06:38.201924 kubelet[3681]: E0213 19:06:38.200166 3681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-ppdxg" podUID="af00f5c2-cce8-421e-bd0d-3b3774c78b93"
Feb 13 19:06:40.040109 systemd-networkd[1601]: lxc_health: Link UP
Feb 13 19:06:40.049961 (udev-worker)[6325]: Network interface NamePolicy= disabled on kernel command line.
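The "Node became not ready" entry above embeds the node condition verbatim as JSON. A small Go check, with the struct fields taken directly from that payload, showing it round-trips through encoding/json:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // NodeCondition mirrors the fields visible in the logged condition object.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:06:35Z","lastTransitionTime":"2025-02-13T19:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
    }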
Feb 13 19:06:40.051466 systemd-networkd[1601]: lxc_health: Gained carrier
Feb 13 19:06:41.139771 kubelet[3681]: I0213 19:06:41.139076 3681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-242c7" podStartSLOduration=11.139021334 podStartE2EDuration="11.139021334s" podCreationTimestamp="2025-02-13 19:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:06:35.929017044 +0000 UTC m=+122.995774056" watchObservedRunningTime="2025-02-13 19:06:41.139021334 +0000 UTC m=+128.205778322"
Feb 13 19:06:41.306925 systemd-networkd[1601]: lxc_health: Gained IPv6LL
Feb 13 19:06:42.849007 kubelet[3681]: E0213 19:06:42.848564 3681 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47562->127.0.0.1:38325: write tcp 127.0.0.1:47562->127.0.0.1:38325: write: broken pipe
Feb 13 19:06:43.354415 ntpd[1997]: Listen normally on 13 lxc_health [fe80::f41e:b1ff:fe04:f591%14]:123
Feb 13 19:06:43.355124 ntpd[1997]: 13 Feb 19:06:43 ntpd[1997]: Listen normally on 13 lxc_health [fe80::f41e:b1ff:fe04:f591%14]:123
Feb 13 19:06:47.456585 systemd[1]: run-containerd-runc-k8s.io-f68802dcf2ac828d8a7c174261f182dcf6de22657d5fddcea3c5a63c5de597bf-runc.4j3Zk4.mount: Deactivated successfully.
Feb 13 19:06:47.580674 sshd[5619]: Connection closed by 147.75.109.163 port 48848
Feb 13 19:06:47.582180 sshd-session[5551]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:47.589826 systemd[1]: sshd@28-172.31.18.68:22-147.75.109.163:48848.service: Deactivated successfully.
Feb 13 19:06:47.606160 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:06:47.614095 systemd-logind[2015]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:06:47.616854 systemd-logind[2015]: Removed session 29.
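As a closing cross-check, the pod_startup_latency_tracker entry above reports podStartSLOduration=11.139021334 for cilium-242c7, which is exactly watchObservedRunningTime minus podCreationTimestamp. A short Go verification (the trailing "m=+128.205778322" monotonic-clock suffix is dropped before parsing, since time.Parse does not accept it):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the kubelet's printed timestamps above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-02-13 19:06:30 +0000 UTC")
        running, _ := time.Parse(layout, "2025-02-13 19:06:41.139021334 +0000 UTC")
        fmt.Println(running.Sub(created)) // 11.139021334s
    }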