Jan 13 20:08:09.183450 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 20:08:09.183496 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:08:09.183521 kernel: KASLR disabled due to lack of seed
Jan 13 20:08:09.183538 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:08:09.183553 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 
Jan 13 20:08:09.183569 kernel: secureboot: Secure boot disabled
Jan 13 20:08:09.183586 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:08:09.183601 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 20:08:09.183617 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001      01000013)
Jan 13 20:08:09.183633 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:08:09.183653 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 20:08:09.183669 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:08:09.183685 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 20:08:09.183700 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 20:08:09.183719 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 20:08:09.183739 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:08:09.183758 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 20:08:09.183774 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 20:08:09.183791 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 20:08:09.183807 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 20:08:09.183824 kernel: printk: bootconsole [uart0] enabled
Jan 13 20:08:09.183840 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:08:09.183857 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:08:09.183873 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 20:08:09.183890 kernel: Zone ranges:
Jan 13 20:08:09.183906 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:08:09.183926 kernel:   DMA32    empty
Jan 13 20:08:09.183943 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 20:08:09.183960 kernel: Movable zone start for each node
Jan 13 20:08:09.183976 kernel: Early memory node ranges
Jan 13 20:08:09.184018 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 20:08:09.184038 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 20:08:09.184055 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 20:08:09.184073 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 20:08:09.184089 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 20:08:09.184105 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 20:08:09.184122 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 20:08:09.184138 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 20:08:09.184161 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:08:09.184178 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 20:08:09.184201 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:08:09.184219 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 20:08:09.184236 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:08:09.184260 kernel: psci: Trusted OS migration not required
Jan 13 20:08:09.184278 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:08:09.184295 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:08:09.184313 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:08:09.184331 kernel: pcpu-alloc: [0] 0 [0] 1 
Jan 13 20:08:09.184349 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:08:09.184367 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:08:09.184384 kernel: CPU features: detected: Spectre-v2
Jan 13 20:08:09.184401 kernel: CPU features: detected: Spectre-v3a
Jan 13 20:08:09.184418 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:08:09.184435 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 20:08:09.184453 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 20:08:09.184475 kernel: alternatives: applying boot alternatives
Jan 13 20:08:09.184495 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:08:09.184514 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:08:09.184532 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:08:09.184550 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:08:09.184567 kernel: Fallback order for Node 0: 0 
Jan 13 20:08:09.184585 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Jan 13 20:08:09.184602 kernel: Policy zone: Normal
Jan 13 20:08:09.184619 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:08:09.184636 kernel: software IO TLB: area num 2.
Jan 13 20:08:09.184658 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 20:08:09.184676 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Jan 13 20:08:09.184694 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:08:09.184711 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:08:09.184729 kernel: rcu:         RCU event tracing is enabled.
Jan 13 20:08:09.184747 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:08:09.184765 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 13 20:08:09.184783 kernel:         Tracing variant of Tasks RCU enabled.
Jan 13 20:08:09.184801 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:08:09.184819 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:08:09.184836 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:08:09.184858 kernel: GICv3: 96 SPIs implemented
Jan 13 20:08:09.184875 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:08:09.184892 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:08:09.184910 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 20:08:09.184927 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 20:08:09.184944 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 20:08:09.184962 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:08:09.184980 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:08:09.185044 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 20:08:09.185063 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 20:08:09.185080 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 20:08:09.185097 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:08:09.185121 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 20:08:09.185139 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 20:08:09.185157 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 20:08:09.185174 kernel: Console: colour dummy device 80x25
Jan 13 20:08:09.185192 kernel: printk: console [tty1] enabled
Jan 13 20:08:09.185209 kernel: ACPI: Core revision 20230628
Jan 13 20:08:09.185227 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 20:08:09.185245 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:08:09.185263 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:08:09.185280 kernel: landlock: Up and running.
Jan 13 20:08:09.185302 kernel: SELinux:  Initializing.
Jan 13 20:08:09.185319 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:08:09.185337 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:08:09.185355 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:08:09.185373 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:08:09.185390 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:08:09.185408 kernel: rcu:         Max phase no-delay instances is 400.
Jan 13 20:08:09.185426 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 20:08:09.185447 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 20:08:09.185465 kernel: Remapping and enabling EFI services.
Jan 13 20:08:09.185482 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:08:09.185500 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:08:09.185518 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 20:08:09.185536 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 20:08:09.185553 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 20:08:09.185571 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:08:09.185588 kernel: SMP: Total of 2 processors activated.
Jan 13 20:08:09.185606 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:08:09.185628 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 20:08:09.185646 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:08:09.185674 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:08:09.185696 kernel: alternatives: applying system-wide alternatives
Jan 13 20:08:09.185714 kernel: devtmpfs: initialized
Jan 13 20:08:09.185733 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:08:09.185751 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:08:09.185769 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:08:09.185788 kernel: SMBIOS 3.0.0 present.
Jan 13 20:08:09.185810 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 20:08:09.185829 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:08:09.185847 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:08:09.185866 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:08:09.185885 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:08:09.185903 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:08:09.185921 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1
Jan 13 20:08:09.185943 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:08:09.185961 kernel: cpuidle: using governor menu
Jan 13 20:08:09.185980 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:08:09.188083 kernel: ASID allocator initialised with 65536 entries
Jan 13 20:08:09.188104 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:08:09.188123 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:08:09.188141 kernel: Modules: 17440 pages in range for non-PLT usage
Jan 13 20:08:09.188160 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:08:09.188179 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:08:09.188207 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:08:09.188226 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:08:09.188245 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:08:09.188263 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:08:09.188281 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:08:09.188299 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:08:09.188318 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:08:09.188336 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:08:09.188355 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:08:09.188377 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:08:09.188396 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:08:09.188415 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:08:09.188434 kernel: ACPI: Interpreter enabled
Jan 13 20:08:09.188452 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:08:09.188470 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:08:09.188490 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 20:08:09.188800 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:08:09.189048 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:08:09.189249 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:08:09.189444 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 20:08:09.189647 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 20:08:09.189673 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io  0x0000-0xffff window]
Jan 13 20:08:09.189692 kernel: acpiphp: Slot [1] registered
Jan 13 20:08:09.189711 kernel: acpiphp: Slot [2] registered
Jan 13 20:08:09.189729 kernel: acpiphp: Slot [3] registered
Jan 13 20:08:09.189754 kernel: acpiphp: Slot [4] registered
Jan 13 20:08:09.189773 kernel: acpiphp: Slot [5] registered
Jan 13 20:08:09.189791 kernel: acpiphp: Slot [6] registered
Jan 13 20:08:09.189809 kernel: acpiphp: Slot [7] registered
Jan 13 20:08:09.189827 kernel: acpiphp: Slot [8] registered
Jan 13 20:08:09.189845 kernel: acpiphp: Slot [9] registered
Jan 13 20:08:09.189864 kernel: acpiphp: Slot [10] registered
Jan 13 20:08:09.189882 kernel: acpiphp: Slot [11] registered
Jan 13 20:08:09.189900 kernel: acpiphp: Slot [12] registered
Jan 13 20:08:09.189918 kernel: acpiphp: Slot [13] registered
Jan 13 20:08:09.189941 kernel: acpiphp: Slot [14] registered
Jan 13 20:08:09.189959 kernel: acpiphp: Slot [15] registered
Jan 13 20:08:09.189978 kernel: acpiphp: Slot [16] registered
Jan 13 20:08:09.190019 kernel: acpiphp: Slot [17] registered
Jan 13 20:08:09.190039 kernel: acpiphp: Slot [18] registered
Jan 13 20:08:09.190057 kernel: acpiphp: Slot [19] registered
Jan 13 20:08:09.190076 kernel: acpiphp: Slot [20] registered
Jan 13 20:08:09.190094 kernel: acpiphp: Slot [21] registered
Jan 13 20:08:09.190112 kernel: acpiphp: Slot [22] registered
Jan 13 20:08:09.190136 kernel: acpiphp: Slot [23] registered
Jan 13 20:08:09.190155 kernel: acpiphp: Slot [24] registered
Jan 13 20:08:09.190173 kernel: acpiphp: Slot [25] registered
Jan 13 20:08:09.190191 kernel: acpiphp: Slot [26] registered
Jan 13 20:08:09.190209 kernel: acpiphp: Slot [27] registered
Jan 13 20:08:09.190227 kernel: acpiphp: Slot [28] registered
Jan 13 20:08:09.190245 kernel: acpiphp: Slot [29] registered
Jan 13 20:08:09.190264 kernel: acpiphp: Slot [30] registered
Jan 13 20:08:09.190282 kernel: acpiphp: Slot [31] registered
Jan 13 20:08:09.190300 kernel: PCI host bridge to bus 0000:00
Jan 13 20:08:09.190513 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 20:08:09.190700 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Jan 13 20:08:09.190907 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:08:09.191153 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 20:08:09.191389 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 20:08:09.191622 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 20:08:09.195311 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 20:08:09.195661 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:08:09.195884 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 20:08:09.197328 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:08:09.197572 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:08:09.197808 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 20:08:09.198055 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 20:08:09.198268 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 20:08:09.198519 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:08:09.198728 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 20:08:09.198972 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 20:08:09.199232 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 20:08:09.199438 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 20:08:09.199679 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 20:08:09.199888 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 20:08:09.200157 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Jan 13 20:08:09.200337 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:08:09.200362 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:08:09.200381 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:08:09.200400 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:08:09.200419 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:08:09.200437 kernel: iommu: Default domain type: Translated
Jan 13 20:08:09.200463 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:08:09.200481 kernel: efivars: Registered efivars operations
Jan 13 20:08:09.200500 kernel: vgaarb: loaded
Jan 13 20:08:09.200518 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:08:09.200537 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:08:09.200556 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:08:09.200574 kernel: pnp: PnP ACPI init
Jan 13 20:08:09.200876 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 20:08:09.200919 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:08:09.200940 kernel: NET: Registered PF_INET protocol family
Jan 13 20:08:09.200959 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:08:09.200978 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:08:09.202541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:08:09.202569 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:08:09.202588 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:08:09.202607 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:08:09.202626 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:08:09.202653 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:08:09.202672 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:08:09.202690 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:08:09.202709 kernel: kvm [1]: HYP mode not available
Jan 13 20:08:09.202727 kernel: Initialise system trusted keyrings
Jan 13 20:08:09.202746 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:08:09.202764 kernel: Key type asymmetric registered
Jan 13 20:08:09.202804 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:08:09.202823 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:08:09.202847 kernel: io scheduler mq-deadline registered
Jan 13 20:08:09.202866 kernel: io scheduler kyber registered
Jan 13 20:08:09.202884 kernel: io scheduler bfq registered
Jan 13 20:08:09.203157 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 20:08:09.203186 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:08:09.203205 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:08:09.203224 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 20:08:09.203243 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 20:08:09.203267 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:08:09.203287 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:08:09.203490 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 20:08:09.203516 kernel: printk: console [ttyS0] disabled
Jan 13 20:08:09.203535 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 20:08:09.203554 kernel: printk: console [ttyS0] enabled
Jan 13 20:08:09.203573 kernel: printk: bootconsole [uart0] disabled
Jan 13 20:08:09.203591 kernel: thunder_xcv, ver 1.0
Jan 13 20:08:09.203609 kernel: thunder_bgx, ver 1.0
Jan 13 20:08:09.203627 kernel: nicpf, ver 1.0
Jan 13 20:08:09.203651 kernel: nicvf, ver 1.0
Jan 13 20:08:09.203851 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:08:09.205121 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:08:08 UTC (1736798888)
Jan 13 20:08:09.205152 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:08:09.205172 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 20:08:09.205191 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:08:09.205209 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:08:09.205235 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:08:09.205254 kernel: Segment Routing with IPv6
Jan 13 20:08:09.205272 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:08:09.205290 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:08:09.205309 kernel: Key type dns_resolver registered
Jan 13 20:08:09.205327 kernel: registered taskstats version 1
Jan 13 20:08:09.205345 kernel: Loading compiled-in X.509 certificates
Jan 13 20:08:09.205364 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:08:09.205383 kernel: Key type .fscrypt registered
Jan 13 20:08:09.205401 kernel: Key type fscrypt-provisioning registered
Jan 13 20:08:09.205424 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:08:09.205443 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:08:09.205461 kernel: ima: No architecture policies found
Jan 13 20:08:09.205480 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:08:09.205502 kernel: clk: Disabling unused clocks
Jan 13 20:08:09.205522 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:08:09.205541 kernel: Run /init as init process
Jan 13 20:08:09.205559 kernel:   with arguments:
Jan 13 20:08:09.205577 kernel:     /init
Jan 13 20:08:09.205600 kernel:   with environment:
Jan 13 20:08:09.208082 kernel:     HOME=/
Jan 13 20:08:09.208104 kernel:     TERM=linux
Jan 13 20:08:09.208122 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:08:09.208145 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:08:09.208169 systemd[1]: Detected virtualization amazon.
Jan 13 20:08:09.208189 systemd[1]: Detected architecture arm64.
Jan 13 20:08:09.208219 systemd[1]: Running in initrd.
Jan 13 20:08:09.208239 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:08:09.208259 systemd[1]: Hostname set to <localhost>.
Jan 13 20:08:09.208280 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:08:09.208300 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:08:09.208320 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:08:09.208340 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:08:09.208362 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:08:09.208386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:08:09.208407 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:08:09.208427 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:08:09.208450 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:08:09.208471 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:08:09.208491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:08:09.208511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:08:09.208535 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:08:09.208555 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:08:09.208575 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:08:09.208595 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:08:09.208615 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:08:09.208636 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:08:09.208656 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:08:09.208676 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:08:09.208696 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:08:09.208720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:08:09.208740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:08:09.208760 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:08:09.208780 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:08:09.208800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:08:09.208820 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:08:09.208839 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:08:09.208859 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:08:09.208883 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:08:09.208903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:09.208923 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:08:09.208943 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:08:09.208963 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:08:09.210537 systemd-journald[252]: Collecting audit messages is disabled.
Jan 13 20:08:09.210602 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:08:09.210624 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:08:09.210644 kernel: Bridge firewalling registered
Jan 13 20:08:09.210670 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:08:09.210691 systemd-journald[252]: Journal started
Jan 13 20:08:09.210728 systemd-journald[252]: Runtime Journal (/run/log/journal/ec26203cca16352eda024a1ab03b1b88) is 8.0M, max 75.3M, 67.3M free.
Jan 13 20:08:09.163670 systemd-modules-load[253]: Inserted module 'overlay'
Jan 13 20:08:09.200877 systemd-modules-load[253]: Inserted module 'br_netfilter'
Jan 13 20:08:09.226363 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:08:09.230532 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:08:09.236660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:09.239498 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:08:09.257343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:09.263457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:08:09.270236 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:08:09.278649 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:08:09.308936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:08:09.311837 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:08:09.333409 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:08:09.344279 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:09.358295 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:08:09.386035 dracut-cmdline[290]: dracut-dracut-053
Jan 13 20:08:09.392950 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:08:09.419980 systemd-resolved[284]: Positive Trust Anchors:
Jan 13 20:08:09.420048 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:08:09.420109 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:08:09.541025 kernel: SCSI subsystem initialized
Jan 13 20:08:09.549023 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:08:09.561023 kernel: iscsi: registered transport (tcp)
Jan 13 20:08:09.583457 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:08:09.583532 kernel: QLogic iSCSI HBA Driver
Jan 13 20:08:09.657100 kernel: random: crng init done
Jan 13 20:08:09.657255 systemd-resolved[284]: Defaulting to hostname 'linux'.
Jan 13 20:08:09.660666 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:08:09.679040 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:08:09.689038 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:08:09.699264 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:08:09.736199 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:08:09.736273 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:08:09.736313 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:08:09.803046 kernel: raid6: neonx8   gen()  6624 MB/s
Jan 13 20:08:09.820020 kernel: raid6: neonx4   gen()  6449 MB/s
Jan 13 20:08:09.837017 kernel: raid6: neonx2   gen()  5383 MB/s
Jan 13 20:08:09.854021 kernel: raid6: neonx1   gen()  3933 MB/s
Jan 13 20:08:09.871016 kernel: raid6: int64x8  gen()  3798 MB/s
Jan 13 20:08:09.888018 kernel: raid6: int64x4  gen()  3687 MB/s
Jan 13 20:08:09.905017 kernel: raid6: int64x2  gen()  3553 MB/s
Jan 13 20:08:09.922785 kernel: raid6: int64x1  gen()  2752 MB/s
Jan 13 20:08:09.922819 kernel: raid6: using algorithm neonx8 gen() 6624 MB/s
Jan 13 20:08:09.940751 kernel: raid6: .... xor() 4923 MB/s, rmw enabled
Jan 13 20:08:09.940791 kernel: raid6: using neon recovery algorithm
Jan 13 20:08:09.949152 kernel: xor: measuring software checksum speed
Jan 13 20:08:09.949206 kernel:    8regs           : 10968 MB/sec
Jan 13 20:08:09.950224 kernel:    32regs          : 11930 MB/sec
Jan 13 20:08:09.951399 kernel:    arm64_neon      :  9562 MB/sec
Jan 13 20:08:09.951431 kernel: xor: using function: 32regs (11930 MB/sec)
Jan 13 20:08:10.036032 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:08:10.054461 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:08:10.064388 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:08:10.104448 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 13 20:08:10.112842 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:08:10.132809 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:08:10.170108 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Jan 13 20:08:10.229033 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:08:10.240402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:08:10.352261 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:08:10.367476 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:08:10.413288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:08:10.418550 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:08:10.423341 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:08:10.425602 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:08:10.439517 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:08:10.479358 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:08:10.544531 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:08:10.544607 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 13 20:08:10.565532 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:08:10.565818 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:08:10.566091 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:35:54:75:9f:77
Jan 13 20:08:10.570091 (udev-worker)[522]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:08:10.570740 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:08:10.573433 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:10.586338 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:10.588821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:08:10.591150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:10.599157 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:10.616043 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:08:10.618021 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:08:10.618537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:10.630086 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:08:10.641024 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:08:10.641089 kernel: GPT:9289727 != 16777215
Jan 13 20:08:10.641114 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:08:10.645818 kernel: GPT:9289727 != 16777215
Jan 13 20:08:10.645867 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:08:10.645892 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:10.649231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:10.662376 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:10.699710 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:10.756029 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (522)
Jan 13 20:08:10.763036 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (520)
Jan 13 20:08:10.822534 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:08:10.849378 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:08:10.877571 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:08:10.883611 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:08:10.909043 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:08:10.922397 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:08:10.937324 disk-uuid[662]: Primary Header is updated.
Jan 13 20:08:10.937324 disk-uuid[662]: Secondary Entries is updated.
Jan 13 20:08:10.937324 disk-uuid[662]: Secondary Header is updated.
Jan 13 20:08:10.949022 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:11.965031 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:11.966068 disk-uuid[663]: The operation has completed successfully.
Jan 13 20:08:12.149416 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:08:12.149618 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:08:12.188249 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:08:12.196963 sh[923]: Success
Jan 13 20:08:12.216272 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:08:12.322114 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:08:12.340208 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:08:12.344316 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:08:12.391280 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:08:12.391348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:12.391375 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:08:12.392635 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:08:12.393777 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:08:12.486017 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:08:12.511882 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:08:12.515212 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:08:12.530328 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:08:12.538342 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:08:12.568690 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:12.568774 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:12.570286 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:12.577033 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:12.594581 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:08:12.597365 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:12.605834 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:08:12.619914 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:08:12.708581 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:08:12.722235 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:08:12.768955 systemd-networkd[1128]: lo: Link UP
Jan 13 20:08:12.768977 systemd-networkd[1128]: lo: Gained carrier
Jan 13 20:08:12.771490 systemd-networkd[1128]: Enumeration completed
Jan 13 20:08:12.771635 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:08:12.772681 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:12.772688 systemd-networkd[1128]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:08:12.776639 systemd[1]: Reached target network.target - Network.
Jan 13 20:08:12.782918 systemd-networkd[1128]: eth0: Link UP
Jan 13 20:08:12.783202 systemd-networkd[1128]: eth0: Gained carrier
Jan 13 20:08:12.783504 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:12.807071 systemd-networkd[1128]: eth0: DHCPv4 address 172.31.18.148/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:08:12.957855 ignition[1049]: Ignition 2.20.0
Jan 13 20:08:12.957888 ignition[1049]: Stage: fetch-offline
Jan 13 20:08:12.958385 ignition[1049]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:12.958410 ignition[1049]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:12.960974 ignition[1049]: Ignition finished successfully
Jan 13 20:08:12.969051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:08:12.978322 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:08:13.013821 ignition[1137]: Ignition 2.20.0
Jan 13 20:08:13.013848 ignition[1137]: Stage: fetch
Jan 13 20:08:13.015484 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:13.015518 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:13.016689 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:13.029304 ignition[1137]: PUT result: OK
Jan 13 20:08:13.032329 ignition[1137]: parsed url from cmdline: ""
Jan 13 20:08:13.032352 ignition[1137]: no config URL provided
Jan 13 20:08:13.032367 ignition[1137]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:08:13.032394 ignition[1137]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:08:13.032428 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:13.036025 ignition[1137]: PUT result: OK
Jan 13 20:08:13.036119 ignition[1137]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:08:13.038197 ignition[1137]: GET result: OK
Jan 13 20:08:13.038295 ignition[1137]: parsing config with SHA512: bb3ef2c894713ca68511216524ec3c2915f6fbe29468d91613f3afcf4325512dca33a9da1ff1713901b0b0120c2abc7db92253eb92591f74f7e2eb7df207f092
Jan 13 20:08:13.048215 unknown[1137]: fetched base config from "system"
Jan 13 20:08:13.049004 unknown[1137]: fetched base config from "system"
Jan 13 20:08:13.049480 ignition[1137]: fetch: fetch complete
Jan 13 20:08:13.049022 unknown[1137]: fetched user config from "aws"
Jan 13 20:08:13.049491 ignition[1137]: fetch: fetch passed
Jan 13 20:08:13.049571 ignition[1137]: Ignition finished successfully
Jan 13 20:08:13.058885 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:08:13.068343 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:08:13.094981 ignition[1143]: Ignition 2.20.0
Jan 13 20:08:13.095504 ignition[1143]: Stage: kargs
Jan 13 20:08:13.096128 ignition[1143]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:13.096154 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:13.096336 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:13.102727 ignition[1143]: PUT result: OK
Jan 13 20:08:13.108791 ignition[1143]: kargs: kargs passed
Jan 13 20:08:13.108936 ignition[1143]: Ignition finished successfully
Jan 13 20:08:13.112720 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:08:13.123297 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:08:13.158292 ignition[1150]: Ignition 2.20.0
Jan 13 20:08:13.158314 ignition[1150]: Stage: disks
Jan 13 20:08:13.158912 ignition[1150]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:13.158936 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:13.159619 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:13.164289 ignition[1150]: PUT result: OK
Jan 13 20:08:13.172468 ignition[1150]: disks: disks passed
Jan 13 20:08:13.172564 ignition[1150]: Ignition finished successfully
Jan 13 20:08:13.175584 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:08:13.181232 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:08:13.185216 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:08:13.187719 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:08:13.193137 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:08:13.195164 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:08:13.212724 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:08:13.256587 systemd-fsck[1158]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:08:13.261786 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:08:13.272200 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:08:13.367027 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:08:13.369303 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:08:13.372618 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:08:13.388172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:08:13.395208 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:08:13.400265 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:08:13.400365 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:08:13.400418 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:08:13.420124 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1177)
Jan 13 20:08:13.424660 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:13.424725 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:13.424753 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:13.428343 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:08:13.439263 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:08:13.444177 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:13.451367 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:08:13.826719 initrd-setup-root[1201]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:08:13.846787 initrd-setup-root[1208]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:08:13.855467 initrd-setup-root[1215]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:08:13.863796 initrd-setup-root[1222]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:08:14.097279 systemd-networkd[1128]: eth0: Gained IPv6LL
Jan 13 20:08:14.234677 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:08:14.245185 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:08:14.254304 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:08:14.271391 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:08:14.274904 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:14.310458 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:08:14.322650 ignition[1290]: INFO     : Ignition 2.20.0
Jan 13 20:08:14.322650 ignition[1290]: INFO     : Stage: mount
Jan 13 20:08:14.325885 ignition[1290]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:14.325885 ignition[1290]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:14.329966 ignition[1290]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:14.332918 ignition[1290]: INFO     : PUT result: OK
Jan 13 20:08:14.337144 ignition[1290]: INFO     : mount: mount passed
Jan 13 20:08:14.338770 ignition[1290]: INFO     : Ignition finished successfully
Jan 13 20:08:14.342498 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:08:14.355206 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:08:14.380387 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:08:14.402041 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1301)
Jan 13 20:08:14.405464 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:14.405502 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:14.405528 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:14.412036 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:14.415535 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:08:14.455038 ignition[1318]: INFO     : Ignition 2.20.0
Jan 13 20:08:14.455038 ignition[1318]: INFO     : Stage: files
Jan 13 20:08:14.458470 ignition[1318]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:14.458470 ignition[1318]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:14.462596 ignition[1318]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:14.465605 ignition[1318]: INFO     : PUT result: OK
Jan 13 20:08:14.469853 ignition[1318]: DEBUG    : files: compiled without relabeling support, skipping
Jan 13 20:08:14.472314 ignition[1318]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 13 20:08:14.472314 ignition[1318]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:08:14.478932 ignition[1318]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:08:14.481731 ignition[1318]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 13 20:08:14.484752 unknown[1318]: wrote ssh authorized keys file for user: core
Jan 13 20:08:14.487134 ignition[1318]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:08:14.501436 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Jan 13 20:08:14.501436 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:08:14.501436 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:08:14.501436 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:08:14.501436 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:08:14.501436 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:08:14.501436 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:08:14.501436 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 13 20:08:17.087764 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 20:08:17.503001 ignition[1318]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:08:17.507074 ignition[1318]: INFO     : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:08:17.510512 ignition[1318]: INFO     : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:08:17.510512 ignition[1318]: INFO     : files: files passed
Jan 13 20:08:17.510512 ignition[1318]: INFO     : Ignition finished successfully
Jan 13 20:08:17.518857 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:08:17.533303 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:08:17.544302 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:08:17.550110 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:08:17.550299 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:08:17.577344 initrd-setup-root-after-ignition[1346]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:08:17.577344 initrd-setup-root-after-ignition[1346]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:08:17.585689 initrd-setup-root-after-ignition[1350]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:08:17.591256 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:08:17.591875 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:08:17.607324 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:08:17.661824 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:08:17.662073 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:08:17.667718 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:08:17.669882 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:08:17.674003 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:08:17.690378 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:08:17.718029 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:08:17.728302 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:08:17.761503 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:08:17.766154 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:08:17.770901 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:08:17.773263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:08:17.773540 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:08:17.782056 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:08:17.784611 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:08:17.789878 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:08:17.792096 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:08:17.794782 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:08:17.802874 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:08:17.805130 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:08:17.812084 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:08:17.814636 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:08:17.818331 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:08:17.823356 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:08:17.823594 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:08:17.829782 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:08:17.832050 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:08:17.834448 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:08:17.834723 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:08:17.837454 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:08:17.837692 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:08:17.851834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:08:17.852267 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:08:17.858956 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:08:17.859186 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:08:17.870421 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:08:17.883345 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:08:17.885160 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:08:17.887090 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:08:17.903334 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:08:17.903566 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:08:17.936556 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:08:17.939415 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:08:17.947257 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:08:17.949931 ignition[1370]: INFO     : Ignition 2.20.0
Jan 13 20:08:17.949931 ignition[1370]: INFO     : Stage: umount
Jan 13 20:08:17.953314 ignition[1370]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:17.953314 ignition[1370]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:17.953314 ignition[1370]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:17.960089 ignition[1370]: INFO     : PUT result: OK
Jan 13 20:08:17.964169 ignition[1370]: INFO     : umount: umount passed
Jan 13 20:08:17.966498 ignition[1370]: INFO     : Ignition finished successfully
Jan 13 20:08:17.969928 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:08:17.971790 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:08:17.976369 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:08:17.978280 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:08:17.982107 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:08:17.982227 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:08:17.987548 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:08:17.987668 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:08:17.991239 systemd[1]: Stopped target network.target - Network.
Jan 13 20:08:17.997836 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:08:17.997957 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:08:18.003882 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:08:18.012540 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:08:18.014659 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:08:18.017000 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:08:18.022711 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:08:18.025152 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:08:18.025239 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:08:18.028075 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:08:18.028150 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:08:18.039321 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:08:18.039426 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:08:18.041330 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:08:18.041412 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:08:18.044767 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:08:18.045234 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:08:18.069084 systemd-networkd[1128]: eth0: DHCPv6 lease lost
Jan 13 20:08:18.071979 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:08:18.076533 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:08:18.086917 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:08:18.088928 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:08:18.096403 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:08:18.096513 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:08:18.109227 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:08:18.112241 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:08:18.112357 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:08:18.115067 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:08:18.115146 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:08:18.117560 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:08:18.117644 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:08:18.120265 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:08:18.120345 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:08:18.123312 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:08:18.127201 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:08:18.127393 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:08:18.155470 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:08:18.155739 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:08:18.159504 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:08:18.159633 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:08:18.162797 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:08:18.162877 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:08:18.173582 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:08:18.173683 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:08:18.185015 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:08:18.185122 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:08:18.187273 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:08:18.187358 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:18.189920 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:08:18.190019 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:08:18.201052 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:08:18.207329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:08:18.207456 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:08:18.208119 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:08:18.208197 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:08:18.208803 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:08:18.208876 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:08:18.215875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:08:18.215963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:18.257490 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:08:18.260107 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:08:18.271838 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:08:18.272268 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:08:18.278874 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:08:18.289266 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:08:18.306332 systemd[1]: Switching root.
Jan 13 20:08:18.367457 systemd-journald[252]: Journal stopped
Jan 13 20:08:20.717912 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:08:20.718070 kernel: SELinux:  policy capability network_peer_controls=1
Jan 13 20:08:20.718129 kernel: SELinux:  policy capability open_perms=1
Jan 13 20:08:20.718168 kernel: SELinux:  policy capability extended_socket_class=1
Jan 13 20:08:20.718198 kernel: SELinux:  policy capability always_check_network=0
Jan 13 20:08:20.718228 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 13 20:08:20.718263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 13 20:08:20.718292 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 13 20:08:20.718320 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 13 20:08:20.718347 kernel: audit: type=1403 audit(1736798898.830:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:08:20.718388 systemd[1]: Successfully loaded SELinux policy in 47.725ms.
Jan 13 20:08:20.718433 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.124ms.
Jan 13 20:08:20.718467 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:08:20.718498 systemd[1]: Detected virtualization amazon.
Jan 13 20:08:20.718529 systemd[1]: Detected architecture arm64.
Jan 13 20:08:20.718563 systemd[1]: Detected first boot.
Jan 13 20:08:20.718594 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:08:20.718629 zram_generator::config[1412]: No configuration found.
Jan 13 20:08:20.718665 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:08:20.718701 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:08:20.718762 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:08:20.718795 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:08:20.718828 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:08:20.718858 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:08:20.718891 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:08:20.718923 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:08:20.718956 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:08:20.723066 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:08:20.723133 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:08:20.723167 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:08:20.723199 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:08:20.723231 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:08:20.723262 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:08:20.723291 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:08:20.723321 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:08:20.723350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:08:20.723382 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:08:20.723415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:08:20.723444 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:08:20.723474 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:08:20.723506 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:08:20.723534 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:08:20.723565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:08:20.723596 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:08:20.723626 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:08:20.723659 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:08:20.723687 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:08:20.723716 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:08:20.723746 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:08:20.723775 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:08:20.723805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:08:20.723836 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:08:20.723866 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:08:20.723896 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:08:20.723929 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:08:20.723959 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:08:20.724006 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:08:20.726160 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:08:20.726208 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:08:20.726245 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:08:20.726276 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:08:20.726306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:08:20.726343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:08:20.726374 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:08:20.726402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:08:20.726433 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:08:20.726465 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:08:20.726495 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:08:20.726528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:08:20.726557 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:08:20.726586 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:08:20.726619 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:08:20.726648 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:08:20.726677 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:08:20.726707 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:08:20.726754 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:08:20.726790 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:08:20.726821 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:08:20.726851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:08:20.726886 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:08:20.726921 systemd[1]: Stopped verity-setup.service.
Jan 13 20:08:20.726950 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:08:20.726979 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:08:20.728101 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:08:20.728135 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:08:20.728165 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:08:20.728203 kernel: loop: module loaded
Jan 13 20:08:20.728232 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:08:20.728261 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:08:20.728339 systemd-journald[1490]: Collecting audit messages is disabled.
Jan 13 20:08:20.728392 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:08:20.728425 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:08:20.728460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:08:20.728490 systemd-journald[1490]: Journal started
Jan 13 20:08:20.728539 systemd-journald[1490]: Runtime Journal (/run/log/journal/ec26203cca16352eda024a1ab03b1b88) is 8.0M, max 75.3M, 67.3M free.
Jan 13 20:08:20.736328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:08:20.135980 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:08:20.235760 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 20:08:20.236621 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:08:20.744015 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:08:20.745700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:08:20.747696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:08:20.750691 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:08:20.753169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:08:20.756109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:08:20.758825 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:08:20.767034 kernel: ACPI: bus type drm_connector registered
Jan 13 20:08:20.767137 kernel: fuse: init (API version 7.39)
Jan 13 20:08:20.773319 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:08:20.773651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:08:20.782798 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:08:20.788604 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:08:20.789050 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:08:20.817447 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:08:20.830655 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:08:20.845292 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:08:20.847567 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:08:20.847626 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:08:20.853973 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:08:20.865351 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:08:20.880263 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:08:20.883444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:08:20.896221 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:08:20.905537 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:08:20.907869 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:08:20.911830 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:08:20.914203 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:08:20.918353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:08:20.929308 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:08:20.936239 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:08:20.945772 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:08:20.949554 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:08:20.953316 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:08:20.956068 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:08:20.986818 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:08:20.989471 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:08:21.007501 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:08:21.030312 systemd-journald[1490]: Time spent on flushing to /var/log/journal/ec26203cca16352eda024a1ab03b1b88 is 146.906ms for 894 entries.
Jan 13 20:08:21.030312 systemd-journald[1490]: System Journal (/var/log/journal/ec26203cca16352eda024a1ab03b1b88) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:08:21.207539 systemd-journald[1490]: Received client request to flush runtime journal.
Jan 13 20:08:21.207666 kernel: loop0: detected capacity change from 0 to 116808
Jan 13 20:08:21.207704 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:08:21.116926 systemd-tmpfiles[1541]: ACLs are not supported, ignoring.
Jan 13 20:08:21.116951 systemd-tmpfiles[1541]: ACLs are not supported, ignoring.
Jan 13 20:08:21.124269 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:08:21.145350 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:08:21.149358 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:08:21.152511 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:08:21.173434 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:08:21.187135 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:08:21.198437 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:08:21.215089 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:08:21.236869 udevadm[1557]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:08:21.239035 kernel: loop1: detected capacity change from 0 to 113536
Jan 13 20:08:21.316355 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:08:21.329415 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:08:21.375940 systemd-tmpfiles[1564]: ACLs are not supported, ignoring.
Jan 13 20:08:21.375978 systemd-tmpfiles[1564]: ACLs are not supported, ignoring.
Jan 13 20:08:21.379035 kernel: loop2: detected capacity change from 0 to 53784
Jan 13 20:08:21.386879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:08:21.499026 kernel: loop3: detected capacity change from 0 to 194096
Jan 13 20:08:21.602033 kernel: loop4: detected capacity change from 0 to 116808
Jan 13 20:08:21.619026 kernel: loop5: detected capacity change from 0 to 113536
Jan 13 20:08:21.636025 kernel: loop6: detected capacity change from 0 to 53784
Jan 13 20:08:21.645038 kernel: loop7: detected capacity change from 0 to 194096
Jan 13 20:08:21.674530 (sd-merge)[1570]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 20:08:21.675566 (sd-merge)[1570]: Merged extensions into '/usr'.
Jan 13 20:08:21.687755 systemd[1]: Reloading requested from client PID 1540 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:08:21.687788 systemd[1]: Reloading...
Jan 13 20:08:21.876336 zram_generator::config[1599]: No configuration found.
Jan 13 20:08:22.207705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:08:22.314103 systemd[1]: Reloading finished in 625 ms.
Jan 13 20:08:22.356407 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:08:22.359315 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:08:22.374328 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:08:22.385669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:08:22.392356 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:08:22.418037 systemd[1]: Reloading requested from client PID 1648 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:08:22.418089 systemd[1]: Reloading...
Jan 13 20:08:22.433611 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:08:22.434369 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:08:22.437640 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:08:22.438396 systemd-tmpfiles[1649]: ACLs are not supported, ignoring.
Jan 13 20:08:22.438721 systemd-tmpfiles[1649]: ACLs are not supported, ignoring.
Jan 13 20:08:22.456798 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:08:22.456818 systemd-tmpfiles[1649]: Skipping /boot
Jan 13 20:08:22.485230 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:08:22.486180 systemd-tmpfiles[1649]: Skipping /boot
Jan 13 20:08:22.525753 systemd-udevd[1650]: Using default interface naming scheme 'v255'.
Jan 13 20:08:22.640024 zram_generator::config[1683]: No configuration found.
Jan 13 20:08:22.816084 (udev-worker)[1722]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:08:22.847038 ldconfig[1535]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:08:22.997066 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1694)
Jan 13 20:08:23.006771 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:08:23.166385 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:08:23.167521 systemd[1]: Reloading finished in 748 ms.
Jan 13 20:08:23.198364 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:08:23.201700 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:08:23.214810 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:08:23.282372 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:08:23.295453 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:08:23.297977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:08:23.303508 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:08:23.311500 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:08:23.320644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:08:23.324375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:08:23.328634 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:08:23.338077 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:08:23.347615 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:08:23.359229 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:08:23.383446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:08:23.383878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:08:23.390384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:08:23.425789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:08:23.427980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:08:23.428369 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:08:23.448298 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:08:23.456860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:23.460145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:08:23.460463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:08:23.476099 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:08:23.516699 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:08:23.532917 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:08:23.537941 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:08:23.538822 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:08:23.543767 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:08:23.552137 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:08:23.563616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:08:23.563980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:08:23.570829 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:08:23.573157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:08:23.589392 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:08:23.596353 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:08:23.598757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:08:23.598884 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:08:23.598930 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:08:23.624467 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:08:23.638622 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:08:23.667014 lvm[1881]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:08:23.688054 augenrules[1891]: No rules
Jan 13 20:08:23.689729 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:08:23.691668 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:08:23.700327 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:08:23.708708 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:08:23.725261 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:08:23.754119 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:08:23.759084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:08:23.770312 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:08:23.806631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:23.812324 lvm[1908]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:08:23.864897 systemd-resolved[1855]: Positive Trust Anchors:
Jan 13 20:08:23.864952 systemd-resolved[1855]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:08:23.865048 systemd-resolved[1855]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:08:23.870215 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:08:23.877668 systemd-resolved[1855]: Defaulting to hostname 'linux'.
Jan 13 20:08:23.882222 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:08:23.884546 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:08:23.886879 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:08:23.889155 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:08:23.891615 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:08:23.894378 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:08:23.897392 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:08:23.900034 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:08:23.902602 systemd-networkd[1852]: lo: Link UP
Jan 13 20:08:23.902615 systemd-networkd[1852]: lo: Gained carrier
Jan 13 20:08:23.903129 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:08:23.903180 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:08:23.904998 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:08:23.907475 systemd-networkd[1852]: Enumeration completed
Jan 13 20:08:23.908233 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:08:23.912943 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:08:23.915688 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:23.915703 systemd-networkd[1852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:08:23.921190 systemd-networkd[1852]: eth0: Link UP
Jan 13 20:08:23.922614 systemd-networkd[1852]: eth0: Gained carrier
Jan 13 20:08:23.922655 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:23.923772 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:08:23.927252 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:08:23.930171 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:08:23.932639 systemd[1]: Reached target network.target - Network.
Jan 13 20:08:23.934457 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:08:23.936306 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:08:23.937108 systemd-networkd[1852]: eth0: DHCPv4 address 172.31.18.148/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:08:23.940332 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:08:23.940492 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:08:23.947235 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:08:23.962506 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:08:23.968833 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:08:23.974161 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:08:23.991742 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:08:23.994092 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:08:24.000273 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:08:24.005965 systemd[1]: Started ntpd.service - Network Time Service.
Jan 13 20:08:24.016264 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 13 20:08:24.026351 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:08:24.032354 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:08:24.063316 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:08:24.070320 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:08:24.075360 jq[1918]: false
Jan 13 20:08:24.073406 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:08:24.074255 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:08:24.075872 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:08:24.081371 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:08:24.087702 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:08:24.088075 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:08:24.097478 extend-filesystems[1919]: Found loop4
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found loop5
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found loop6
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found loop7
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found nvme0n1
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found nvme0n1p1
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found nvme0n1p2
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found nvme0n1p3
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found usr
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found nvme0n1p4
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found nvme0n1p6
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found nvme0n1p7
Jan 13 20:08:24.102502 extend-filesystems[1919]: Found nvme0n1p9
Jan 13 20:08:24.102502 extend-filesystems[1919]: Checking size of /dev/nvme0n1p9
Jan 13 20:08:24.139563 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:08:24.192679 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:08:24.195137 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:08:24.206224 extend-filesystems[1919]: Resized partition /dev/nvme0n1p9
Jan 13 20:08:24.211033 jq[1931]: true
Jan 13 20:08:24.229681 extend-filesystems[1958]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:08:24.230400 ntpd[1921]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting
Jan 13 20:08:24.233684 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting
Jan 13 20:08:24.233684 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:08:24.233684 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: ----------------------------------------------------
Jan 13 20:08:24.233684 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:08:24.233684 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:08:24.233684 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: corporation.  Support and training for ntp-4 are
Jan 13 20:08:24.233684 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: available at https://www.nwtime.org/support
Jan 13 20:08:24.233684 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: ----------------------------------------------------
Jan 13 20:08:24.230448 ntpd[1921]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:08:24.235210 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:08:24.230467 ntpd[1921]: ----------------------------------------------------
Jan 13 20:08:24.230486 ntpd[1921]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:08:24.242763 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:08:24.230505 ntpd[1921]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:08:24.242818 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:08:24.230523 ntpd[1921]: corporation.  Support and training for ntp-4 are
Jan 13 20:08:24.248307 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:08:24.230541 ntpd[1921]: available at https://www.nwtime.org/support
Jan 13 20:08:24.248349 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:08:24.230559 ntpd[1921]: ----------------------------------------------------
Jan 13 20:08:24.234904 dbus-daemon[1917]: [system] SELinux support is enabled
Jan 13 20:08:24.261118 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 13 20:08:24.263691 ntpd[1921]: proto: precision = 0.096 usec (-23)
Jan 13 20:08:24.264979 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: proto: precision = 0.096 usec (-23)
Jan 13 20:08:24.270105 ntpd[1921]: basedate set to 2025-01-01
Jan 13 20:08:24.270152 ntpd[1921]: gps base set to 2025-01-05 (week 2348)
Jan 13 20:08:24.270334 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: basedate set to 2025-01-01
Jan 13 20:08:24.270334 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: gps base set to 2025-01-05 (week 2348)
Jan 13 20:08:24.278474 dbus-daemon[1917]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1852 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 20:08:24.283021 ntpd[1921]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 20:08:24.286244 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 20:08:24.286244 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 20:08:24.283120 ntpd[1921]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 20:08:24.288075 ntpd[1921]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 20:08:24.290061 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 20:08:24.290061 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: Listen normally on 3 eth0 172.31.18.148:123
Jan 13 20:08:24.290061 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: Listen normally on 4 lo [::1]:123
Jan 13 20:08:24.290061 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: bind(21) AF_INET6 fe80::435:54ff:fe75:9f77%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:08:24.290061 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: unable to create socket on eth0 (5) for fe80::435:54ff:fe75:9f77%2#123
Jan 13 20:08:24.290061 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: failed to init interface for address fe80::435:54ff:fe75:9f77%2
Jan 13 20:08:24.290061 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: Listening on routing socket on fd #21 for interface updates
Jan 13 20:08:24.288170 ntpd[1921]: Listen normally on 3 eth0 172.31.18.148:123
Jan 13 20:08:24.288237 ntpd[1921]: Listen normally on 4 lo [::1]:123
Jan 13 20:08:24.288319 ntpd[1921]: bind(21) AF_INET6 fe80::435:54ff:fe75:9f77%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:08:24.288359 ntpd[1921]: unable to create socket on eth0 (5) for fe80::435:54ff:fe75:9f77%2#123
Jan 13 20:08:24.288389 ntpd[1921]: failed to init interface for address fe80::435:54ff:fe75:9f77%2
Jan 13 20:08:24.288449 ntpd[1921]: Listening on routing socket on fd #21 for interface updates
Jan 13 20:08:24.295324 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 20:08:24.304570 (ntainerd)[1957]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:08:24.315795 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:08:24.318193 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:08:24.318193 ntpd[1921]: 13 Jan 20:08:24 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:08:24.315868 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:08:24.326001 update_engine[1930]: I20250113 20:08:24.319703  1930 main.cc:92] Flatcar Update Engine starting
Jan 13 20:08:24.330746 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:08:24.343134 update_engine[1930]: I20250113 20:08:24.342761  1930 update_check_scheduler.cc:74] Next update check in 3m43s
Jan 13 20:08:24.346088 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:08:24.349612 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:08:24.351132 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:08:24.357232 jq[1951]: true
Jan 13 20:08:24.373292 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 13 20:08:24.395019 extend-filesystems[1958]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 13 20:08:24.395019 extend-filesystems[1958]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:08:24.395019 extend-filesystems[1958]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 13 20:08:24.401162 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:08:24.419445 extend-filesystems[1919]: Resized filesystem in /dev/nvme0n1p9
Jan 13 20:08:24.401524 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:08:24.422210 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 13 20:08:24.505040 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1702)
Jan 13 20:08:24.603893 bash[2009]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:08:24.624073 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:08:24.628758 systemd-logind[1926]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:08:24.628812 systemd-logind[1926]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jan 13 20:08:24.633426 systemd-logind[1926]: New seat seat0.
Jan 13 20:08:24.648741 coreos-metadata[1916]: Jan 13 20:08:24.646 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 20:08:24.648741 coreos-metadata[1916]: Jan 13 20:08:24.648 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.649 INFO Fetch successful
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.649 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.662 INFO Fetch successful
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.662 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.668 INFO Fetch successful
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.668 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.671 INFO Fetch successful
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.671 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.674 INFO Fetch failed with 404: resource not found
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.674 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.677 INFO Fetch successful
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.677 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.678 INFO Fetch successful
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.678 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.679 INFO Fetch successful
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.679 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.679 INFO Fetch successful
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.679 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 13 20:08:24.740219 coreos-metadata[1916]: Jan 13 20:08:24.680 INFO Fetch successful
Jan 13 20:08:24.722067 systemd[1]: Starting sshkeys.service...
Jan 13 20:08:24.723742 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:08:24.776245 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:08:24.784659 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:08:24.804262 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 20:08:24.809067 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:08:24.829768 dbus-daemon[1917]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 20:08:24.830074 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 20:08:24.834244 dbus-daemon[1917]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1965 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 20:08:24.843694 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 20:08:24.942026 containerd[1957]: time="2025-01-13T20:08:24.941051305Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:08:24.958752 polkitd[2066]: Started polkitd version 121
Jan 13 20:08:24.980576 polkitd[2066]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 20:08:24.980707 polkitd[2066]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 20:08:24.989043 polkitd[2066]: Finished loading, compiling and executing 2 rules
Jan 13 20:08:25.000161 dbus-daemon[1917]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 20:08:25.005915 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 20:08:25.008726 polkitd[2066]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
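polkitd reports loading and compiling 2 rules from the two directories named above. As a rough aid for checking what it will evaluate, the sketch below lists the candidate `.rules` files; the collation of both directories into one set sorted by filename is an assumption based on polkit's documented behaviour, so verify against polkit(8) on the target system.

```python
# Rough sketch: enumerate the *.rules files polkitd would consider, in the
# order it is documented to read them (assumption: lexical order by filename,
# files from both directories collated together).
from pathlib import Path

RULE_DIRS = [Path("/etc/polkit-1/rules.d"), Path("/usr/share/polkit-1/rules.d")]

def rule_files() -> list[Path]:
    files = []
    for directory in RULE_DIRS:
        if directory.is_dir():
            files.extend(p for p in directory.iterdir() if p.suffix == ".rules")
    return sorted(files, key=lambda p: p.name)

if __name__ == "__main__":
    for path in rule_files():
        print(path)
```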
Jan 13 20:08:25.048486 locksmithd[1969]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:08:25.089533 systemd-resolved[1855]: System hostname changed to 'ip-172-31-18-148'.
Jan 13 20:08:25.089680 systemd-hostnamed[1965]: Hostname set to <ip-172-31-18-148> (transient)
Jan 13 20:08:25.093164 coreos-metadata[2056]: Jan 13 20:08:25.092 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 20:08:25.095296 coreos-metadata[2056]: Jan 13 20:08:25.095 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 13 20:08:25.097142 coreos-metadata[2056]: Jan 13 20:08:25.097 INFO Fetch successful
Jan 13 20:08:25.097142 coreos-metadata[2056]: Jan 13 20:08:25.097 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 20:08:25.098450 coreos-metadata[2056]: Jan 13 20:08:25.098 INFO Fetch successful
Jan 13 20:08:25.104174 unknown[2056]: wrote ssh authorized keys file for user: core
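The sshkeys instance of coreos-metadata ([2056]) fetches `public-keys/0/openssh-key` and a helper writes it into the `core` user's authorized_keys (the update-ssh-keys line further down confirms the path `/home/core/.ssh/authorized_keys`). The sketch below illustrates only that final write step under the usual sshd permission rules; the function name, the placeholder key and running it as a standalone script are illustrative assumptions, not the agent's implementation.

```python
# Sketch of the final step logged above: writing fetched OpenSSH public keys
# into a user's authorized_keys with permissions sshd will accept.
from pathlib import Path

def write_authorized_keys(home: Path, keys: list[str]) -> None:
    ssh_dir = home / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth = ssh_dir / "authorized_keys"
    auth.write_text("".join(k.rstrip() + "\n" for k in keys))
    auth.chmod(0o600)  # sshd refuses group/world-writable key files
    # When run as root (as the agent is), ownership would also need to be
    # handed to the target user, e.g. via os.chown with that user's uid/gid.

if __name__ == "__main__":
    placeholder = "ssh-ed25519 AAAA... core@example"  # stand-in, not a real key
    write_authorized_keys(Path("/home/core"), [placeholder])
```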
Jan 13 20:08:25.151782 containerd[1957]: time="2025-01-13T20:08:25.151717654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:08:25.158137 containerd[1957]: time="2025-01-13T20:08:25.158057662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:08:25.158137 containerd[1957]: time="2025-01-13T20:08:25.158127610Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:08:25.158285 containerd[1957]: time="2025-01-13T20:08:25.158176462Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:08:25.158537 containerd[1957]: time="2025-01-13T20:08:25.158496322Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:08:25.158690 containerd[1957]: time="2025-01-13T20:08:25.158539762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:08:25.158690 containerd[1957]: time="2025-01-13T20:08:25.158673274Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:08:25.158797 containerd[1957]: time="2025-01-13T20:08:25.158700850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:08:25.159117 containerd[1957]: time="2025-01-13T20:08:25.159058834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:08:25.159117 containerd[1957]: time="2025-01-13T20:08:25.159103582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:08:25.159230 containerd[1957]: time="2025-01-13T20:08:25.159136990Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:08:25.159230 containerd[1957]: time="2025-01-13T20:08:25.159161662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:08:25.162062 containerd[1957]: time="2025-01-13T20:08:25.161022166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:08:25.162062 containerd[1957]: time="2025-01-13T20:08:25.161471830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:08:25.162062 containerd[1957]: time="2025-01-13T20:08:25.161713834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:08:25.162062 containerd[1957]: time="2025-01-13T20:08:25.161746894Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:08:25.162062 containerd[1957]: time="2025-01-13T20:08:25.161918962Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:08:25.162062 containerd[1957]: time="2025-01-13T20:08:25.162053962Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:08:25.170909 containerd[1957]: time="2025-01-13T20:08:25.170640082Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:08:25.170909 containerd[1957]: time="2025-01-13T20:08:25.170759494Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:08:25.170909 containerd[1957]: time="2025-01-13T20:08:25.170795854Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:08:25.170909 containerd[1957]: time="2025-01-13T20:08:25.170832538Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:08:25.170909 containerd[1957]: time="2025-01-13T20:08:25.170865286Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.171173854Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.171639466Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.171865174Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.171898342Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.171933658Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.171968434Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.172037770Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.172070290Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.172102234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.172136962Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.172167334Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.172197106Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.172224946Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:08:25.172376 containerd[1957]: time="2025-01-13T20:08:25.172268038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172301038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172333846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172365142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172396018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172425550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172453246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172499866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172531882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172565458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172592722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172622074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172651222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172685398Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172729738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.172910 containerd[1957]: time="2025-01-13T20:08:25.172769878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.173565 containerd[1957]: time="2025-01-13T20:08:25.172797358Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:08:25.173565 containerd[1957]: time="2025-01-13T20:08:25.172934122Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:08:25.173565 containerd[1957]: time="2025-01-13T20:08:25.172972858Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:08:25.173565 containerd[1957]: time="2025-01-13T20:08:25.173023054Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:08:25.173565 containerd[1957]: time="2025-01-13T20:08:25.173056966Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:08:25.173565 containerd[1957]: time="2025-01-13T20:08:25.173080306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.173565 containerd[1957]: time="2025-01-13T20:08:25.173110558Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:08:25.173565 containerd[1957]: time="2025-01-13T20:08:25.173136706Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:08:25.173565 containerd[1957]: time="2025-01-13T20:08:25.173161522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:08:25.173927 containerd[1957]: time="2025-01-13T20:08:25.173685190Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:08:25.173927 containerd[1957]: time="2025-01-13T20:08:25.173769526Z" level=info msg="Connect containerd service"
Jan 13 20:08:25.173927 containerd[1957]: time="2025-01-13T20:08:25.173841982Z" level=info msg="using legacy CRI server"
Jan 13 20:08:25.173927 containerd[1957]: time="2025-01-13T20:08:25.173859838Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:08:25.180152 containerd[1957]: time="2025-01-13T20:08:25.174265882Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:08:25.180152 containerd[1957]: time="2025-01-13T20:08:25.176120326Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:08:25.180152 containerd[1957]: time="2025-01-13T20:08:25.176458414Z" level=info msg="Start subscribing containerd event"
Jan 13 20:08:25.180152 containerd[1957]: time="2025-01-13T20:08:25.176541886Z" level=info msg="Start recovering state"
Jan 13 20:08:25.180152 containerd[1957]: time="2025-01-13T20:08:25.176665786Z" level=info msg="Start event monitor"
Jan 13 20:08:25.180152 containerd[1957]: time="2025-01-13T20:08:25.176688502Z" level=info msg="Start snapshots syncer"
Jan 13 20:08:25.180152 containerd[1957]: time="2025-01-13T20:08:25.176714170Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:08:25.180152 containerd[1957]: time="2025-01-13T20:08:25.176750110Z" level=info msg="Start streaming server"
Jan 13 20:08:25.182773 update-ssh-keys[2108]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:08:25.180173 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:08:25.187095 systemd[1]: Finished sshkeys.service.
Jan 13 20:08:25.192036 containerd[1957]: time="2025-01-13T20:08:25.191390782Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:08:25.195178 containerd[1957]: time="2025-01-13T20:08:25.195124642Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:08:25.198151 containerd[1957]: time="2025-01-13T20:08:25.198101770Z" level=info msg="containerd successfully booted in 0.260831s"
Jan 13 20:08:25.201199 systemd[1]: Started containerd.service - containerd container runtime.
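The CRI plugin's "failed to load cni during init" message above is expected at this stage: /etc/cni/net.d is still empty, and the conf syncer that just started keeps watching it until a CNI plugin (Cilium, going by the cilium-wnxgd pod admitted later in this log) drops a config there. The sketch below performs the same directory check; the accepted extensions (.conf/.conflist/.json) are an assumption based on libcni's default loader.

```python
# Sketch of the check behind "no network config found in /etc/cni/net.d":
# list candidate CNI config files the conf syncer would pick up.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")

def cni_configs() -> list[Path]:
    if not CNI_CONF_DIR.is_dir():
        return []
    return sorted(p for p in CNI_CONF_DIR.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    configs = cni_configs()
    if configs:
        print("CNI config present, lowest-sorted file wins:", configs[0])
    else:
        print("cni plugin not initialized: no config in", CNI_CONF_DIR)
```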
Jan 13 20:08:25.252321 ntpd[1921]: bind(24) AF_INET6 fe80::435:54ff:fe75:9f77%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:08:25.253051 ntpd[1921]: 13 Jan 20:08:25 ntpd[1921]: bind(24) AF_INET6 fe80::435:54ff:fe75:9f77%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:08:25.253051 ntpd[1921]: 13 Jan 20:08:25 ntpd[1921]: unable to create socket on eth0 (6) for fe80::435:54ff:fe75:9f77%2#123
Jan 13 20:08:25.253051 ntpd[1921]: 13 Jan 20:08:25 ntpd[1921]: failed to init interface for address fe80::435:54ff:fe75:9f77%2
Jan 13 20:08:25.252386 ntpd[1921]: unable to create socket on eth0 (6) for fe80::435:54ff:fe75:9f77%2#123
Jan 13 20:08:25.252414 ntpd[1921]: failed to init interface for address fe80::435:54ff:fe75:9f77%2
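ntpd's bind() to the link-local address fails here because eth0 only gains a usable IPv6LL address about 300 ms later (the systemd-networkd "Gained IPv6LL" line just below); ntpd rebinds successfully at 20:08:28. The sketch below reads /proc/net/if_inet6 to show which IPv6 addresses an interface actually has and whether duplicate address detection is still running; the tentative-flag value is taken from the kernel UAPI and noted as an assumption in the code.

```python
# Sketch: list an interface's IPv6 addresses from /proc/net/if_inet6 and flag
# those still tentative (DAD in progress), which is why the early bind() to
# fe80::... above fails and the later one succeeds.
IFA_F_TENTATIVE = 0x40  # kernel flag for tentative addresses (assumed constant)

def ipv6_addrs(ifname: str):
    with open("/proc/net/if_inet6") as f:
        for line in f:
            addr, _ifindex, _plen, scope, flags, name = line.split()
            if name != ifname:
                continue
            pretty = ":".join(addr[i:i + 4] for i in range(0, 32, 4))
            yield pretty, int(scope, 16), bool(int(flags, 16) & IFA_F_TENTATIVE)

if __name__ == "__main__":
    for addr, scope, tentative in ipv6_addrs("eth0"):
        state = "tentative (DAD in progress)" if tentative else "usable"
        print(f"{addr} scope=0x{scope:02x} {state}")
```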
Jan 13 20:08:25.552186 systemd-networkd[1852]: eth0: Gained IPv6LL
Jan 13 20:08:25.555194 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:08:25.559684 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:08:25.573404 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 13 20:08:25.586940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:08:25.595420 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:08:25.680261 amazon-ssm-agent[2120]: Initializing new seelog logger
Jan 13 20:08:25.682008 amazon-ssm-agent[2120]: New Seelog Logger Creation Complete
Jan 13 20:08:25.682008 amazon-ssm-agent[2120]: 2025/01/13 20:08:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:08:25.682008 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:08:25.682008 amazon-ssm-agent[2120]: 2025/01/13 20:08:25 processing appconfig overrides
Jan 13 20:08:25.682770 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:08:25.685047 amazon-ssm-agent[2120]: 2025/01/13 20:08:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:08:25.686006 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO Proxy environment variables:
Jan 13 20:08:25.686404 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:08:25.686676 amazon-ssm-agent[2120]: 2025/01/13 20:08:25 processing appconfig overrides
Jan 13 20:08:25.687239 amazon-ssm-agent[2120]: 2025/01/13 20:08:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:08:25.687342 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:08:25.687565 amazon-ssm-agent[2120]: 2025/01/13 20:08:25 processing appconfig overrides
Jan 13 20:08:25.691028 amazon-ssm-agent[2120]: 2025/01/13 20:08:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:08:25.691028 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:08:25.692025 amazon-ssm-agent[2120]: 2025/01/13 20:08:25 processing appconfig overrides
Jan 13 20:08:25.786235 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO https_proxy:
Jan 13 20:08:25.884590 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO http_proxy:
Jan 13 20:08:25.982674 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO no_proxy:
Jan 13 20:08:26.081061 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO Checking if agent identity type OnPrem can be assumed
Jan 13 20:08:26.179479 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO Checking if agent identity type EC2 can be assumed
Jan 13 20:08:26.278800 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO Agent will take identity from EC2
Jan 13 20:08:26.379161 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 20:08:26.479069 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 20:08:26.500537 sshd_keygen[1968]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:08:26.549021 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:08:26.561416 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:08:26.568473 systemd[1]: Started sshd@0-172.31.18.148:22-139.178.68.195:46746.service - OpenSSH per-connection server daemon (139.178.68.195:46746).
Jan 13 20:08:26.578575 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 20:08:26.606648 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:08:26.607267 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:08:26.620412 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:08:26.644032 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:08:26.657493 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:08:26.664655 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:08:26.668687 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:08:26.678103 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 13 20:08:26.778483 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jan 13 20:08:26.878953 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO [amazon-ssm-agent] Starting Core Agent
Jan 13 20:08:26.945956 sshd[2147]: Accepted publickey for core from 139.178.68.195 port 46746 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:08:26.949481 sshd-session[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
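The `SHA256:dyUqkUNC7j9I+...` string sshd prints when accepting the key is the unpadded base64 of the SHA-256 digest of the raw public-key blob. The sketch below recomputes that fingerprint from any authorized_keys- or .pub-style line; the command-line usage is just an example.

```python
# Recompute an OpenSSH-style SHA256 fingerprint (as printed by sshd above)
# from a public key line: base64-encode sha256(raw key blob), strip padding.
import base64
import hashlib
import sys

def ssh_fingerprint(pubkey_line: str) -> str:
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob_b64 = pubkey_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # Usage example: python3 fingerprint.py /etc/ssh/ssh_host_rsa_key.pub
    with open(sys.argv[1]) as f:
        print(ssh_fingerprint(f.readline()))
```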
Jan 13 20:08:26.973780 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:08:26.981957 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 13 20:08:26.985521 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:08:26.995916 systemd-logind[1926]: New session 1 of user core.
Jan 13 20:08:27.030090 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:08:27.047195 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:08:27.066050 (systemd)[2158]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:08:27.082320 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO [Registrar] Starting registrar module
Jan 13 20:08:27.182594 amazon-ssm-agent[2120]: 2025-01-13 20:08:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 13 20:08:27.285042 amazon-ssm-agent[2120]: 2025-01-13 20:08:27 INFO [EC2Identity] EC2 registration was successful.
Jan 13 20:08:27.315168 amazon-ssm-agent[2120]: 2025-01-13 20:08:27 INFO [CredentialRefresher] credentialRefresher has started
Jan 13 20:08:27.315168 amazon-ssm-agent[2120]: 2025-01-13 20:08:27 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 13 20:08:27.315168 amazon-ssm-agent[2120]: 2025-01-13 20:08:27 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 13 20:08:27.327342 systemd[2158]: Queued start job for default target default.target.
Jan 13 20:08:27.334382 systemd[2158]: Created slice app.slice - User Application Slice.
Jan 13 20:08:27.334467 systemd[2158]: Reached target paths.target - Paths.
Jan 13 20:08:27.334501 systemd[2158]: Reached target timers.target - Timers.
Jan 13 20:08:27.337274 systemd[2158]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:08:27.382161 systemd[2158]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:08:27.382462 systemd[2158]: Reached target sockets.target - Sockets.
Jan 13 20:08:27.382514 systemd[2158]: Reached target basic.target - Basic System.
Jan 13 20:08:27.382605 systemd[2158]: Reached target default.target - Main User Target.
Jan 13 20:08:27.382670 systemd[2158]: Startup finished in 302ms.
Jan 13 20:08:27.382843 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:08:27.385009 amazon-ssm-agent[2120]: 2025-01-13 20:08:27 INFO [CredentialRefresher] Next credential rotation will be in 31.924992027000002 minutes
Jan 13 20:08:27.402317 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:08:27.562538 systemd[1]: Started sshd@1-172.31.18.148:22-139.178.68.195:58234.service - OpenSSH per-connection server daemon (139.178.68.195:58234).
Jan 13 20:08:27.763184 sshd[2169]: Accepted publickey for core from 139.178.68.195 port 58234 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:08:27.765770 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:08:27.773116 systemd-logind[1926]: New session 2 of user core.
Jan 13 20:08:27.783276 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:08:27.914863 sshd[2171]: Connection closed by 139.178.68.195 port 58234
Jan 13 20:08:27.913305 sshd-session[2169]: pam_unix(sshd:session): session closed for user core
Jan 13 20:08:27.925751 systemd[1]: sshd@1-172.31.18.148:22-139.178.68.195:58234.service: Deactivated successfully.
Jan 13 20:08:27.932025 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:08:27.937403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:08:27.942128 systemd-logind[1926]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:08:27.950588 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:08:27.955353 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:08:27.965546 systemd[1]: Started sshd@2-172.31.18.148:22-139.178.68.195:58238.service - OpenSSH per-connection server daemon (139.178.68.195:58238).
Jan 13 20:08:27.968463 systemd[1]: Startup finished in 1.085s (kernel) + 10.034s (initrd) + 9.184s (userspace) = 20.303s.
Jan 13 20:08:27.973231 systemd-logind[1926]: Removed session 2.
Jan 13 20:08:28.177093 sshd[2182]: Accepted publickey for core from 139.178.68.195 port 58238 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:08:28.180178 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:08:28.189915 systemd-logind[1926]: New session 3 of user core.
Jan 13 20:08:28.199308 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:08:28.252259 ntpd[1921]: Listen normally on 7 eth0 [fe80::435:54ff:fe75:9f77%2]:123
Jan 13 20:08:28.252729 ntpd[1921]: 13 Jan 20:08:28 ntpd[1921]: Listen normally on 7 eth0 [fe80::435:54ff:fe75:9f77%2]:123
Jan 13 20:08:28.327298 sshd[2188]: Connection closed by 139.178.68.195 port 58238
Jan 13 20:08:28.326675 sshd-session[2182]: pam_unix(sshd:session): session closed for user core
Jan 13 20:08:28.334859 systemd[1]: sshd@2-172.31.18.148:22-139.178.68.195:58238.service: Deactivated successfully.
Jan 13 20:08:28.340272 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:08:28.347337 systemd-logind[1926]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:08:28.347913 amazon-ssm-agent[2120]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 13 20:08:28.351515 systemd-logind[1926]: Removed session 3.
Jan 13 20:08:28.449375 amazon-ssm-agent[2120]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2198) started
Jan 13 20:08:28.550080 amazon-ssm-agent[2120]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 13 20:08:29.240498 kubelet[2178]: E0113 20:08:29.240374    2178 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:08:29.245007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:08:29.245353 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:08:29.246178 systemd[1]: kubelet.service: Consumed 1.317s CPU time.
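kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file only appears after `kubeadm init` or `kubeadm join`, so systemd keeps restarting the unit (the scheduled restart shows up again at 20:08:39). The loop below is only an illustrative wait an operator might script while the node is being joined; the path comes from the error message, the timeout and poll interval are arbitrary.

```python
# Illustrative wait loop for the condition kubelet complains about above:
# /var/lib/kubelet/config.yaml is written by kubeadm init/join, and until
# then kubelet exits and systemd restarts it on a timer.
import os
import time

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def wait_for_kubelet_config(timeout_s: float = 300.0, poll_s: float = 5.0) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(KUBELET_CONFIG):
            return True
        time.sleep(poll_s)
    return False

if __name__ == "__main__":
    print("config present" if wait_for_kubelet_config(timeout_s=30)
          else f"{KUBELET_CONFIG} still missing")
```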
Jan 13 20:08:31.014706 systemd-resolved[1855]: Clock change detected. Flushing caches.
Jan 13 20:08:38.132824 systemd[1]: Started sshd@3-172.31.18.148:22-139.178.68.195:41494.service - OpenSSH per-connection server daemon (139.178.68.195:41494).
Jan 13 20:08:38.305396 sshd[2212]: Accepted publickey for core from 139.178.68.195 port 41494 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:08:38.307842 sshd-session[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:08:38.316007 systemd-logind[1926]: New session 4 of user core.
Jan 13 20:08:38.325632 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:08:38.447533 sshd[2214]: Connection closed by 139.178.68.195 port 41494
Jan 13 20:08:38.448387 sshd-session[2212]: pam_unix(sshd:session): session closed for user core
Jan 13 20:08:38.454913 systemd[1]: sshd@3-172.31.18.148:22-139.178.68.195:41494.service: Deactivated successfully.
Jan 13 20:08:38.459336 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:08:38.462144 systemd-logind[1926]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:08:38.463884 systemd-logind[1926]: Removed session 4.
Jan 13 20:08:38.489788 systemd[1]: Started sshd@4-172.31.18.148:22-139.178.68.195:41504.service - OpenSSH per-connection server daemon (139.178.68.195:41504).
Jan 13 20:08:38.667375 sshd[2219]: Accepted publickey for core from 139.178.68.195 port 41504 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:08:38.669811 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:08:38.678689 systemd-logind[1926]: New session 5 of user core.
Jan 13 20:08:38.685592 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:08:38.802938 sshd[2221]: Connection closed by 139.178.68.195 port 41504
Jan 13 20:08:38.803674 sshd-session[2219]: pam_unix(sshd:session): session closed for user core
Jan 13 20:08:38.809917 systemd[1]: sshd@4-172.31.18.148:22-139.178.68.195:41504.service: Deactivated successfully.
Jan 13 20:08:38.813651 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:08:38.815080 systemd-logind[1926]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:08:38.817305 systemd-logind[1926]: Removed session 5.
Jan 13 20:08:38.845876 systemd[1]: Started sshd@5-172.31.18.148:22-139.178.68.195:41506.service - OpenSSH per-connection server daemon (139.178.68.195:41506).
Jan 13 20:08:39.033167 sshd[2226]: Accepted publickey for core from 139.178.68.195 port 41506 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:08:39.035507 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:08:39.037153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:08:39.045724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:08:39.054119 systemd-logind[1926]: New session 6 of user core.
Jan 13 20:08:39.075656 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:08:39.205331 sshd[2231]: Connection closed by 139.178.68.195 port 41506
Jan 13 20:08:39.206181 sshd-session[2226]: pam_unix(sshd:session): session closed for user core
Jan 13 20:08:39.213638 systemd[1]: sshd@5-172.31.18.148:22-139.178.68.195:41506.service: Deactivated successfully.
Jan 13 20:08:39.219606 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:08:39.224458 systemd-logind[1926]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:08:39.253980 systemd[1]: Started sshd@6-172.31.18.148:22-139.178.68.195:41522.service - OpenSSH per-connection server daemon (139.178.68.195:41522).
Jan 13 20:08:39.257121 systemd-logind[1926]: Removed session 6.
Jan 13 20:08:39.365630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:08:39.366543 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:08:39.441205 sshd[2236]: Accepted publickey for core from 139.178.68.195 port 41522 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:08:39.443922 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:08:39.455154 systemd-logind[1926]: New session 7 of user core.
Jan 13 20:08:39.462601 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:08:39.471175 kubelet[2243]: E0113 20:08:39.470118    2243 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:08:39.478180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:08:39.478555 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:08:39.579646 sudo[2252]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:08:39.580286 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:08:39.595964 sudo[2252]: pam_unix(sudo:session): session closed for user root
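The first sudo invocation above runs `setenforce 1`. The resulting mode can be read back from selinuxfs, as the sketch below does; /sys/fs/selinux is the conventional mount point and is an assumption if the system mounts selinuxfs elsewhere.

```python
# Read back the SELinux mode that the `setenforce 1` call above switches on.
# /sys/fs/selinux/enforce holds "1" for enforcing, "0" for permissive.
from pathlib import Path

ENFORCE_FILE = Path("/sys/fs/selinux/enforce")

def selinux_mode() -> str:
    if not ENFORCE_FILE.exists():
        return "disabled or selinuxfs not mounted"
    return "enforcing" if ENFORCE_FILE.read_text().strip() == "1" else "permissive"

if __name__ == "__main__":
    print("SELinux:", selinux_mode())
```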
Jan 13 20:08:39.618108 sshd[2250]: Connection closed by 139.178.68.195 port 41522
Jan 13 20:08:39.619936 sshd-session[2236]: pam_unix(sshd:session): session closed for user core
Jan 13 20:08:39.625221 systemd[1]: sshd@6-172.31.18.148:22-139.178.68.195:41522.service: Deactivated successfully.
Jan 13 20:08:39.628460 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:08:39.631402 systemd-logind[1926]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:08:39.633241 systemd-logind[1926]: Removed session 7.
Jan 13 20:08:39.652841 systemd[1]: Started sshd@7-172.31.18.148:22-139.178.68.195:41530.service - OpenSSH per-connection server daemon (139.178.68.195:41530).
Jan 13 20:08:39.847127 sshd[2257]: Accepted publickey for core from 139.178.68.195 port 41530 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:08:39.849633 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:08:39.858694 systemd-logind[1926]: New session 8 of user core.
Jan 13 20:08:39.868594 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:08:39.971877 sudo[2261]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:08:39.973018 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:08:39.979444 sudo[2261]: pam_unix(sudo:session): session closed for user root
Jan 13 20:08:39.989185 sudo[2260]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:08:39.989839 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:08:40.018162 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:08:40.063921 augenrules[2283]: No rules
Jan 13 20:08:40.066206 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:08:40.066740 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:08:40.069192 sudo[2260]: pam_unix(sudo:session): session closed for user root
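This sudo session deletes the two rules files under /etc/audit/rules.d and restarts audit-rules, after which augenrules reports "No rules": it builds the active rule set by concatenating the `*.rules` fragments in that directory, and nothing is left. The sketch below reproduces only that assembly step (filename order is assumed to match augenrules' sort); it deliberately does not write the merged result back to /etc/audit/audit.rules.

```python
# Sketch of what augenrules assembles: the .rules fragments under
# /etc/audit/rules.d concatenated in filename order. With the two files
# removed by the sudo commands above, the result is empty -> "No rules".
# Reading these files normally requires root.
from pathlib import Path

RULES_D = Path("/etc/audit/rules.d")

def merged_rules() -> str:
    fragments = sorted(RULES_D.glob("*.rules"))
    return "".join(p.read_text() for p in fragments)

if __name__ == "__main__":
    text = merged_rules()
    print(text if text.strip() else "No rules")
```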
Jan 13 20:08:40.091840 sshd[2259]: Connection closed by 139.178.68.195 port 41530
Jan 13 20:08:40.092299 sshd-session[2257]: pam_unix(sshd:session): session closed for user core
Jan 13 20:08:40.097473 systemd[1]: sshd@7-172.31.18.148:22-139.178.68.195:41530.service: Deactivated successfully.
Jan 13 20:08:40.100921 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:08:40.103456 systemd-logind[1926]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:08:40.105632 systemd-logind[1926]: Removed session 8.
Jan 13 20:08:40.129860 systemd[1]: Started sshd@8-172.31.18.148:22-139.178.68.195:41542.service - OpenSSH per-connection server daemon (139.178.68.195:41542).
Jan 13 20:08:40.310917 sshd[2291]: Accepted publickey for core from 139.178.68.195 port 41542 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:08:40.313338 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:08:40.320622 systemd-logind[1926]: New session 9 of user core.
Jan 13 20:08:40.331627 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:08:40.435398 sudo[2294]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:08:40.436079 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:08:41.581464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:08:41.593841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:08:41.637440 systemd[1]: Reloading requested from client PID 2330 ('systemctl') (unit session-9.scope)...
Jan 13 20:08:41.637472 systemd[1]: Reloading...
Jan 13 20:08:41.875833 zram_generator::config[2373]: No configuration found.
Jan 13 20:08:42.112642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:08:42.276382 systemd[1]: Reloading finished in 638 ms.
Jan 13 20:08:42.375734 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:08:42.380142 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:08:42.380554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:08:42.388521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:08:42.677668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:08:42.682556 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:08:42.757629 kubelet[2435]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:08:42.757629 kubelet[2435]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:08:42.757629 kubelet[2435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:08:42.758150 kubelet[2435]: I0113 20:08:42.757730    2435 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:08:44.228109 kubelet[2435]: I0113 20:08:44.228044    2435 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 20:08:44.228109 kubelet[2435]: I0113 20:08:44.228092    2435 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:08:44.228829 kubelet[2435]: I0113 20:08:44.228467    2435 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 20:08:44.253224 kubelet[2435]: I0113 20:08:44.253164    2435 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:08:44.270406 kubelet[2435]: I0113 20:08:44.268884    2435 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 13 20:08:44.270406 kubelet[2435]: I0113 20:08:44.269682    2435 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:08:44.270406 kubelet[2435]: I0113 20:08:44.269732    2435 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.18.148","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:08:44.270406 kubelet[2435]: I0113 20:08:44.270309    2435 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:08:44.270812 kubelet[2435]: I0113 20:08:44.270331    2435 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:08:44.271143 kubelet[2435]: I0113 20:08:44.271115    2435 state_mem.go:36] "Initialized new in-memory state store"
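The nodeConfig line above lists the hard eviction thresholds this kubelet will enforce: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, and the imagefs equivalents. The sketch below approximates the first three signals locally; kubelet derives memory.available from cgroup statistics, so MemAvailable from /proc/meminfo is only a close stand-in, and /var/lib/kubelet is taken from the KubeletRootDir in the same line (adjust if that path does not exist where you run this).

```python
# Rough local check of the hard eviction signals listed in the nodeConfig
# line above (memory.available<100Mi, nodefs.available<10%, inodesFree<5%).
import os

def meminfo_available_bytes() -> int:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) * 1024  # value is reported in kB
    raise RuntimeError("MemAvailable not found")

def nodefs_fractions(path: str = "/var/lib/kubelet") -> tuple[float, float]:
    st = os.statvfs(path)
    avail = st.f_bavail / st.f_blocks if st.f_blocks else 0.0
    inodes_free = st.f_favail / st.f_files if st.f_files else 0.0
    return avail, inodes_free

if __name__ == "__main__":
    mem = meminfo_available_bytes()
    avail, inodes = nodefs_fractions()
    print(f"memory.available ~ {mem / 2**20:.0f}Mi (threshold 100Mi)")
    print(f"nodefs.available  = {avail:.1%} (threshold 10%)")
    print(f"nodefs.inodesFree = {inodes:.1%} (threshold 5%)")
```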
Jan 13 20:08:44.272737 kubelet[2435]: I0113 20:08:44.272698    2435 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 20:08:44.272937 kubelet[2435]: I0113 20:08:44.272915    2435 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:08:44.273140 kubelet[2435]: I0113 20:08:44.273119    2435 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:08:44.273275 kubelet[2435]: I0113 20:08:44.273253    2435 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:08:44.276607 kubelet[2435]: E0113 20:08:44.276548    2435 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:44.276763 kubelet[2435]: E0113 20:08:44.276673    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:44.278286 kubelet[2435]: I0113 20:08:44.278242    2435 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:08:44.278665 kubelet[2435]: I0113 20:08:44.278628    2435 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:08:44.278734 kubelet[2435]: W0113 20:08:44.278705    2435 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:08:44.279981 kubelet[2435]: I0113 20:08:44.279924    2435 server.go:1264] "Started kubelet"
Jan 13 20:08:44.284981 kubelet[2435]: I0113 20:08:44.284863    2435 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:08:44.286179 kubelet[2435]: I0113 20:08:44.285673    2435 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:08:44.286179 kubelet[2435]: I0113 20:08:44.285723    2435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:08:44.286179 kubelet[2435]: I0113 20:08:44.285748    2435 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:08:44.287731 kubelet[2435]: I0113 20:08:44.287429    2435 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 20:08:44.298400 kubelet[2435]: E0113 20:08:44.296577    2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.18.148.181a596b6ffad30d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.18.148,UID:172.31.18.148,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.18.148,},FirstTimestamp:2025-01-13 20:08:44.279829261 +0000 UTC m=+1.591230777,LastTimestamp:2025-01-13 20:08:44.279829261 +0000 UTC m=+1.591230777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.18.148,}"
Jan 13 20:08:44.298400 kubelet[2435]: W0113 20:08:44.296857    2435 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.18.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:08:44.298400 kubelet[2435]: E0113 20:08:44.296907    2435 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:08:44.298400 kubelet[2435]: W0113 20:08:44.297105    2435 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:08:44.298400 kubelet[2435]: E0113 20:08:44.297136    2435 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:08:44.301324 kubelet[2435]: I0113 20:08:44.301241    2435 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:08:44.303097 kubelet[2435]: I0113 20:08:44.302710    2435 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 20:08:44.304315 kubelet[2435]: E0113 20:08:44.304261    2435 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:08:44.306869 kubelet[2435]: I0113 20:08:44.306807    2435 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:08:44.308496 kubelet[2435]: E0113 20:08:44.308330    2435 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.18.148\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 13 20:08:44.313487 kubelet[2435]: I0113 20:08:44.311507    2435 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:08:44.313487 kubelet[2435]: I0113 20:08:44.312914    2435 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:08:44.325384 kubelet[2435]: I0113 20:08:44.321856    2435 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:08:44.358422 kubelet[2435]: I0113 20:08:44.358385    2435 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:08:44.358694 kubelet[2435]: I0113 20:08:44.358631    2435 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:08:44.358843 kubelet[2435]: I0113 20:08:44.358825    2435 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:08:44.361752 kubelet[2435]: I0113 20:08:44.361715    2435 policy_none.go:49] "None policy: Start"
Jan 13 20:08:44.363044 kubelet[2435]: I0113 20:08:44.363000    2435 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:08:44.363224 kubelet[2435]: I0113 20:08:44.363206    2435 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:08:44.384081 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:08:44.402753 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:08:44.405723 kubelet[2435]: I0113 20:08:44.403993    2435 kubelet_node_status.go:73] "Attempting to register node" node="172.31.18.148"
Jan 13 20:08:44.412835 kubelet[2435]: I0113 20:08:44.412387    2435 kubelet_node_status.go:76] "Successfully registered node" node="172.31.18.148"
Jan 13 20:08:44.413551 kubelet[2435]: I0113 20:08:44.413338    2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:08:44.420228 kubelet[2435]: I0113 20:08:44.419604    2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:08:44.420228 kubelet[2435]: I0113 20:08:44.419694    2435 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:08:44.420228 kubelet[2435]: I0113 20:08:44.419731    2435 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 20:08:44.420228 kubelet[2435]: E0113 20:08:44.419805    2435 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:08:44.421078 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:08:44.430884 kubelet[2435]: I0113 20:08:44.430830    2435 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:08:44.431236 kubelet[2435]: I0113 20:08:44.431144    2435 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:08:44.431501 kubelet[2435]: I0113 20:08:44.431359    2435 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:08:44.433634 kubelet[2435]: E0113 20:08:44.433489    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.148\" not found"
Jan 13 20:08:44.442048 kubelet[2435]: E0113 20:08:44.441999    2435 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.148\" not found"
Jan 13 20:08:44.533878 kubelet[2435]: E0113 20:08:44.533824    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.148\" not found"
Jan 13 20:08:44.634771 kubelet[2435]: E0113 20:08:44.634699    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.148\" not found"
Jan 13 20:08:44.710299 sudo[2294]: pam_unix(sudo:session): session closed for user root
Jan 13 20:08:44.734176 sshd[2293]: Connection closed by 139.178.68.195 port 41542
Jan 13 20:08:44.735100 sshd-session[2291]: pam_unix(sshd:session): session closed for user core
Jan 13 20:08:44.735857 kubelet[2435]: E0113 20:08:44.735754    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.148\" not found"
Jan 13 20:08:44.741936 systemd[1]: sshd@8-172.31.18.148:22-139.178.68.195:41542.service: Deactivated successfully.
Jan 13 20:08:44.745608 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:08:44.747095 systemd-logind[1926]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:08:44.749601 systemd-logind[1926]: Removed session 9.
Jan 13 20:08:44.837047 kubelet[2435]: E0113 20:08:44.836863    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.148\" not found"
Jan 13 20:08:44.937696 kubelet[2435]: E0113 20:08:44.937634    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.148\" not found"
Jan 13 20:08:45.038333 kubelet[2435]: E0113 20:08:45.038282    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.148\" not found"
Jan 13 20:08:45.140114 kubelet[2435]: I0113 20:08:45.139437    2435 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 20:08:45.140244 containerd[1957]: time="2025-01-13T20:08:45.139903394Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:08:45.140817 kubelet[2435]: I0113 20:08:45.140581    2435 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
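The node has just been handed the pod CIDR 192.168.1.0/24, which kubelet pushes to containerd through the CRI runtime config update above. A quick standard-library check of that range; the two sample addresses are arbitrary, and the "usable IPs" count is the plain /24 arithmetic before any addresses the CNI plugin reserves for itself.

```python
# Membership/capacity check for the pod CIDR handed to the node above.
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.1.0/24")

print("usable pod IPs:", pod_cidr.num_addresses - 2)  # minus network/broadcast
print("192.168.1.57 in range:", ipaddress.ip_address("192.168.1.57") in pod_cidr)
print("172.31.18.148 (node IP) in range:",
      ipaddress.ip_address("172.31.18.148") in pod_cidr)
```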
Jan 13 20:08:45.232179 kubelet[2435]: I0113 20:08:45.232121    2435 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 20:08:45.232892 kubelet[2435]: W0113 20:08:45.232380    2435 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:08:45.232892 kubelet[2435]: W0113 20:08:45.232439    2435 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:08:45.232892 kubelet[2435]: W0113 20:08:45.232485    2435 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:08:45.277867 kubelet[2435]: I0113 20:08:45.277388    2435 apiserver.go:52] "Watching apiserver"
Jan 13 20:08:45.277867 kubelet[2435]: E0113 20:08:45.277817    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:45.292206 kubelet[2435]: I0113 20:08:45.292078    2435 topology_manager.go:215] "Topology Admit Handler" podUID="7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" podNamespace="kube-system" podName="cilium-wnxgd"
Jan 13 20:08:45.292420 kubelet[2435]: I0113 20:08:45.292338    2435 topology_manager.go:215] "Topology Admit Handler" podUID="aa9eb691-838d-40ab-8e3e-034c33eb432a" podNamespace="kube-system" podName="kube-proxy-st959"
Jan 13 20:08:45.305256 kubelet[2435]: I0113 20:08:45.303785    2435 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 20:08:45.312336 kubelet[2435]: I0113 20:08:45.312267    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-host-proc-sys-kernel\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312492 kubelet[2435]: I0113 20:08:45.312374    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65tw7\" (UniqueName: \"kubernetes.io/projected/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-kube-api-access-65tw7\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312492 kubelet[2435]: I0113 20:08:45.312423    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-hostproc\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312492 kubelet[2435]: I0113 20:08:45.312460    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-etc-cni-netd\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312684 kubelet[2435]: I0113 20:08:45.312498    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-config-path\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312684 kubelet[2435]: I0113 20:08:45.312533    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjgg\" (UniqueName: \"kubernetes.io/projected/aa9eb691-838d-40ab-8e3e-034c33eb432a-kube-api-access-mtjgg\") pod \"kube-proxy-st959\" (UID: \"aa9eb691-838d-40ab-8e3e-034c33eb432a\") " pod="kube-system/kube-proxy-st959"
Jan 13 20:08:45.312684 kubelet[2435]: I0113 20:08:45.312568    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-lib-modules\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312684 kubelet[2435]: I0113 20:08:45.312611    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-xtables-lock\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312684 kubelet[2435]: I0113 20:08:45.312647    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-clustermesh-secrets\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312920 kubelet[2435]: I0113 20:08:45.312684    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-hubble-tls\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312920 kubelet[2435]: I0113 20:08:45.312718    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa9eb691-838d-40ab-8e3e-034c33eb432a-kube-proxy\") pod \"kube-proxy-st959\" (UID: \"aa9eb691-838d-40ab-8e3e-034c33eb432a\") " pod="kube-system/kube-proxy-st959"
Jan 13 20:08:45.312920 kubelet[2435]: I0113 20:08:45.312753    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-run\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312920 kubelet[2435]: I0113 20:08:45.312785    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-bpf-maps\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312920 kubelet[2435]: I0113 20:08:45.312818    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-cgroup\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.312920 kubelet[2435]: I0113 20:08:45.312852    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa9eb691-838d-40ab-8e3e-034c33eb432a-lib-modules\") pod \"kube-proxy-st959\" (UID: \"aa9eb691-838d-40ab-8e3e-034c33eb432a\") " pod="kube-system/kube-proxy-st959"
Jan 13 20:08:45.313330 kubelet[2435]: I0113 20:08:45.312892    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cni-path\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.313330 kubelet[2435]: I0113 20:08:45.312926    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-host-proc-sys-net\") pod \"cilium-wnxgd\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") " pod="kube-system/cilium-wnxgd"
Jan 13 20:08:45.313330 kubelet[2435]: I0113 20:08:45.312963    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa9eb691-838d-40ab-8e3e-034c33eb432a-xtables-lock\") pod \"kube-proxy-st959\" (UID: \"aa9eb691-838d-40ab-8e3e-034c33eb432a\") " pod="kube-system/kube-proxy-st959"
Jan 13 20:08:45.317849 systemd[1]: Created slice kubepods-besteffort-podaa9eb691_838d_40ab_8e3e_034c33eb432a.slice - libcontainer container kubepods-besteffort-podaa9eb691_838d_40ab_8e3e_034c33eb432a.slice.
Jan 13 20:08:45.332050 systemd[1]: Created slice kubepods-burstable-pod7b5b5acf_91e7_4805_a6fb_2c0c86f2b4c9.slice - libcontainer container kubepods-burstable-pod7b5b5acf_91e7_4805_a6fb_2c0c86f2b4c9.slice.
Jan 13 20:08:45.629516 containerd[1957]: time="2025-01-13T20:08:45.628866988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-st959,Uid:aa9eb691-838d-40ab-8e3e-034c33eb432a,Namespace:kube-system,Attempt:0,}"
Jan 13 20:08:45.645541 containerd[1957]: time="2025-01-13T20:08:45.645443212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wnxgd,Uid:7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9,Namespace:kube-system,Attempt:0,}"
Jan 13 20:08:46.178816 containerd[1957]: time="2025-01-13T20:08:46.178737159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:08:46.181082 containerd[1957]: time="2025-01-13T20:08:46.181007079Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:08:46.182618 containerd[1957]: time="2025-01-13T20:08:46.182548083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 13 20:08:46.184184 containerd[1957]: time="2025-01-13T20:08:46.183771651Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:08:46.184184 containerd[1957]: time="2025-01-13T20:08:46.184115631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:08:46.187705 containerd[1957]: time="2025-01-13T20:08:46.187615035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:08:46.192880 containerd[1957]: time="2025-01-13T20:08:46.192265923Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.277747ms"
Jan 13 20:08:46.195482 containerd[1957]: time="2025-01-13T20:08:46.195077139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.530115ms"
Jan 13 20:08:46.367004 kubelet[2435]: E0113 20:08:46.366948    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:46.427507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49656007.mount: Deactivated successfully.
Jan 13 20:08:46.452554 containerd[1957]: time="2025-01-13T20:08:46.450991432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:08:46.453251 containerd[1957]: time="2025-01-13T20:08:46.452412832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:08:46.453251 containerd[1957]: time="2025-01-13T20:08:46.452839456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:08:46.453592 containerd[1957]: time="2025-01-13T20:08:46.453335824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:08:46.454167 containerd[1957]: time="2025-01-13T20:08:46.453453364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:08:46.454167 containerd[1957]: time="2025-01-13T20:08:46.453527488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:08:46.454167 containerd[1957]: time="2025-01-13T20:08:46.453677812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:08:46.455000 containerd[1957]: time="2025-01-13T20:08:46.454489132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:08:46.594790 systemd[1]: Started cri-containerd-d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74.scope - libcontainer container d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74.
Jan 13 20:08:46.605162 systemd[1]: Started cri-containerd-aa3cc619fe3f9dfb92a8bfa630c2d795e6dcbc8a90708f451beb5b841bee449b.scope - libcontainer container aa3cc619fe3f9dfb92a8bfa630c2d795e6dcbc8a90708f451beb5b841bee449b.
Jan 13 20:08:46.665302 containerd[1957]: time="2025-01-13T20:08:46.665102801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wnxgd,Uid:7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\""
Jan 13 20:08:46.670043 containerd[1957]: time="2025-01-13T20:08:46.669671093Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 20:08:46.679524 containerd[1957]: time="2025-01-13T20:08:46.679462169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-st959,Uid:aa9eb691-838d-40ab-8e3e-034c33eb432a,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa3cc619fe3f9dfb92a8bfa630c2d795e6dcbc8a90708f451beb5b841bee449b\""
Jan 13 20:08:47.367455 kubelet[2435]: E0113 20:08:47.367391    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:48.367920 kubelet[2435]: E0113 20:08:48.367865    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:49.368878 kubelet[2435]: E0113 20:08:49.368811    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:50.369772 kubelet[2435]: E0113 20:08:50.369728    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:51.370654 kubelet[2435]: E0113 20:08:51.370598    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:52.095089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184404979.mount: Deactivated successfully.
Jan 13 20:08:52.371239 kubelet[2435]: E0113 20:08:52.371101    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:53.371823 kubelet[2435]: E0113 20:08:53.371751    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:54.372717 kubelet[2435]: E0113 20:08:54.372658    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:54.887576 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 13 20:08:55.373102 kubelet[2435]: E0113 20:08:55.373031    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:56.373728 kubelet[2435]: E0113 20:08:56.373671    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:57.374339 kubelet[2435]: E0113 20:08:57.374279    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:58.375199 kubelet[2435]: E0113 20:08:58.375137    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:08:59.376389 kubelet[2435]: E0113 20:08:59.376316    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:00.377203 kubelet[2435]: E0113 20:09:00.377138    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:01.378161 kubelet[2435]: E0113 20:09:01.378054    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:02.378603 kubelet[2435]: E0113 20:09:02.378540    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:03.378881 kubelet[2435]: E0113 20:09:03.378818    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:04.273620 kubelet[2435]: E0113 20:09:04.273537    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:04.380007 kubelet[2435]: E0113 20:09:04.379947    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:05.276092 containerd[1957]: time="2025-01-13T20:09:05.276030106Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:05.277930 containerd[1957]: time="2025-01-13T20:09:05.277864306Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650942"
Jan 13 20:09:05.278634 containerd[1957]: time="2025-01-13T20:09:05.278132818Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:05.283204 containerd[1957]: time="2025-01-13T20:09:05.283143190Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 18.613409781s"
Jan 13 20:09:05.283400 containerd[1957]: time="2025-01-13T20:09:05.283205782Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 13 20:09:05.285416 containerd[1957]: time="2025-01-13T20:09:05.285370342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Jan 13 20:09:05.288205 containerd[1957]: time="2025-01-13T20:09:05.288129850Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:09:05.310512 containerd[1957]: time="2025-01-13T20:09:05.310458274Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\""
Jan 13 20:09:05.311869 containerd[1957]: time="2025-01-13T20:09:05.311807614Z" level=info msg="StartContainer for \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\""
Jan 13 20:09:05.370678 systemd[1]: Started cri-containerd-33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d.scope - libcontainer container 33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d.
Jan 13 20:09:05.380282 kubelet[2435]: E0113 20:09:05.380235    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:05.418374 containerd[1957]: time="2025-01-13T20:09:05.417730018Z" level=info msg="StartContainer for \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\" returns successfully"
Jan 13 20:09:05.442154 systemd[1]: cri-containerd-33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d.scope: Deactivated successfully.
Jan 13 20:09:05.947311 containerd[1957]: time="2025-01-13T20:09:05.947216053Z" level=info msg="shim disconnected" id=33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d namespace=k8s.io
Jan 13 20:09:05.947311 containerd[1957]: time="2025-01-13T20:09:05.947300017Z" level=warning msg="cleaning up after shim disconnected" id=33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d namespace=k8s.io
Jan 13 20:09:05.947646 containerd[1957]: time="2025-01-13T20:09:05.947322421Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:06.302277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d-rootfs.mount: Deactivated successfully.
Jan 13 20:09:06.380774 kubelet[2435]: E0113 20:09:06.380721    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:06.504493 containerd[1957]: time="2025-01-13T20:09:06.503687928Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:09:06.532805 containerd[1957]: time="2025-01-13T20:09:06.532106196Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\""
Jan 13 20:09:06.535401 containerd[1957]: time="2025-01-13T20:09:06.534902976Z" level=info msg="StartContainer for \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\""
Jan 13 20:09:06.610910 systemd[1]: Started cri-containerd-99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478.scope - libcontainer container 99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478.
Jan 13 20:09:06.677802 containerd[1957]: time="2025-01-13T20:09:06.677716873Z" level=info msg="StartContainer for \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\" returns successfully"
Jan 13 20:09:06.695200 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:09:06.695772 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:09:06.695885 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:09:06.709797 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:09:06.710225 systemd[1]: cri-containerd-99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478.scope: Deactivated successfully.
Jan 13 20:09:06.765746 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:09:06.849630 containerd[1957]: time="2025-01-13T20:09:06.849454921Z" level=info msg="shim disconnected" id=99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478 namespace=k8s.io
Jan 13 20:09:06.849630 containerd[1957]: time="2025-01-13T20:09:06.849532021Z" level=warning msg="cleaning up after shim disconnected" id=99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478 namespace=k8s.io
Jan 13 20:09:06.849630 containerd[1957]: time="2025-01-13T20:09:06.849551437Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:06.877444 containerd[1957]: time="2025-01-13T20:09:06.877245590Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:09:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:09:07.299836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478-rootfs.mount: Deactivated successfully.
Jan 13 20:09:07.300044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744345922.mount: Deactivated successfully.
Jan 13 20:09:07.381241 kubelet[2435]: E0113 20:09:07.380922    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:07.513211 containerd[1957]: time="2025-01-13T20:09:07.512878225Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:09:07.547278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2925206243.mount: Deactivated successfully.
Jan 13 20:09:07.559927 containerd[1957]: time="2025-01-13T20:09:07.559306189Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\""
Jan 13 20:09:07.561999 containerd[1957]: time="2025-01-13T20:09:07.561933805Z" level=info msg="StartContainer for \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\""
Jan 13 20:09:07.588296 containerd[1957]: time="2025-01-13T20:09:07.588222649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:07.596876 containerd[1957]: time="2025-01-13T20:09:07.596597617Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011"
Jan 13 20:09:07.597972 containerd[1957]: time="2025-01-13T20:09:07.597906013Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:07.603653 containerd[1957]: time="2025-01-13T20:09:07.603576205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:07.606029 containerd[1957]: time="2025-01-13T20:09:07.605774641Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 2.320039955s"
Jan 13 20:09:07.606029 containerd[1957]: time="2025-01-13T20:09:07.605952253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\""
Jan 13 20:09:07.612623 containerd[1957]: time="2025-01-13T20:09:07.612282049Z" level=info msg="CreateContainer within sandbox \"aa3cc619fe3f9dfb92a8bfa630c2d795e6dcbc8a90708f451beb5b841bee449b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:09:07.635879 systemd[1]: Started cri-containerd-5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796.scope - libcontainer container 5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796.
Jan 13 20:09:07.655154 containerd[1957]: time="2025-01-13T20:09:07.655073689Z" level=info msg="CreateContainer within sandbox \"aa3cc619fe3f9dfb92a8bfa630c2d795e6dcbc8a90708f451beb5b841bee449b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ee84c9ca3469646e0201d150ad4a5b9b753661d05b9dfbcec59894194f08736b\""
Jan 13 20:09:07.656532 containerd[1957]: time="2025-01-13T20:09:07.656465786Z" level=info msg="StartContainer for \"ee84c9ca3469646e0201d150ad4a5b9b753661d05b9dfbcec59894194f08736b\""
Jan 13 20:09:07.728310 systemd[1]: Started cri-containerd-ee84c9ca3469646e0201d150ad4a5b9b753661d05b9dfbcec59894194f08736b.scope - libcontainer container ee84c9ca3469646e0201d150ad4a5b9b753661d05b9dfbcec59894194f08736b.
Jan 13 20:09:07.730675 containerd[1957]: time="2025-01-13T20:09:07.730609358Z" level=info msg="StartContainer for \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\" returns successfully"
Jan 13 20:09:07.731696 systemd[1]: cri-containerd-5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796.scope: Deactivated successfully.
Jan 13 20:09:07.826306 containerd[1957]: time="2025-01-13T20:09:07.826153010Z" level=info msg="StartContainer for \"ee84c9ca3469646e0201d150ad4a5b9b753661d05b9dfbcec59894194f08736b\" returns successfully"
Jan 13 20:09:07.891611 containerd[1957]: time="2025-01-13T20:09:07.891227199Z" level=info msg="shim disconnected" id=5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796 namespace=k8s.io
Jan 13 20:09:07.891965 containerd[1957]: time="2025-01-13T20:09:07.891908499Z" level=warning msg="cleaning up after shim disconnected" id=5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796 namespace=k8s.io
Jan 13 20:09:07.892090 containerd[1957]: time="2025-01-13T20:09:07.892063563Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:08.302159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796-rootfs.mount: Deactivated successfully.
Jan 13 20:09:08.381563 kubelet[2435]: E0113 20:09:08.381459    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:08.519018 containerd[1957]: time="2025-01-13T20:09:08.518965298Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:09:08.550543 containerd[1957]: time="2025-01-13T20:09:08.549735386Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\""
Jan 13 20:09:08.551805 containerd[1957]: time="2025-01-13T20:09:08.551561198Z" level=info msg="StartContainer for \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\""
Jan 13 20:09:08.566261 kubelet[2435]: I0113 20:09:08.565492    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-st959" podStartSLOduration=3.638733738 podStartE2EDuration="24.565472426s" podCreationTimestamp="2025-01-13 20:08:44 +0000 UTC" firstStartedPulling="2025-01-13 20:08:46.681722069 +0000 UTC m=+3.993123597" lastFinishedPulling="2025-01-13 20:09:07.608460781 +0000 UTC m=+24.919862285" observedRunningTime="2025-01-13 20:09:08.565048466 +0000 UTC m=+25.876450066" watchObservedRunningTime="2025-01-13 20:09:08.565472426 +0000 UTC m=+25.876873930"
Jan 13 20:09:08.607667 systemd[1]: Started cri-containerd-b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b.scope - libcontainer container b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b.
Jan 13 20:09:08.655950 systemd[1]: cri-containerd-b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b.scope: Deactivated successfully.
Jan 13 20:09:08.660200 containerd[1957]: time="2025-01-13T20:09:08.660025586Z" level=info msg="StartContainer for \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\" returns successfully"
Jan 13 20:09:08.694601 containerd[1957]: time="2025-01-13T20:09:08.694446951Z" level=info msg="shim disconnected" id=b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b namespace=k8s.io
Jan 13 20:09:08.694601 containerd[1957]: time="2025-01-13T20:09:08.694586631Z" level=warning msg="cleaning up after shim disconnected" id=b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b namespace=k8s.io
Jan 13 20:09:08.694986 containerd[1957]: time="2025-01-13T20:09:08.694609083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:09.302135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b-rootfs.mount: Deactivated successfully.
Jan 13 20:09:09.382704 kubelet[2435]: E0113 20:09:09.382626    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:09.429402 update_engine[1930]: I20250113 20:09:09.428946  1930 update_attempter.cc:509] Updating boot flags...
Jan 13 20:09:09.507451 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3018)
Jan 13 20:09:09.536427 containerd[1957]: time="2025-01-13T20:09:09.535945587Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:09:09.573913 containerd[1957]: time="2025-01-13T20:09:09.573559527Z" level=info msg="CreateContainer within sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\""
Jan 13 20:09:09.575457 containerd[1957]: time="2025-01-13T20:09:09.575017767Z" level=info msg="StartContainer for \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\""
Jan 13 20:09:09.653719 systemd[1]: Started cri-containerd-999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0.scope - libcontainer container 999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0.
Jan 13 20:09:09.819976 containerd[1957]: time="2025-01-13T20:09:09.818900020Z" level=info msg="StartContainer for \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\" returns successfully"
Jan 13 20:09:09.915489 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3020)
Jan 13 20:09:10.217391 kubelet[2435]: I0113 20:09:10.213971    2435 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:09:10.383315 kubelet[2435]: E0113 20:09:10.383267    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:10.565790 kubelet[2435]: I0113 20:09:10.565560    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wnxgd" podStartSLOduration=7.9497103110000005 podStartE2EDuration="26.565535344s" podCreationTimestamp="2025-01-13 20:08:44 +0000 UTC" firstStartedPulling="2025-01-13 20:08:46.668579297 +0000 UTC m=+3.979980813" lastFinishedPulling="2025-01-13 20:09:05.284404342 +0000 UTC m=+22.595805846" observedRunningTime="2025-01-13 20:09:10.565110892 +0000 UTC m=+27.876512420" watchObservedRunningTime="2025-01-13 20:09:10.565535344 +0000 UTC m=+27.876936860"
Jan 13 20:09:10.938528 kernel: Initializing XFRM netlink socket
Jan 13 20:09:11.386381 kubelet[2435]: E0113 20:09:11.385454    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:12.386487 kubelet[2435]: E0113 20:09:12.386416    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:12.766478 systemd-networkd[1852]: cilium_host: Link UP
Jan 13 20:09:12.767153 systemd-networkd[1852]: cilium_net: Link UP
Jan 13 20:09:12.767722 systemd-networkd[1852]: cilium_net: Gained carrier
Jan 13 20:09:12.768053 systemd-networkd[1852]: cilium_host: Gained carrier
Jan 13 20:09:12.770250 (udev-worker)[2889]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:09:12.770843 (udev-worker)[3020]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:09:12.946736 systemd-networkd[1852]: cilium_vxlan: Link UP
Jan 13 20:09:12.946757 systemd-networkd[1852]: cilium_vxlan: Gained carrier
Jan 13 20:09:13.178565 systemd-networkd[1852]: cilium_net: Gained IPv6LL
Jan 13 20:09:13.348043 kubelet[2435]: I0113 20:09:13.347859    2435 topology_manager.go:215] "Topology Admit Handler" podUID="5b953ed5-42ed-4a3b-a183-e4ff43a01366" podNamespace="default" podName="nginx-deployment-85f456d6dd-9ztpb"
Jan 13 20:09:13.362465 systemd[1]: Created slice kubepods-besteffort-pod5b953ed5_42ed_4a3b_a183_e4ff43a01366.slice - libcontainer container kubepods-besteffort-pod5b953ed5_42ed_4a3b_a183_e4ff43a01366.slice.
Jan 13 20:09:13.387003 kubelet[2435]: E0113 20:09:13.386867    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:13.430747 kernel: NET: Registered PF_ALG protocol family
Jan 13 20:09:13.542905 kubelet[2435]: I0113 20:09:13.542800    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb45q\" (UniqueName: \"kubernetes.io/projected/5b953ed5-42ed-4a3b-a183-e4ff43a01366-kube-api-access-tb45q\") pod \"nginx-deployment-85f456d6dd-9ztpb\" (UID: \"5b953ed5-42ed-4a3b-a183-e4ff43a01366\") " pod="default/nginx-deployment-85f456d6dd-9ztpb"
Jan 13 20:09:13.668848 containerd[1957]: time="2025-01-13T20:09:13.668745031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-9ztpb,Uid:5b953ed5-42ed-4a3b-a183-e4ff43a01366,Namespace:default,Attempt:0,}"
Jan 13 20:09:13.702013 systemd-networkd[1852]: cilium_host: Gained IPv6LL
Jan 13 20:09:14.388084 kubelet[2435]: E0113 20:09:14.388013    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:14.721167 (udev-worker)[3018]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:09:14.724444 systemd-networkd[1852]: lxc_health: Link UP
Jan 13 20:09:14.733165 systemd-networkd[1852]: lxc_health: Gained carrier
Jan 13 20:09:14.978529 systemd-networkd[1852]: cilium_vxlan: Gained IPv6LL
Jan 13 20:09:15.250665 systemd-networkd[1852]: lxc234281b74772: Link UP
Jan 13 20:09:15.258401 kernel: eth0: renamed from tmpda20b
Jan 13 20:09:15.264489 systemd-networkd[1852]: lxc234281b74772: Gained carrier
Jan 13 20:09:15.388706 kubelet[2435]: E0113 20:09:15.388640    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:16.322667 systemd-networkd[1852]: lxc_health: Gained IPv6LL
Jan 13 20:09:16.389229 kubelet[2435]: E0113 20:09:16.389123    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:16.771192 systemd-networkd[1852]: lxc234281b74772: Gained IPv6LL
Jan 13 20:09:17.389595 kubelet[2435]: E0113 20:09:17.389530    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:18.390129 kubelet[2435]: E0113 20:09:18.390056    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:19.014688 ntpd[1921]: Listen normally on 8 cilium_host 192.168.1.171:123
Jan 13 20:09:19.014831 ntpd[1921]: Listen normally on 9 cilium_net [fe80::9051:a0ff:fea9:1d0f%3]:123
Jan 13 20:09:19.014914 ntpd[1921]: Listen normally on 10 cilium_host [fe80::3488:a9ff:fec1:f86b%4]:123
Jan 13 20:09:19.014984 ntpd[1921]: Listen normally on 11 cilium_vxlan [fe80::4478:b6ff:feb2:fc49%5]:123
Jan 13 20:09:19.015052 ntpd[1921]: Listen normally on 12 lxc_health [fe80::2878:deff:fed5:4cfc%7]:123
Jan 13 20:09:19.015126 ntpd[1921]: Listen normally on 13 lxc234281b74772 [fe80::7cfa:15ff:fed4:afa4%9]:123
Jan 13 20:09:19.391270 kubelet[2435]: E0113 20:09:19.390587    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:20.391074 kubelet[2435]: E0113 20:09:20.390998    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:21.391907 kubelet[2435]: E0113 20:09:21.391833    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:22.392079 kubelet[2435]: E0113 20:09:22.392003    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:23.254274 containerd[1957]: time="2025-01-13T20:09:23.254111043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:09:23.254274 containerd[1957]: time="2025-01-13T20:09:23.254202615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:09:23.255119 containerd[1957]: time="2025-01-13T20:09:23.254244135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:23.255119 containerd[1957]: time="2025-01-13T20:09:23.254957763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:23.291680 systemd[1]: Started cri-containerd-da20bc4b2009a5f4c5488082ef44d9d9cbaa20b937503658ed8bd081b4055ada.scope - libcontainer container da20bc4b2009a5f4c5488082ef44d9d9cbaa20b937503658ed8bd081b4055ada.
Jan 13 20:09:23.354861 containerd[1957]: time="2025-01-13T20:09:23.354700371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-9ztpb,Uid:5b953ed5-42ed-4a3b-a183-e4ff43a01366,Namespace:default,Attempt:0,} returns sandbox id \"da20bc4b2009a5f4c5488082ef44d9d9cbaa20b937503658ed8bd081b4055ada\""
Jan 13 20:09:23.358724 containerd[1957]: time="2025-01-13T20:09:23.358407855Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:09:23.392650 kubelet[2435]: E0113 20:09:23.392573    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:24.274250 kubelet[2435]: E0113 20:09:24.274198    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:24.393213 kubelet[2435]: E0113 20:09:24.393162    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:25.393730 kubelet[2435]: E0113 20:09:25.393673    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:26.394389 kubelet[2435]: E0113 20:09:26.394251    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:26.673122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1497402705.mount: Deactivated successfully.
Jan 13 20:09:27.394525 kubelet[2435]: E0113 20:09:27.394467    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:28.001726 containerd[1957]: time="2025-01-13T20:09:28.001591399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:28.006402 containerd[1957]: time="2025-01-13T20:09:28.005177119Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045"
Jan 13 20:09:28.006402 containerd[1957]: time="2025-01-13T20:09:28.005882719Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:28.016526 containerd[1957]: time="2025-01-13T20:09:28.016465447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:28.018505 containerd[1957]: time="2025-01-13T20:09:28.018442639Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 4.659978444s"
Jan 13 20:09:28.018649 containerd[1957]: time="2025-01-13T20:09:28.018503467Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\""
Jan 13 20:09:28.022873 containerd[1957]: time="2025-01-13T20:09:28.022816255Z" level=info msg="CreateContainer within sandbox \"da20bc4b2009a5f4c5488082ef44d9d9cbaa20b937503658ed8bd081b4055ada\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 20:09:28.042454 containerd[1957]: time="2025-01-13T20:09:28.042214579Z" level=info msg="CreateContainer within sandbox \"da20bc4b2009a5f4c5488082ef44d9d9cbaa20b937503658ed8bd081b4055ada\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"eac9f36060a8677bab3dc04429414a506a225d35ffca4a3b1df1ba19ae0278c9\""
Jan 13 20:09:28.043338 containerd[1957]: time="2025-01-13T20:09:28.043214095Z" level=info msg="StartContainer for \"eac9f36060a8677bab3dc04429414a506a225d35ffca4a3b1df1ba19ae0278c9\""
Jan 13 20:09:28.089693 systemd[1]: run-containerd-runc-k8s.io-eac9f36060a8677bab3dc04429414a506a225d35ffca4a3b1df1ba19ae0278c9-runc.6NoxQj.mount: Deactivated successfully.
Jan 13 20:09:28.101673 systemd[1]: Started cri-containerd-eac9f36060a8677bab3dc04429414a506a225d35ffca4a3b1df1ba19ae0278c9.scope - libcontainer container eac9f36060a8677bab3dc04429414a506a225d35ffca4a3b1df1ba19ae0278c9.
Jan 13 20:09:28.147572 containerd[1957]: time="2025-01-13T20:09:28.147147391Z" level=info msg="StartContainer for \"eac9f36060a8677bab3dc04429414a506a225d35ffca4a3b1df1ba19ae0278c9\" returns successfully"
Jan 13 20:09:28.395117 kubelet[2435]: E0113 20:09:28.394936    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:28.602399 kubelet[2435]: I0113 20:09:28.602290    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-9ztpb" podStartSLOduration=10.938895294 podStartE2EDuration="15.602271886s" podCreationTimestamp="2025-01-13 20:09:13 +0000 UTC" firstStartedPulling="2025-01-13 20:09:23.357453675 +0000 UTC m=+40.668855191" lastFinishedPulling="2025-01-13 20:09:28.020830267 +0000 UTC m=+45.332231783" observedRunningTime="2025-01-13 20:09:28.601570954 +0000 UTC m=+45.912972494" watchObservedRunningTime="2025-01-13 20:09:28.602271886 +0000 UTC m=+45.913673414"
Jan 13 20:09:29.395971 kubelet[2435]: E0113 20:09:29.395899    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:30.397167 kubelet[2435]: E0113 20:09:30.397096    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:31.398059 kubelet[2435]: E0113 20:09:31.397979    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:32.399010 kubelet[2435]: E0113 20:09:32.398948    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:33.400080 kubelet[2435]: E0113 20:09:33.400001    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:33.816309 kubelet[2435]: I0113 20:09:33.816215    2435 topology_manager.go:215] "Topology Admit Handler" podUID="50d62070-7104-423e-9d0b-b15952d335c7" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 13 20:09:33.836550 systemd[1]: Created slice kubepods-besteffort-pod50d62070_7104_423e_9d0b_b15952d335c7.slice - libcontainer container kubepods-besteffort-pod50d62070_7104_423e_9d0b_b15952d335c7.slice.
Jan 13 20:09:33.974454 kubelet[2435]: I0113 20:09:33.974330    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvd6j\" (UniqueName: \"kubernetes.io/projected/50d62070-7104-423e-9d0b-b15952d335c7-kube-api-access-fvd6j\") pod \"nfs-server-provisioner-0\" (UID: \"50d62070-7104-423e-9d0b-b15952d335c7\") " pod="default/nfs-server-provisioner-0"
Jan 13 20:09:33.974454 kubelet[2435]: I0113 20:09:33.974429    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/50d62070-7104-423e-9d0b-b15952d335c7-data\") pod \"nfs-server-provisioner-0\" (UID: \"50d62070-7104-423e-9d0b-b15952d335c7\") " pod="default/nfs-server-provisioner-0"
Jan 13 20:09:34.143322 containerd[1957]: time="2025-01-13T20:09:34.142818193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:50d62070-7104-423e-9d0b-b15952d335c7,Namespace:default,Attempt:0,}"
Jan 13 20:09:34.188714 systemd-networkd[1852]: lxc5974cf6ed6f8: Link UP
Jan 13 20:09:34.195454 (udev-worker)[3798]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:09:34.200565 kernel: eth0: renamed from tmpe0ab8
Jan 13 20:09:34.206830 systemd-networkd[1852]: lxc5974cf6ed6f8: Gained carrier
Jan 13 20:09:34.400329 kubelet[2435]: E0113 20:09:34.400190    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:34.585289 containerd[1957]: time="2025-01-13T20:09:34.585138891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:09:34.586121 containerd[1957]: time="2025-01-13T20:09:34.585252915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:09:34.586255 containerd[1957]: time="2025-01-13T20:09:34.586086591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:34.586410 containerd[1957]: time="2025-01-13T20:09:34.586284795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:34.622690 systemd[1]: Started cri-containerd-e0ab8aa6b21e46151f24e8d60a32ea29ee0ece4248b0edfaaa689fef2d521dd4.scope - libcontainer container e0ab8aa6b21e46151f24e8d60a32ea29ee0ece4248b0edfaaa689fef2d521dd4.
Jan 13 20:09:34.683711 containerd[1957]: time="2025-01-13T20:09:34.683535892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:50d62070-7104-423e-9d0b-b15952d335c7,Namespace:default,Attempt:0,} returns sandbox id \"e0ab8aa6b21e46151f24e8d60a32ea29ee0ece4248b0edfaaa689fef2d521dd4\""
Jan 13 20:09:34.687927 containerd[1957]: time="2025-01-13T20:09:34.687771952Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 20:09:35.401913 kubelet[2435]: E0113 20:09:35.401820    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:36.226667 systemd-networkd[1852]: lxc5974cf6ed6f8: Gained IPv6LL
Jan 13 20:09:36.402411 kubelet[2435]: E0113 20:09:36.402337    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:37.160322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3261133637.mount: Deactivated successfully.
Jan 13 20:09:37.403157 kubelet[2435]: E0113 20:09:37.403073    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:38.404393 kubelet[2435]: E0113 20:09:38.404144    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:39.014720 ntpd[1921]: Listen normally on 14 lxc5974cf6ed6f8 [fe80::f091:ebff:fe64:44ec%11]:123
Jan 13 20:09:39.404898 kubelet[2435]: E0113 20:09:39.404379    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:40.365195 containerd[1957]: time="2025-01-13T20:09:40.365113316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:40.367752 containerd[1957]: time="2025-01-13T20:09:40.367381292Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623"
Jan 13 20:09:40.369391 containerd[1957]: time="2025-01-13T20:09:40.368816768Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:40.374228 containerd[1957]: time="2025-01-13T20:09:40.374162540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:40.376563 containerd[1957]: time="2025-01-13T20:09:40.376510688Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.688681772s"
Jan 13 20:09:40.376732 containerd[1957]: time="2025-01-13T20:09:40.376701008Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Jan 13 20:09:40.381697 containerd[1957]: time="2025-01-13T20:09:40.381622808Z" level=info msg="CreateContainer within sandbox \"e0ab8aa6b21e46151f24e8d60a32ea29ee0ece4248b0edfaaa689fef2d521dd4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 20:09:40.405407 kubelet[2435]: E0113 20:09:40.405310    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:40.406805 containerd[1957]: time="2025-01-13T20:09:40.406738376Z" level=info msg="CreateContainer within sandbox \"e0ab8aa6b21e46151f24e8d60a32ea29ee0ece4248b0edfaaa689fef2d521dd4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"31042f03cc340ffb06e51e6204a0782fd790a1767df8a9ef44b5e8ecf663dfc4\""
Jan 13 20:09:40.407581 containerd[1957]: time="2025-01-13T20:09:40.407491736Z" level=info msg="StartContainer for \"31042f03cc340ffb06e51e6204a0782fd790a1767df8a9ef44b5e8ecf663dfc4\""
Jan 13 20:09:40.463688 systemd[1]: Started cri-containerd-31042f03cc340ffb06e51e6204a0782fd790a1767df8a9ef44b5e8ecf663dfc4.scope - libcontainer container 31042f03cc340ffb06e51e6204a0782fd790a1767df8a9ef44b5e8ecf663dfc4.
Jan 13 20:09:40.509974 containerd[1957]: time="2025-01-13T20:09:40.508724133Z" level=info msg="StartContainer for \"31042f03cc340ffb06e51e6204a0782fd790a1767df8a9ef44b5e8ecf663dfc4\" returns successfully"
Jan 13 20:09:40.646291 kubelet[2435]: I0113 20:09:40.646033    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.9546612049999998 podStartE2EDuration="7.645984249s" podCreationTimestamp="2025-01-13 20:09:33 +0000 UTC" firstStartedPulling="2025-01-13 20:09:34.6869872 +0000 UTC m=+51.998388704" lastFinishedPulling="2025-01-13 20:09:40.378310244 +0000 UTC m=+57.689711748" observedRunningTime="2025-01-13 20:09:40.645143277 +0000 UTC m=+57.956544829" watchObservedRunningTime="2025-01-13 20:09:40.645984249 +0000 UTC m=+57.957385765"
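The two durations in the pod_startup_latency_tracker line above can be reproduced from the timestamps it reports: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling). A back-of-the-envelope check in Python, purely illustrative and not kubelet's implementation (fractional seconds truncated to microseconds):

    from datetime import datetime, timezone

    # Timestamps copied from the log line above (UTC).
    created    = datetime(2025, 1, 13, 20, 9, 33, 0,      tzinfo=timezone.utc)  # podCreationTimestamp
    pull_start = datetime(2025, 1, 13, 20, 9, 34, 686987, tzinfo=timezone.utc)  # firstStartedPulling
    pull_end   = datetime(2025, 1, 13, 20, 9, 40, 378310, tzinfo=timezone.utc)  # lastFinishedPulling
    observed   = datetime(2025, 1, 13, 20, 9, 40, 645984, tzinfo=timezone.utc)  # watchObservedRunningTime

    e2e = (observed - created).total_seconds()            # ~7.646s -> podStartE2EDuration
    slo = e2e - (pull_end - pull_start).total_seconds()   # ~1.955s -> podStartSLOduration
    print(e2e, slo)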
Jan 13 20:09:41.405827 kubelet[2435]: E0113 20:09:41.405755    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:42.406463 kubelet[2435]: E0113 20:09:42.406397    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:43.406650 kubelet[2435]: E0113 20:09:43.406577    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:44.273778 kubelet[2435]: E0113 20:09:44.273714    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:44.406874 kubelet[2435]: E0113 20:09:44.406796    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:45.407843 kubelet[2435]: E0113 20:09:45.407789    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:46.408912 kubelet[2435]: E0113 20:09:46.408819    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:47.409786 kubelet[2435]: E0113 20:09:47.409728    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:48.411503 kubelet[2435]: E0113 20:09:48.411433    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:49.412530 kubelet[2435]: E0113 20:09:49.412464    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:50.251931 kubelet[2435]: I0113 20:09:50.251832    2435 topology_manager.go:215] "Topology Admit Handler" podUID="47900eab-1f18-4491-961d-f5eeae9c2b07" podNamespace="default" podName="test-pod-1"
Jan 13 20:09:50.262302 systemd[1]: Created slice kubepods-besteffort-pod47900eab_1f18_4491_961d_f5eeae9c2b07.slice - libcontainer container kubepods-besteffort-pod47900eab_1f18_4491_961d_f5eeae9c2b07.slice.
Jan 13 20:09:50.380967 kubelet[2435]: I0113 20:09:50.380889    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnngm\" (UniqueName: \"kubernetes.io/projected/47900eab-1f18-4491-961d-f5eeae9c2b07-kube-api-access-lnngm\") pod \"test-pod-1\" (UID: \"47900eab-1f18-4491-961d-f5eeae9c2b07\") " pod="default/test-pod-1"
Jan 13 20:09:50.380967 kubelet[2435]: I0113 20:09:50.380969    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-26076f35-a00e-4874-91a6-be9f8e0ffb28\" (UniqueName: \"kubernetes.io/nfs/47900eab-1f18-4491-961d-f5eeae9c2b07-pvc-26076f35-a00e-4874-91a6-be9f8e0ffb28\") pod \"test-pod-1\" (UID: \"47900eab-1f18-4491-961d-f5eeae9c2b07\") " pod="default/test-pod-1"
Jan 13 20:09:50.413009 kubelet[2435]: E0113 20:09:50.412950    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:50.519659 kernel: FS-Cache: Loaded
Jan 13 20:09:50.562105 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 20:09:50.562250 kernel: RPC: Registered udp transport module.
Jan 13 20:09:50.562294 kernel: RPC: Registered tcp transport module.
Jan 13 20:09:50.564190 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 20:09:50.564285 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 20:09:50.888158 kernel: NFS: Registering the id_resolver key type
Jan 13 20:09:50.888284 kernel: Key type id_resolver registered
Jan 13 20:09:50.888324 kernel: Key type id_legacy registered
Jan 13 20:09:50.926692 nfsidmap[3988]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 13 20:09:50.932938 nfsidmap[3989]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
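The two nfsidmap messages above are the NFSv4 ID-mapping step for the freshly mounted PVC: the server sends owners as user@domain strings, and the client only maps them to local UIDs/GIDs when the domain part matches its own idmapping domain (here us-west-2.compute.internal). Because the incoming domain is nfs-server-provisioner.default.svc.cluster.local, the lookup fails and the client typically falls back to its configured nobody/nogroup identity. A rough sketch of that comparison, illustrative only:

    from typing import Optional

    def map_nfs4_owner(owner: str, local_domain: str) -> Optional[str]:
        """Return the local name if the NFSv4 owner's domain matches the client's
        idmapping domain; None means 'does not map', as in the messages above."""
        name, _, domain = owner.partition("@")
        if domain.lower() != local_domain.lower():
            return None
        return name

    # The case logged above: domains differ, so the mapping fails.
    print(map_nfs4_owner("root@nfs-server-provisioner.default.svc.cluster.local",
                         "us-west-2.compute.internal"))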
Jan 13 20:09:51.169196 containerd[1957]: time="2025-01-13T20:09:51.169057974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:47900eab-1f18-4491-961d-f5eeae9c2b07,Namespace:default,Attempt:0,}"
Jan 13 20:09:51.216619 (udev-worker)[3975]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:09:51.217131 (udev-worker)[3979]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:09:51.220792 systemd-networkd[1852]: lxc966bcb99c7eb: Link UP
Jan 13 20:09:51.229401 kernel: eth0: renamed from tmp10407
Jan 13 20:09:51.237709 systemd-networkd[1852]: lxc966bcb99c7eb: Gained carrier
Jan 13 20:09:51.413275 kubelet[2435]: E0113 20:09:51.413222    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:51.559932 containerd[1957]: time="2025-01-13T20:09:51.559505468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:09:51.559932 containerd[1957]: time="2025-01-13T20:09:51.559590740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:09:51.559932 containerd[1957]: time="2025-01-13T20:09:51.559615472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:51.559932 containerd[1957]: time="2025-01-13T20:09:51.559739600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:51.598692 systemd[1]: Started cri-containerd-10407f5108d3622d01480fba3ac55d225164f3e647ef8c4677f9301218695dc0.scope - libcontainer container 10407f5108d3622d01480fba3ac55d225164f3e647ef8c4677f9301218695dc0.
Jan 13 20:09:51.659840 containerd[1957]: time="2025-01-13T20:09:51.659763668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:47900eab-1f18-4491-961d-f5eeae9c2b07,Namespace:default,Attempt:0,} returns sandbox id \"10407f5108d3622d01480fba3ac55d225164f3e647ef8c4677f9301218695dc0\""
Jan 13 20:09:51.663615 containerd[1957]: time="2025-01-13T20:09:51.663557144Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:09:52.109081 containerd[1957]: time="2025-01-13T20:09:52.109016622Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:09:52.110031 containerd[1957]: time="2025-01-13T20:09:52.109952370Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 20:09:52.116239 containerd[1957]: time="2025-01-13T20:09:52.116177730Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 452.563478ms"
Jan 13 20:09:52.116239 containerd[1957]: time="2025-01-13T20:09:52.116234094Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\""
Jan 13 20:09:52.119417 containerd[1957]: time="2025-01-13T20:09:52.119279118Z" level=info msg="CreateContainer within sandbox \"10407f5108d3622d01480fba3ac55d225164f3e647ef8c4677f9301218695dc0\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 20:09:52.140498 containerd[1957]: time="2025-01-13T20:09:52.140419014Z" level=info msg="CreateContainer within sandbox \"10407f5108d3622d01480fba3ac55d225164f3e647ef8c4677f9301218695dc0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3fb6aac414df69fd9daabe8a744050dec84144177bc10e1de0c4421a7f5ee1b0\""
Jan 13 20:09:52.143993 containerd[1957]: time="2025-01-13T20:09:52.141621882Z" level=info msg="StartContainer for \"3fb6aac414df69fd9daabe8a744050dec84144177bc10e1de0c4421a7f5ee1b0\""
Jan 13 20:09:52.184691 systemd[1]: Started cri-containerd-3fb6aac414df69fd9daabe8a744050dec84144177bc10e1de0c4421a7f5ee1b0.scope - libcontainer container 3fb6aac414df69fd9daabe8a744050dec84144177bc10e1de0c4421a7f5ee1b0.
Jan 13 20:09:52.230490 containerd[1957]: time="2025-01-13T20:09:52.229576399Z" level=info msg="StartContainer for \"3fb6aac414df69fd9daabe8a744050dec84144177bc10e1de0c4421a7f5ee1b0\" returns successfully"
Jan 13 20:09:52.415200 kubelet[2435]: E0113 20:09:52.415021    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:52.418616 systemd-networkd[1852]: lxc966bcb99c7eb: Gained IPv6LL
Jan 13 20:09:52.678419 kubelet[2435]: I0113 20:09:52.678203    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.224097823 podStartE2EDuration="18.678181833s" podCreationTimestamp="2025-01-13 20:09:34 +0000 UTC" firstStartedPulling="2025-01-13 20:09:51.663036224 +0000 UTC m=+68.974437740" lastFinishedPulling="2025-01-13 20:09:52.117120246 +0000 UTC m=+69.428521750" observedRunningTime="2025-01-13 20:09:52.677978361 +0000 UTC m=+69.989379889" watchObservedRunningTime="2025-01-13 20:09:52.678181833 +0000 UTC m=+69.989583361"
Jan 13 20:09:53.416055 kubelet[2435]: E0113 20:09:53.415979    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:54.416504 kubelet[2435]: E0113 20:09:54.416446    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:55.014745 ntpd[1921]: Listen normally on 15 lxc966bcb99c7eb [fe80::e4a8:acff:fe05:f2b6%13]:123
Jan 13 20:09:55.417075 kubelet[2435]: E0113 20:09:55.416916    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:56.417176 kubelet[2435]: E0113 20:09:56.417099    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:57.418146 kubelet[2435]: E0113 20:09:57.418087    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:58.418752 kubelet[2435]: E0113 20:09:58.418684    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:59.419356 kubelet[2435]: E0113 20:09:59.419287    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:00.420482 kubelet[2435]: E0113 20:10:00.420415    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:01.131733 containerd[1957]: time="2025-01-13T20:10:01.131664867Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
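The error above comes from containerd's CRI plugin reacting to the removal of /etc/cni/net.d/05-cilium.conf: once the directory holds no remaining network configuration (commonly *.conf, *.conflist or *.json files), the reload reports the CNI plugin as not initialized, which is what later surfaces as the node's NetworkReady=false condition further down. A simplified illustration of that presence check, not containerd's actual code:

    import glob
    import os

    def cni_config_present(net_dir: str = "/etc/cni/net.d") -> bool:
        """Mimic the 'no network config found' check: is any CNI config file left?"""
        patterns = ("*.conf", "*.conflist", "*.json")
        return any(glob.glob(os.path.join(net_dir, p)) for p in patterns)

    if not cni_config_present():
        print("cni config load failed: no network config found in /etc/cni/net.d")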
Jan 13 20:10:01.142936 containerd[1957]: time="2025-01-13T20:10:01.142763871Z" level=info msg="StopContainer for \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\" with timeout 2 (s)"
Jan 13 20:10:01.143530 containerd[1957]: time="2025-01-13T20:10:01.143196747Z" level=info msg="Stop container \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\" with signal terminated"
Jan 13 20:10:01.155569 systemd-networkd[1852]: lxc_health: Link DOWN
Jan 13 20:10:01.155589 systemd-networkd[1852]: lxc_health: Lost carrier
Jan 13 20:10:01.183452 systemd[1]: cri-containerd-999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0.scope: Deactivated successfully.
Jan 13 20:10:01.184314 systemd[1]: cri-containerd-999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0.scope: Consumed 14.144s CPU time.
Jan 13 20:10:01.221054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0-rootfs.mount: Deactivated successfully.
Jan 13 20:10:01.420947 kubelet[2435]: E0113 20:10:01.420767    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:01.433998 containerd[1957]: time="2025-01-13T20:10:01.433742609Z" level=info msg="shim disconnected" id=999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0 namespace=k8s.io
Jan 13 20:10:01.434177 containerd[1957]: time="2025-01-13T20:10:01.434012405Z" level=warning msg="cleaning up after shim disconnected" id=999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0 namespace=k8s.io
Jan 13 20:10:01.434177 containerd[1957]: time="2025-01-13T20:10:01.434035553Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:01.456313 containerd[1957]: time="2025-01-13T20:10:01.456161381Z" level=info msg="StopContainer for \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\" returns successfully"
Jan 13 20:10:01.457206 containerd[1957]: time="2025-01-13T20:10:01.457159997Z" level=info msg="StopPodSandbox for \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\""
Jan 13 20:10:01.457292 containerd[1957]: time="2025-01-13T20:10:01.457256201Z" level=info msg="Container to stop \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:10:01.457406 containerd[1957]: time="2025-01-13T20:10:01.457282745Z" level=info msg="Container to stop \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:10:01.457476 containerd[1957]: time="2025-01-13T20:10:01.457329797Z" level=info msg="Container to stop \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:10:01.457476 containerd[1957]: time="2025-01-13T20:10:01.457435913Z" level=info msg="Container to stop \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:10:01.457580 containerd[1957]: time="2025-01-13T20:10:01.457469033Z" level=info msg="Container to stop \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:10:01.460731 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74-shm.mount: Deactivated successfully.
Jan 13 20:10:01.471254 systemd[1]: cri-containerd-d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74.scope: Deactivated successfully.
Jan 13 20:10:01.503075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74-rootfs.mount: Deactivated successfully.
Jan 13 20:10:01.508880 containerd[1957]: time="2025-01-13T20:10:01.508817189Z" level=info msg="shim disconnected" id=d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74 namespace=k8s.io
Jan 13 20:10:01.508880 containerd[1957]: time="2025-01-13T20:10:01.508868945Z" level=warning msg="cleaning up after shim disconnected" id=d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74 namespace=k8s.io
Jan 13 20:10:01.509390 containerd[1957]: time="2025-01-13T20:10:01.508891061Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:01.529813 containerd[1957]: time="2025-01-13T20:10:01.529695725Z" level=info msg="TearDown network for sandbox \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" successfully"
Jan 13 20:10:01.529813 containerd[1957]: time="2025-01-13T20:10:01.529757681Z" level=info msg="StopPodSandbox for \"d62ac96aa5c2d28a8478fc04c4a3976d7a885356f7729ef0ab688c20ab9e0b74\" returns successfully"
Jan 13 20:10:01.654540 kubelet[2435]: I0113 20:10:01.654477    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-lib-modules\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.654738 kubelet[2435]: I0113 20:10:01.654546    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-host-proc-sys-kernel\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.654738 kubelet[2435]: I0113 20:10:01.654593    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65tw7\" (UniqueName: \"kubernetes.io/projected/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-kube-api-access-65tw7\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.654738 kubelet[2435]: I0113 20:10:01.654630    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-clustermesh-secrets\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.654738 kubelet[2435]: I0113 20:10:01.654667    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-hubble-tls\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.654738 kubelet[2435]: I0113 20:10:01.654705    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-config-path\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.654738 kubelet[2435]: I0113 20:10:01.654737    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-xtables-lock\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.655049 kubelet[2435]: I0113 20:10:01.654770    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-run\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.655049 kubelet[2435]: I0113 20:10:01.654801    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-cgroup\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.655049 kubelet[2435]: I0113 20:10:01.654834    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-host-proc-sys-net\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.655049 kubelet[2435]: I0113 20:10:01.654866    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-hostproc\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.655049 kubelet[2435]: I0113 20:10:01.654900    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-bpf-maps\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.655049 kubelet[2435]: I0113 20:10:01.654934    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cni-path\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.655477 kubelet[2435]: I0113 20:10:01.654969    2435 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-etc-cni-netd\") pod \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\" (UID: \"7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9\") "
Jan 13 20:10:01.655477 kubelet[2435]: I0113 20:10:01.655062    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.655477 kubelet[2435]: I0113 20:10:01.655118    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.655477 kubelet[2435]: I0113 20:10:01.655155    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.655711 kubelet[2435]: I0113 20:10:01.655679    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.662375 kubelet[2435]: I0113 20:10:01.659643    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.662375 kubelet[2435]: I0113 20:10:01.659726    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.662375 kubelet[2435]: I0113 20:10:01.659798    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-hostproc" (OuterVolumeSpecName: "hostproc") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.662375 kubelet[2435]: I0113 20:10:01.659851    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.662375 kubelet[2435]: I0113 20:10:01.659895    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cni-path" (OuterVolumeSpecName: "cni-path") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.662949 kubelet[2435]: I0113 20:10:01.662901    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-kube-api-access-65tw7" (OuterVolumeSpecName: "kube-api-access-65tw7") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "kube-api-access-65tw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:10:01.663126 kubelet[2435]: I0113 20:10:01.663098    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:10:01.663951 systemd[1]: var-lib-kubelet-pods-7b5b5acf\x2d91e7\x2d4805\x2da6fb\x2d2c0c86f2b4c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d65tw7.mount: Deactivated successfully.
Jan 13 20:10:01.664218 systemd[1]: var-lib-kubelet-pods-7b5b5acf\x2d91e7\x2d4805\x2da6fb\x2d2c0c86f2b4c9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
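The mount unit names in the two lines above are systemd's path escaping of the kubelet volume directories: '/' separators become '-', and characters such as '-' and '~' inside a path component are hex-escaped, which is why kube-api-access-65tw7 appears as kube\x2dapi\x2daccess\x2d65tw7. A small approximation of that rule, assuming an absolute path with no leading-dot components; systemd-escape --path is the authoritative tool:

    def systemd_escape_path(path: str) -> str:
        """Approximate 'systemd-escape --path': drop the leading '/', hex-escape
        unsafe characters in each component, and join components with '-'."""
        def esc(component: str) -> str:
            out = []
            for i, ch in enumerate(component):
                if ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                    out.append(ch)
                else:
                    out.append("\\x%02x" % ord(ch))
            return "".join(out)
        return "-".join(esc(c) for c in path.strip("/").split("/"))

    # Reproduces the kube-api-access mount unit name logged above.
    print(systemd_escape_path(
        "/var/lib/kubelet/pods/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"
        "/volumes/kubernetes.io~projected/kube-api-access-65tw7") + ".mount")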
Jan 13 20:10:01.668676 kubelet[2435]: I0113 20:10:01.668622    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:10:01.671253 kubelet[2435]: I0113 20:10:01.671098    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:10:01.673879 kubelet[2435]: I0113 20:10:01.673785    2435 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" (UID: "7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:10:01.692372 kubelet[2435]: I0113 20:10:01.690901    2435 scope.go:117] "RemoveContainer" containerID="999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0"
Jan 13 20:10:01.695326 containerd[1957]: time="2025-01-13T20:10:01.695260698Z" level=info msg="RemoveContainer for \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\""
Jan 13 20:10:01.701621 containerd[1957]: time="2025-01-13T20:10:01.701555406Z" level=info msg="RemoveContainer for \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\" returns successfully"
Jan 13 20:10:01.702049 kubelet[2435]: I0113 20:10:01.701997    2435 scope.go:117] "RemoveContainer" containerID="b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b"
Jan 13 20:10:01.702865 systemd[1]: Removed slice kubepods-burstable-pod7b5b5acf_91e7_4805_a6fb_2c0c86f2b4c9.slice - libcontainer container kubepods-burstable-pod7b5b5acf_91e7_4805_a6fb_2c0c86f2b4c9.slice.
Jan 13 20:10:01.703121 systemd[1]: kubepods-burstable-pod7b5b5acf_91e7_4805_a6fb_2c0c86f2b4c9.slice: Consumed 14.296s CPU time.
Jan 13 20:10:01.705143 containerd[1957]: time="2025-01-13T20:10:01.704591070Z" level=info msg="RemoveContainer for \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\""
Jan 13 20:10:01.709928 containerd[1957]: time="2025-01-13T20:10:01.709852074Z" level=info msg="RemoveContainer for \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\" returns successfully"
Jan 13 20:10:01.710386 kubelet[2435]: I0113 20:10:01.710185    2435 scope.go:117] "RemoveContainer" containerID="5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796"
Jan 13 20:10:01.712659 containerd[1957]: time="2025-01-13T20:10:01.712610130Z" level=info msg="RemoveContainer for \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\""
Jan 13 20:10:01.717200 containerd[1957]: time="2025-01-13T20:10:01.717040422Z" level=info msg="RemoveContainer for \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\" returns successfully"
Jan 13 20:10:01.717641 kubelet[2435]: I0113 20:10:01.717606    2435 scope.go:117] "RemoveContainer" containerID="99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478"
Jan 13 20:10:01.719696 containerd[1957]: time="2025-01-13T20:10:01.719573478Z" level=info msg="RemoveContainer for \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\""
Jan 13 20:10:01.723209 containerd[1957]: time="2025-01-13T20:10:01.723142506Z" level=info msg="RemoveContainer for \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\" returns successfully"
Jan 13 20:10:01.723717 kubelet[2435]: I0113 20:10:01.723497    2435 scope.go:117] "RemoveContainer" containerID="33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d"
Jan 13 20:10:01.726076 containerd[1957]: time="2025-01-13T20:10:01.725853246Z" level=info msg="RemoveContainer for \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\""
Jan 13 20:10:01.730775 containerd[1957]: time="2025-01-13T20:10:01.730625022Z" level=info msg="RemoveContainer for \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\" returns successfully"
Jan 13 20:10:01.731197 kubelet[2435]: I0113 20:10:01.731053    2435 scope.go:117] "RemoveContainer" containerID="999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0"
Jan 13 20:10:01.731729 containerd[1957]: time="2025-01-13T20:10:01.731486142Z" level=error msg="ContainerStatus for \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\": not found"
Jan 13 20:10:01.731821 kubelet[2435]: E0113 20:10:01.731765    2435 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\": not found" containerID="999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0"
Jan 13 20:10:01.731946 kubelet[2435]: I0113 20:10:01.731813    2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0"} err="failed to get container status \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"999e4035bfbc151c25ae8bf7ba07d331462acf31c7e6b1d811ba53fbb44b30a0\": not found"
Jan 13 20:10:01.731946 kubelet[2435]: I0113 20:10:01.731938    2435 scope.go:117] "RemoveContainer" containerID="b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b"
Jan 13 20:10:01.732318 containerd[1957]: time="2025-01-13T20:10:01.732249870Z" level=error msg="ContainerStatus for \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\": not found"
Jan 13 20:10:01.732741 kubelet[2435]: E0113 20:10:01.732707    2435 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\": not found" containerID="b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b"
Jan 13 20:10:01.732937 kubelet[2435]: I0113 20:10:01.732896    2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b"} err="failed to get container status \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7803fb68131706de6d3fcb4886f9fafdd9250b4cc7cff8c15d690b64613595b\": not found"
Jan 13 20:10:01.732937 kubelet[2435]: I0113 20:10:01.732976    2435 scope.go:117] "RemoveContainer" containerID="5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796"
Jan 13 20:10:01.733635 containerd[1957]: time="2025-01-13T20:10:01.733470678Z" level=error msg="ContainerStatus for \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\": not found"
Jan 13 20:10:01.733828 kubelet[2435]: E0113 20:10:01.733746    2435 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\": not found" containerID="5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796"
Jan 13 20:10:01.733828 kubelet[2435]: I0113 20:10:01.733789    2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796"} err="failed to get container status \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fee31279fdd18f997cc8c6a132b1e13cb5296c268c5f8b903be214444b2d796\": not found"
Jan 13 20:10:01.733828 kubelet[2435]: I0113 20:10:01.733821    2435 scope.go:117] "RemoveContainer" containerID="99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478"
Jan 13 20:10:01.734186 containerd[1957]: time="2025-01-13T20:10:01.734112054Z" level=error msg="ContainerStatus for \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\": not found"
Jan 13 20:10:01.734600 kubelet[2435]: E0113 20:10:01.734367    2435 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\": not found" containerID="99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478"
Jan 13 20:10:01.734600 kubelet[2435]: I0113 20:10:01.734412    2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478"} err="failed to get container status \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\": rpc error: code = NotFound desc = an error occurred when try to find container \"99570f3fe91f17ecb579d3d86526fcfe2d7f37a06c7aaf0be55d3d86657d0478\": not found"
Jan 13 20:10:01.734600 kubelet[2435]: I0113 20:10:01.734443    2435 scope.go:117] "RemoveContainer" containerID="33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d"
Jan 13 20:10:01.735182 containerd[1957]: time="2025-01-13T20:10:01.735093822Z" level=error msg="ContainerStatus for \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\": not found"
Jan 13 20:10:01.735499 kubelet[2435]: E0113 20:10:01.735458    2435 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\": not found" containerID="33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d"
Jan 13 20:10:01.735664 kubelet[2435]: I0113 20:10:01.735623    2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d"} err="failed to get container status \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\": rpc error: code = NotFound desc = an error occurred when try to find container \"33ad2cd97b3152ba724862fe68c088b5808de028026df87a4016ba922885c68d\": not found"
Jan 13 20:10:01.755913 kubelet[2435]: I0113 20:10:01.755862    2435 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-config-path\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.755913 kubelet[2435]: I0113 20:10:01.755914    2435 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-xtables-lock\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756124 kubelet[2435]: I0113 20:10:01.755938    2435 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-run\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756124 kubelet[2435]: I0113 20:10:01.755958    2435 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cilium-cgroup\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756124 kubelet[2435]: I0113 20:10:01.755978    2435 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-host-proc-sys-net\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756124 kubelet[2435]: I0113 20:10:01.755997    2435 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-hostproc\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756124 kubelet[2435]: I0113 20:10:01.756015    2435 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-bpf-maps\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756124 kubelet[2435]: I0113 20:10:01.756034    2435 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-cni-path\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756124 kubelet[2435]: I0113 20:10:01.756053    2435 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-etc-cni-netd\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756124 kubelet[2435]: I0113 20:10:01.756073    2435 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-lib-modules\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756545 kubelet[2435]: I0113 20:10:01.756094    2435 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-host-proc-sys-kernel\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756545 kubelet[2435]: I0113 20:10:01.756113    2435 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-65tw7\" (UniqueName: \"kubernetes.io/projected/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-kube-api-access-65tw7\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756545 kubelet[2435]: I0113 20:10:01.756137    2435 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-clustermesh-secrets\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:01.756545 kubelet[2435]: I0113 20:10:01.756157    2435 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9-hubble-tls\") on node \"172.31.18.148\" DevicePath \"\""
Jan 13 20:10:02.112315 systemd[1]: var-lib-kubelet-pods-7b5b5acf\x2d91e7\x2d4805\x2da6fb\x2d2c0c86f2b4c9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:10:02.421959 kubelet[2435]: E0113 20:10:02.421808    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:02.426395 kubelet[2435]: I0113 20:10:02.425609    2435 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" path="/var/lib/kubelet/pods/7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9/volumes"
Jan 13 20:10:03.422436 kubelet[2435]: E0113 20:10:03.422369    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:04.014692 ntpd[1921]: Deleting interface #12 lxc_health, fe80::2878:deff:fed5:4cfc%7#123, interface stats: received=0, sent=0, dropped=0, active_time=45 secs
Jan 13 20:10:04.273417 kubelet[2435]: E0113 20:10:04.273245    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:04.422767 kubelet[2435]: E0113 20:10:04.422708    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:04.465075 kubelet[2435]: E0113 20:10:04.464983    2435 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:10:05.423944 kubelet[2435]: E0113 20:10:05.423875    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:06.188277 kubelet[2435]: I0113 20:10:06.187937    2435 topology_manager.go:215] "Topology Admit Handler" podUID="55a5dd44-2efd-4698-a089-a90bd7876a50" podNamespace="kube-system" podName="cilium-operator-599987898-gm6zw"
Jan 13 20:10:06.188277 kubelet[2435]: E0113 20:10:06.188011    2435 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" containerName="clean-cilium-state"
Jan 13 20:10:06.188277 kubelet[2435]: E0113 20:10:06.188034    2435 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" containerName="cilium-agent"
Jan 13 20:10:06.188277 kubelet[2435]: E0113 20:10:06.188050    2435 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" containerName="mount-cgroup"
Jan 13 20:10:06.188277 kubelet[2435]: E0113 20:10:06.188064    2435 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" containerName="apply-sysctl-overwrites"
Jan 13 20:10:06.188277 kubelet[2435]: E0113 20:10:06.188079    2435 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" containerName="mount-bpf-fs"
Jan 13 20:10:06.188277 kubelet[2435]: I0113 20:10:06.188116    2435 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5b5acf-91e7-4805-a6fb-2c0c86f2b4c9" containerName="cilium-agent"
Jan 13 20:10:06.199083 systemd[1]: Created slice kubepods-besteffort-pod55a5dd44_2efd_4698_a089_a90bd7876a50.slice - libcontainer container kubepods-besteffort-pod55a5dd44_2efd_4698_a089_a90bd7876a50.slice.
Jan 13 20:10:06.284453 kubelet[2435]: I0113 20:10:06.284375    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wb2\" (UniqueName: \"kubernetes.io/projected/55a5dd44-2efd-4698-a089-a90bd7876a50-kube-api-access-96wb2\") pod \"cilium-operator-599987898-gm6zw\" (UID: \"55a5dd44-2efd-4698-a089-a90bd7876a50\") " pod="kube-system/cilium-operator-599987898-gm6zw"
Jan 13 20:10:06.284589 kubelet[2435]: I0113 20:10:06.284459    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55a5dd44-2efd-4698-a089-a90bd7876a50-cilium-config-path\") pod \"cilium-operator-599987898-gm6zw\" (UID: \"55a5dd44-2efd-4698-a089-a90bd7876a50\") " pod="kube-system/cilium-operator-599987898-gm6zw"
Jan 13 20:10:06.306544 kubelet[2435]: I0113 20:10:06.306477    2435 topology_manager.go:215] "Topology Admit Handler" podUID="80fc10ff-836c-467f-991b-0d79e800212a" podNamespace="kube-system" podName="cilium-trt85"
Jan 13 20:10:06.314978 kubelet[2435]: W0113 20:10:06.314877    2435 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.18.148" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.18.148' and this object
Jan 13 20:10:06.314978 kubelet[2435]: E0113 20:10:06.314951    2435 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.18.148" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.18.148' and this object
Jan 13 20:10:06.318687 systemd[1]: Created slice kubepods-burstable-pod80fc10ff_836c_467f_991b_0d79e800212a.slice - libcontainer container kubepods-burstable-pod80fc10ff_836c_467f_991b_0d79e800212a.slice.
Jan 13 20:10:06.424317 kubelet[2435]: E0113 20:10:06.424245    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:06.486771 kubelet[2435]: I0113 20:10:06.486616    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-hostproc\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.486771 kubelet[2435]: I0113 20:10:06.486681    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-etc-cni-netd\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.486975 kubelet[2435]: I0113 20:10:06.486722    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80fc10ff-836c-467f-991b-0d79e800212a-clustermesh-secrets\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.486975 kubelet[2435]: I0113 20:10:06.486864    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-cilium-cgroup\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.486975 kubelet[2435]: I0113 20:10:06.486899    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-cni-path\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.486975 kubelet[2435]: I0113 20:10:06.486940    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/80fc10ff-836c-467f-991b-0d79e800212a-cilium-ipsec-secrets\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.487170 kubelet[2435]: I0113 20:10:06.486978    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-host-proc-sys-kernel\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.487170 kubelet[2435]: I0113 20:10:06.487016    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-cilium-run\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.487170 kubelet[2435]: I0113 20:10:06.487056    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtnch\" (UniqueName: \"kubernetes.io/projected/80fc10ff-836c-467f-991b-0d79e800212a-kube-api-access-gtnch\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.487170 kubelet[2435]: I0113 20:10:06.487093    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-bpf-maps\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.487170 kubelet[2435]: I0113 20:10:06.487129    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-lib-modules\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.487466 kubelet[2435]: I0113 20:10:06.487172    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-xtables-lock\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.487466 kubelet[2435]: I0113 20:10:06.487207    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80fc10ff-836c-467f-991b-0d79e800212a-cilium-config-path\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.487466 kubelet[2435]: I0113 20:10:06.487243    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80fc10ff-836c-467f-991b-0d79e800212a-host-proc-sys-net\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.487466 kubelet[2435]: I0113 20:10:06.487278    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80fc10ff-836c-467f-991b-0d79e800212a-hubble-tls\") pod \"cilium-trt85\" (UID: \"80fc10ff-836c-467f-991b-0d79e800212a\") " pod="kube-system/cilium-trt85"
Jan 13 20:10:06.490193 kubelet[2435]: I0113 20:10:06.488904    2435 setters.go:580] "Node became not ready" node="172.31.18.148" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:10:06Z","lastTransitionTime":"2025-01-13T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:10:06.504983 containerd[1957]: time="2025-01-13T20:10:06.504759202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gm6zw,Uid:55a5dd44-2efd-4698-a089-a90bd7876a50,Namespace:kube-system,Attempt:0,}"
Jan 13 20:10:06.545617 containerd[1957]: time="2025-01-13T20:10:06.545409802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:10:06.545617 containerd[1957]: time="2025-01-13T20:10:06.545527054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:10:06.545617 containerd[1957]: time="2025-01-13T20:10:06.545577454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:10:06.546102 containerd[1957]: time="2025-01-13T20:10:06.545763898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:10:06.581024 systemd[1]: run-containerd-runc-k8s.io-9bec09c24c39f401f553314f2287fa0bee4e01e19036e82935abc7916f7b5f2b-runc.Y7M9gf.mount: Deactivated successfully.
Jan 13 20:10:06.593664 systemd[1]: Started cri-containerd-9bec09c24c39f401f553314f2287fa0bee4e01e19036e82935abc7916f7b5f2b.scope - libcontainer container 9bec09c24c39f401f553314f2287fa0bee4e01e19036e82935abc7916f7b5f2b.
Jan 13 20:10:06.662783 containerd[1957]: time="2025-01-13T20:10:06.662725259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gm6zw,Uid:55a5dd44-2efd-4698-a089-a90bd7876a50,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bec09c24c39f401f553314f2287fa0bee4e01e19036e82935abc7916f7b5f2b\""
Jan 13 20:10:06.666383 containerd[1957]: time="2025-01-13T20:10:06.666299711Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 20:10:07.424702 kubelet[2435]: E0113 20:10:07.424630    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:07.831538 containerd[1957]: time="2025-01-13T20:10:07.831385056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trt85,Uid:80fc10ff-836c-467f-991b-0d79e800212a,Namespace:kube-system,Attempt:0,}"
Jan 13 20:10:07.866119 containerd[1957]: time="2025-01-13T20:10:07.865929925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:10:07.866119 containerd[1957]: time="2025-01-13T20:10:07.866051017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:10:07.866119 containerd[1957]: time="2025-01-13T20:10:07.866077729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:10:07.866578 containerd[1957]: time="2025-01-13T20:10:07.866232733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:10:07.904690 systemd[1]: Started cri-containerd-6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6.scope - libcontainer container 6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6.
Jan 13 20:10:07.947817 containerd[1957]: time="2025-01-13T20:10:07.947753305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trt85,Uid:80fc10ff-836c-467f-991b-0d79e800212a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\""
Jan 13 20:10:07.953270 containerd[1957]: time="2025-01-13T20:10:07.953063929Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:10:07.978628 containerd[1957]: time="2025-01-13T20:10:07.978546949Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90722f0c499ec99f7b4121fe1cb3c29f045826d472aeed9f444ffda90acbdb5a\""
Jan 13 20:10:07.979713 containerd[1957]: time="2025-01-13T20:10:07.979622893Z" level=info msg="StartContainer for \"90722f0c499ec99f7b4121fe1cb3c29f045826d472aeed9f444ffda90acbdb5a\""
Jan 13 20:10:08.021678 systemd[1]: Started cri-containerd-90722f0c499ec99f7b4121fe1cb3c29f045826d472aeed9f444ffda90acbdb5a.scope - libcontainer container 90722f0c499ec99f7b4121fe1cb3c29f045826d472aeed9f444ffda90acbdb5a.
Jan 13 20:10:08.070029 containerd[1957]: time="2025-01-13T20:10:08.069682930Z" level=info msg="StartContainer for \"90722f0c499ec99f7b4121fe1cb3c29f045826d472aeed9f444ffda90acbdb5a\" returns successfully"
Jan 13 20:10:08.084277 systemd[1]: cri-containerd-90722f0c499ec99f7b4121fe1cb3c29f045826d472aeed9f444ffda90acbdb5a.scope: Deactivated successfully.
Jan 13 20:10:08.127576 containerd[1957]: time="2025-01-13T20:10:08.127501186Z" level=info msg="shim disconnected" id=90722f0c499ec99f7b4121fe1cb3c29f045826d472aeed9f444ffda90acbdb5a namespace=k8s.io
Jan 13 20:10:08.127876 containerd[1957]: time="2025-01-13T20:10:08.127844794Z" level=warning msg="cleaning up after shim disconnected" id=90722f0c499ec99f7b4121fe1cb3c29f045826d472aeed9f444ffda90acbdb5a namespace=k8s.io
Jan 13 20:10:08.127980 containerd[1957]: time="2025-01-13T20:10:08.127954870Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:08.425932 kubelet[2435]: E0113 20:10:08.425812    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:08.716056 containerd[1957]: time="2025-01-13T20:10:08.715808893Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:10:08.735280 containerd[1957]: time="2025-01-13T20:10:08.734983897Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0617c1387cde0541d041bd59bf49fb0fc875092deaded7306b83fd98d48cf129\""
Jan 13 20:10:08.737124 containerd[1957]: time="2025-01-13T20:10:08.736873933Z" level=info msg="StartContainer for \"0617c1387cde0541d041bd59bf49fb0fc875092deaded7306b83fd98d48cf129\""
Jan 13 20:10:08.785760 systemd[1]: Started cri-containerd-0617c1387cde0541d041bd59bf49fb0fc875092deaded7306b83fd98d48cf129.scope - libcontainer container 0617c1387cde0541d041bd59bf49fb0fc875092deaded7306b83fd98d48cf129.
Jan 13 20:10:08.837456 containerd[1957]: time="2025-01-13T20:10:08.837384517Z" level=info msg="StartContainer for \"0617c1387cde0541d041bd59bf49fb0fc875092deaded7306b83fd98d48cf129\" returns successfully"
Jan 13 20:10:08.844617 systemd[1]: cri-containerd-0617c1387cde0541d041bd59bf49fb0fc875092deaded7306b83fd98d48cf129.scope: Deactivated successfully.
Jan 13 20:10:08.883842 containerd[1957]: time="2025-01-13T20:10:08.883521446Z" level=info msg="shim disconnected" id=0617c1387cde0541d041bd59bf49fb0fc875092deaded7306b83fd98d48cf129 namespace=k8s.io
Jan 13 20:10:08.883842 containerd[1957]: time="2025-01-13T20:10:08.883593746Z" level=warning msg="cleaning up after shim disconnected" id=0617c1387cde0541d041bd59bf49fb0fc875092deaded7306b83fd98d48cf129 namespace=k8s.io
Jan 13 20:10:08.883842 containerd[1957]: time="2025-01-13T20:10:08.883617518Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:09.402615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0617c1387cde0541d041bd59bf49fb0fc875092deaded7306b83fd98d48cf129-rootfs.mount: Deactivated successfully.
Jan 13 20:10:09.427888 kubelet[2435]: E0113 20:10:09.427823    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:09.467002 kubelet[2435]: E0113 20:10:09.466933    2435 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:10:09.721102 containerd[1957]: time="2025-01-13T20:10:09.720673562Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:10:09.743186 containerd[1957]: time="2025-01-13T20:10:09.743102858Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4676ba528dd8283da2546004a1383d000db625b1355d57840f5cc5798175b40d\""
Jan 13 20:10:09.744076 containerd[1957]: time="2025-01-13T20:10:09.743876162Z" level=info msg="StartContainer for \"4676ba528dd8283da2546004a1383d000db625b1355d57840f5cc5798175b40d\""
Jan 13 20:10:09.794730 systemd[1]: Started cri-containerd-4676ba528dd8283da2546004a1383d000db625b1355d57840f5cc5798175b40d.scope - libcontainer container 4676ba528dd8283da2546004a1383d000db625b1355d57840f5cc5798175b40d.
Jan 13 20:10:09.859653 containerd[1957]: time="2025-01-13T20:10:09.859566890Z" level=info msg="StartContainer for \"4676ba528dd8283da2546004a1383d000db625b1355d57840f5cc5798175b40d\" returns successfully"
Jan 13 20:10:09.865914 systemd[1]: cri-containerd-4676ba528dd8283da2546004a1383d000db625b1355d57840f5cc5798175b40d.scope: Deactivated successfully.
Jan 13 20:10:09.909702 containerd[1957]: time="2025-01-13T20:10:09.909609639Z" level=info msg="shim disconnected" id=4676ba528dd8283da2546004a1383d000db625b1355d57840f5cc5798175b40d namespace=k8s.io
Jan 13 20:10:09.909702 containerd[1957]: time="2025-01-13T20:10:09.909685311Z" level=warning msg="cleaning up after shim disconnected" id=4676ba528dd8283da2546004a1383d000db625b1355d57840f5cc5798175b40d namespace=k8s.io
Jan 13 20:10:09.909702 containerd[1957]: time="2025-01-13T20:10:09.909707475Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:10.402734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4676ba528dd8283da2546004a1383d000db625b1355d57840f5cc5798175b40d-rootfs.mount: Deactivated successfully.
Jan 13 20:10:10.428700 kubelet[2435]: E0113 20:10:10.428640    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:10.732475 containerd[1957]: time="2025-01-13T20:10:10.731720559Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:10:10.803086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927634200.mount: Deactivated successfully.
Jan 13 20:10:10.809509 containerd[1957]: time="2025-01-13T20:10:10.809244639Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e493d87fcbe640a4003bb1540b8bf1a1df871fdff607f3cbd6cb9d4c4ea8a17\""
Jan 13 20:10:10.811003 containerd[1957]: time="2025-01-13T20:10:10.810736467Z" level=info msg="StartContainer for \"1e493d87fcbe640a4003bb1540b8bf1a1df871fdff607f3cbd6cb9d4c4ea8a17\""
Jan 13 20:10:10.874650 systemd[1]: Started cri-containerd-1e493d87fcbe640a4003bb1540b8bf1a1df871fdff607f3cbd6cb9d4c4ea8a17.scope - libcontainer container 1e493d87fcbe640a4003bb1540b8bf1a1df871fdff607f3cbd6cb9d4c4ea8a17.
Jan 13 20:10:10.916001 systemd[1]: cri-containerd-1e493d87fcbe640a4003bb1540b8bf1a1df871fdff607f3cbd6cb9d4c4ea8a17.scope: Deactivated successfully.
Jan 13 20:10:10.920317 containerd[1957]: time="2025-01-13T20:10:10.920009416Z" level=info msg="StartContainer for \"1e493d87fcbe640a4003bb1540b8bf1a1df871fdff607f3cbd6cb9d4c4ea8a17\" returns successfully"
Jan 13 20:10:10.962994 containerd[1957]: time="2025-01-13T20:10:10.962881360Z" level=info msg="shim disconnected" id=1e493d87fcbe640a4003bb1540b8bf1a1df871fdff607f3cbd6cb9d4c4ea8a17 namespace=k8s.io
Jan 13 20:10:10.963265 containerd[1957]: time="2025-01-13T20:10:10.962993512Z" level=warning msg="cleaning up after shim disconnected" id=1e493d87fcbe640a4003bb1540b8bf1a1df871fdff607f3cbd6cb9d4c4ea8a17 namespace=k8s.io
Jan 13 20:10:10.963265 containerd[1957]: time="2025-01-13T20:10:10.963037588Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:10:11.429289 kubelet[2435]: E0113 20:10:11.429214    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:11.736383 containerd[1957]: time="2025-01-13T20:10:11.736151824Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:10:11.769655 containerd[1957]: time="2025-01-13T20:10:11.769508584Z" level=info msg="CreateContainer within sandbox \"6d0595a2b0757afb47d12d9ce3892a2fc203b80a48ed0176d14d1fbecd83b5e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dc4f954f830042277da2e3cd90c4f1cfa3c3d425f2c03d348602dfb58824946f\""
Jan 13 20:10:11.770686 containerd[1957]: time="2025-01-13T20:10:11.770557612Z" level=info msg="StartContainer for \"dc4f954f830042277da2e3cd90c4f1cfa3c3d425f2c03d348602dfb58824946f\""
Jan 13 20:10:11.824770 systemd[1]: Started cri-containerd-dc4f954f830042277da2e3cd90c4f1cfa3c3d425f2c03d348602dfb58824946f.scope - libcontainer container dc4f954f830042277da2e3cd90c4f1cfa3c3d425f2c03d348602dfb58824946f.
Jan 13 20:10:11.880979 containerd[1957]: time="2025-01-13T20:10:11.880906493Z" level=info msg="StartContainer for \"dc4f954f830042277da2e3cd90c4f1cfa3c3d425f2c03d348602dfb58824946f\" returns successfully"
Jan 13 20:10:12.406036 systemd[1]: run-containerd-runc-k8s.io-dc4f954f830042277da2e3cd90c4f1cfa3c3d425f2c03d348602dfb58824946f-runc.waRTae.mount: Deactivated successfully.
Jan 13 20:10:12.430082 kubelet[2435]: E0113 20:10:12.430023    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:12.634445 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:10:12.769996 kubelet[2435]: I0113 20:10:12.769895    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-trt85" podStartSLOduration=6.769873373 podStartE2EDuration="6.769873373s" podCreationTimestamp="2025-01-13 20:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:10:12.769194485 +0000 UTC m=+90.080596037" watchObservedRunningTime="2025-01-13 20:10:12.769873373 +0000 UTC m=+90.081274901"
Jan 13 20:10:13.430702 kubelet[2435]: E0113 20:10:13.430631    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:14.134280 containerd[1957]: time="2025-01-13T20:10:14.134190616Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:10:14.137124 containerd[1957]: time="2025-01-13T20:10:14.137031532Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137782"
Jan 13 20:10:14.139534 containerd[1957]: time="2025-01-13T20:10:14.139461952Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:10:14.143013 containerd[1957]: time="2025-01-13T20:10:14.142375660Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 7.475974117s"
Jan 13 20:10:14.143013 containerd[1957]: time="2025-01-13T20:10:14.142443976Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 13 20:10:14.146810 containerd[1957]: time="2025-01-13T20:10:14.146539408Z" level=info msg="CreateContainer within sandbox \"9bec09c24c39f401f553314f2287fa0bee4e01e19036e82935abc7916f7b5f2b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 20:10:14.184170 containerd[1957]: time="2025-01-13T20:10:14.184034536Z" level=info msg="CreateContainer within sandbox \"9bec09c24c39f401f553314f2287fa0bee4e01e19036e82935abc7916f7b5f2b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c003de94e03c0332f3f1928d44ff800a93e794367e6a499d035daca83b01045e\""
Jan 13 20:10:14.188265 containerd[1957]: time="2025-01-13T20:10:14.186586456Z" level=info msg="StartContainer for \"c003de94e03c0332f3f1928d44ff800a93e794367e6a499d035daca83b01045e\""
Jan 13 20:10:14.254682 systemd[1]: Started cri-containerd-c003de94e03c0332f3f1928d44ff800a93e794367e6a499d035daca83b01045e.scope - libcontainer container c003de94e03c0332f3f1928d44ff800a93e794367e6a499d035daca83b01045e.
Jan 13 20:10:14.336056 containerd[1957]: time="2025-01-13T20:10:14.335977937Z" level=info msg="StartContainer for \"c003de94e03c0332f3f1928d44ff800a93e794367e6a499d035daca83b01045e\" returns successfully"
Jan 13 20:10:14.431417 kubelet[2435]: E0113 20:10:14.431213    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:15.432236 kubelet[2435]: E0113 20:10:15.432151    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:16.432711 kubelet[2435]: E0113 20:10:16.432644    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:16.893549 (udev-worker)[5121]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:10:16.897171 systemd-networkd[1852]: lxc_health: Link UP
Jan 13 20:10:16.905881 (udev-worker)[5122]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:10:16.907277 systemd-networkd[1852]: lxc_health: Gained carrier
Jan 13 20:10:17.432868 kubelet[2435]: E0113 20:10:17.432790    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:17.858934 kubelet[2435]: I0113 20:10:17.858683    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gm6zw" podStartSLOduration=4.379934845 podStartE2EDuration="11.858660706s" podCreationTimestamp="2025-01-13 20:10:06 +0000 UTC" firstStartedPulling="2025-01-13 20:10:06.665286791 +0000 UTC m=+83.976688307" lastFinishedPulling="2025-01-13 20:10:14.144012652 +0000 UTC m=+91.455414168" observedRunningTime="2025-01-13 20:10:14.832378903 +0000 UTC m=+92.143780467" watchObservedRunningTime="2025-01-13 20:10:17.858660706 +0000 UTC m=+95.170062234"
Jan 13 20:10:18.433036 kubelet[2435]: E0113 20:10:18.432979    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:18.530613 systemd-networkd[1852]: lxc_health: Gained IPv6LL
Jan 13 20:10:19.434640 kubelet[2435]: E0113 20:10:19.434582    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:20.436461 kubelet[2435]: E0113 20:10:20.436238    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:21.014767 ntpd[1921]: Listen normally on 16 lxc_health [fe80::2422:c3ff:fe5a:5d7d%15]:123
Jan 13 20:10:21.015703 ntpd[1921]: 13 Jan 20:10:21 ntpd[1921]: Listen normally on 16 lxc_health [fe80::2422:c3ff:fe5a:5d7d%15]:123
Jan 13 20:10:21.436783 kubelet[2435]: E0113 20:10:21.436624    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:22.438746 kubelet[2435]: E0113 20:10:22.438680    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:22.757535 systemd[1]: run-containerd-runc-k8s.io-dc4f954f830042277da2e3cd90c4f1cfa3c3d425f2c03d348602dfb58824946f-runc.ASd5CO.mount: Deactivated successfully.
Jan 13 20:10:23.439793 kubelet[2435]: E0113 20:10:23.439731    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:24.274404 kubelet[2435]: E0113 20:10:24.274080    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:24.441335 kubelet[2435]: E0113 20:10:24.441273    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:25.441740 kubelet[2435]: E0113 20:10:25.441667    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"