Dec 13 13:59:53.725323 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 13:59:53.725345 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 13:59:53.725352 kernel: efi: EFI v2.70 by EDK II
Dec 13 13:59:53.725358 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Dec 13 13:59:53.725363 kernel: random: crng init done
Dec 13 13:59:53.725369 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:59:53.725375 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Dec 13 13:59:53.725384 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:59:53.725390 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:59:53.725395 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:59:53.725401 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:59:53.725406 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:59:53.725411 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:59:53.725417 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:59:53.725428 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:59:53.725435 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:59:53.725441 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:59:53.725447 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 13:59:53.725453 kernel: NUMA: Failed to initialise from firmware
Dec 13 13:59:53.725458 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:59:53.725464 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Dec 13 13:59:53.725472 kernel: Zone ranges:
Dec 13 13:59:53.725478 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:59:53.725485 kernel: DMA32 empty
Dec 13 13:59:53.725490 kernel: Normal empty
Dec 13 13:59:53.725496 kernel: Movable zone start for each node
Dec 13 13:59:53.725504 kernel: Early memory node ranges
Dec 13 13:59:53.725510 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Dec 13 13:59:53.725516 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Dec 13 13:59:53.725522 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Dec 13 13:59:53.725527 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Dec 13 13:59:53.725537 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Dec 13 13:59:53.725546 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Dec 13 13:59:53.725552 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Dec 13 13:59:53.725557 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 13:59:53.725565 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 13:59:53.725579 kernel: psci: probing for conduit method from ACPI.
Dec 13 13:59:53.725598 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 13:59:53.725604 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 13:59:53.725610 kernel: psci: Trusted OS migration not required
Dec 13 13:59:53.725619 kernel: psci: SMC Calling Convention v1.1
Dec 13 13:59:53.725625 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 13:59:53.725633 kernel: ACPI: SRAT not present
Dec 13 13:59:53.725639 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 13:59:53.725645 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 13:59:53.725652 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 13:59:53.725658 kernel: Detected PIPT I-cache on CPU0
Dec 13 13:59:53.725664 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 13:59:53.725671 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 13:59:53.725677 kernel: CPU features: detected: Spectre-v4
Dec 13 13:59:53.725683 kernel: CPU features: detected: Spectre-BHB
Dec 13 13:59:53.725690 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 13:59:53.725696 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 13:59:53.725702 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 13:59:53.725708 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 13:59:53.725714 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 13:59:53.725720 kernel: Policy zone: DMA
Dec 13 13:59:53.725728 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 13:59:53.725734 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:59:53.725740 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:59:53.725747 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:59:53.725753 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:59:53.725760 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
Dec 13 13:59:53.725767 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:59:53.725773 kernel: trace event string verifier disabled
Dec 13 13:59:53.725779 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:59:53.725785 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:59:53.725792 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:59:53.725798 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:59:53.725804 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:59:53.725811 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:59:53.725817 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:59:53.725823 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 13:59:53.725830 kernel: GICv3: 256 SPIs implemented
Dec 13 13:59:53.725836 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 13:59:53.725842 kernel: GICv3: Distributor has no Range Selector support
Dec 13 13:59:53.725848 kernel: Root IRQ handler: gic_handle_irq
Dec 13 13:59:53.725854 kernel: GICv3: 16 PPIs implemented
Dec 13 13:59:53.725861 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 13:59:53.725867 kernel: ACPI: SRAT not present
Dec 13 13:59:53.725873 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 13:59:53.725879 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 13:59:53.725885 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 13:59:53.725892 kernel: GICv3: using LPI property table @0x00000000400d0000
Dec 13 13:59:53.725898 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Dec 13 13:59:53.725905 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:59:53.725911 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 13:59:53.725918 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 13:59:53.725924 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 13:59:53.725930 kernel: arm-pv: using stolen time PV
Dec 13 13:59:53.725936 kernel: Console: colour dummy device 80x25
Dec 13 13:59:53.725943 kernel: ACPI: Core revision 20210730
Dec 13 13:59:53.725949 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 13:59:53.725956 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:59:53.725962 kernel: LSM: Security Framework initializing
Dec 13 13:59:53.725969 kernel: SELinux: Initializing.
Dec 13 13:59:53.725975 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:59:53.725982 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:59:53.725988 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:59:53.725995 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 13:59:53.726001 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 13:59:53.726007 kernel: Remapping and enabling EFI services.
Dec 13 13:59:53.726013 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:59:53.726020 kernel: Detected PIPT I-cache on CPU1
Dec 13 13:59:53.726027 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 13:59:53.726034 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Dec 13 13:59:53.726040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:59:53.726046 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 13:59:53.726053 kernel: Detected PIPT I-cache on CPU2
Dec 13 13:59:53.726059 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 13:59:53.726066 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Dec 13 13:59:53.726072 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:59:53.726079 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 13:59:53.726085 kernel: Detected PIPT I-cache on CPU3
Dec 13 13:59:53.726092 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 13:59:53.726099 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Dec 13 13:59:53.726105 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 13:59:53.726111 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 13:59:53.726122 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:59:53.726130 kernel: SMP: Total of 4 processors activated.
Dec 13 13:59:53.726137 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 13:59:53.726144 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 13:59:53.726151 kernel: CPU features: detected: Common not Private translations
Dec 13 13:59:53.726157 kernel: CPU features: detected: CRC32 instructions
Dec 13 13:59:53.726164 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 13:59:53.726170 kernel: CPU features: detected: LSE atomic instructions
Dec 13 13:59:53.726178 kernel: CPU features: detected: Privileged Access Never
Dec 13 13:59:53.726185 kernel: CPU features: detected: RAS Extension Support
Dec 13 13:59:53.726192 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 13:59:53.726198 kernel: CPU: All CPU(s) started at EL1
Dec 13 13:59:53.726205 kernel: alternatives: patching kernel code
Dec 13 13:59:53.726213 kernel: devtmpfs: initialized
Dec 13 13:59:53.726219 kernel: KASLR enabled
Dec 13 13:59:53.726226 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:59:53.726233 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:59:53.726239 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:59:53.726246 kernel: SMBIOS 3.0.0 present.
Dec 13 13:59:53.726252 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Dec 13 13:59:53.726259 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:59:53.726266 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 13:59:53.726274 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 13:59:53.726281 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 13:59:53.726288 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:59:53.726294 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 Dec 13 13:59:53.726301 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:59:53.726308 kernel: cpuidle: using governor menu Dec 13 13:59:53.726314 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 13:59:53.726321 kernel: ASID allocator initialised with 32768 entries Dec 13 13:59:53.726328 kernel: ACPI: bus type PCI registered Dec 13 13:59:53.726336 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:59:53.726342 kernel: Serial: AMBA PL011 UART driver Dec 13 13:59:53.726349 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:59:53.726356 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 13:59:53.726362 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:59:53.726369 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 13:59:53.726376 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:59:53.726382 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 13:59:53.726389 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:59:53.726397 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:59:53.726404 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:59:53.726410 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:59:53.726417 
kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 13:59:53.726423 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 13:59:53.726430 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 13:59:53.726437 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:59:53.726443 kernel: ACPI: Interpreter enabled Dec 13 13:59:53.726450 kernel: ACPI: Using GIC for interrupt routing Dec 13 13:59:53.726458 kernel: ACPI: MCFG table detected, 1 entries Dec 13 13:59:53.726465 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 13 13:59:53.726471 kernel: printk: console [ttyAMA0] enabled Dec 13 13:59:53.726478 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 13:59:53.726622 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 13:59:53.726689 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 13:59:53.726748 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 13:59:53.726808 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 13 13:59:53.726865 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 13 13:59:53.726874 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 13 13:59:53.726881 kernel: PCI host bridge to bus 0000:00 Dec 13 13:59:53.726948 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 13 13:59:53.727004 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 13:59:53.727058 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 13 13:59:53.727110 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 13:59:53.727183 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Dec 13 13:59:53.727251 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 13:59:53.727315 kernel: 
pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Dec 13 13:59:53.727376 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Dec 13 13:59:53.727436 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 13:59:53.727495 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 13:59:53.727563 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Dec 13 13:59:53.727664 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Dec 13 13:59:53.727754 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 13 13:59:53.727812 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 13:59:53.727946 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 13 13:59:53.727958 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 13:59:53.727965 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 13:59:53.727972 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 13:59:53.727984 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 13:59:53.727991 kernel: iommu: Default domain type: Translated Dec 13 13:59:53.727997 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 13:59:53.728004 kernel: vgaarb: loaded Dec 13 13:59:53.728011 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 13:59:53.728018 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 13:59:53.728024 kernel: PTP clock support registered Dec 13 13:59:53.728031 kernel: Registered efivars operations Dec 13 13:59:53.728038 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 13:59:53.728045 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:59:53.728052 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:59:53.728059 kernel: pnp: PnP ACPI init Dec 13 13:59:53.728128 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 13 13:59:53.728138 kernel: pnp: PnP ACPI: found 1 devices Dec 13 13:59:53.728145 kernel: NET: Registered PF_INET protocol family Dec 13 13:59:53.728152 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 13:59:53.728159 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 13:59:53.728168 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:59:53.728175 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:59:53.728181 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 13:59:53.728188 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 13:59:53.728195 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:59:53.728202 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:59:53.728208 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:59:53.728215 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:59:53.728222 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 13:59:53.728230 kernel: kvm [1]: HYP mode not available Dec 13 13:59:53.728237 kernel: Initialise system trusted keyrings Dec 13 13:59:53.728243 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 13:59:53.728250 kernel: Key type 
asymmetric registered Dec 13 13:59:53.728256 kernel: Asymmetric key parser 'x509' registered Dec 13 13:59:53.728263 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 13:59:53.728270 kernel: io scheduler mq-deadline registered Dec 13 13:59:53.728276 kernel: io scheduler kyber registered Dec 13 13:59:53.728283 kernel: io scheduler bfq registered Dec 13 13:59:53.728291 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 13:59:53.728297 kernel: ACPI: button: Power Button [PWRB] Dec 13 13:59:53.728305 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 13:59:53.728366 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 13 13:59:53.728375 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:59:53.728382 kernel: thunder_xcv, ver 1.0 Dec 13 13:59:53.728388 kernel: thunder_bgx, ver 1.0 Dec 13 13:59:53.728395 kernel: nicpf, ver 1.0 Dec 13 13:59:53.728402 kernel: nicvf, ver 1.0 Dec 13 13:59:53.728474 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 13:59:53.728530 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:59:53 UTC (1734098393) Dec 13 13:59:53.728546 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:59:53.728553 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:59:53.728559 kernel: Segment Routing with IPv6 Dec 13 13:59:53.728566 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:59:53.728582 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:59:53.728590 kernel: Key type dns_resolver registered Dec 13 13:59:53.728598 kernel: registered taskstats version 1 Dec 13 13:59:53.728605 kernel: Loading compiled-in X.509 certificates Dec 13 13:59:53.728612 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a' Dec 13 13:59:53.728618 kernel: Key type .fscrypt registered Dec 13 13:59:53.728625 kernel: Key type fscrypt-provisioning registered Dec 13 
13:59:53.728632 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 13:59:53.728638 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:59:53.728645 kernel: ima: No architecture policies found Dec 13 13:59:53.728651 kernel: clk: Disabling unused clocks Dec 13 13:59:53.728668 kernel: Freeing unused kernel memory: 36416K Dec 13 13:59:53.728676 kernel: Run /init as init process Dec 13 13:59:53.728682 kernel: with arguments: Dec 13 13:59:53.728689 kernel: /init Dec 13 13:59:53.728695 kernel: with environment: Dec 13 13:59:53.728702 kernel: HOME=/ Dec 13 13:59:53.728708 kernel: TERM=linux Dec 13 13:59:53.728715 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:59:53.728723 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 13:59:53.728733 systemd[1]: Detected virtualization kvm. Dec 13 13:59:53.728741 systemd[1]: Detected architecture arm64. Dec 13 13:59:53.728748 systemd[1]: Running in initrd. Dec 13 13:59:53.728755 systemd[1]: No hostname configured, using default hostname. Dec 13 13:59:53.728761 systemd[1]: Hostname set to . Dec 13 13:59:53.728769 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:59:53.728776 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:59:53.728785 systemd[1]: Started systemd-ask-password-console.path. Dec 13 13:59:53.728792 systemd[1]: Reached target cryptsetup.target. Dec 13 13:59:53.728799 systemd[1]: Reached target paths.target. Dec 13 13:59:53.728806 systemd[1]: Reached target slices.target. Dec 13 13:59:53.728813 systemd[1]: Reached target swap.target. Dec 13 13:59:53.728820 systemd[1]: Reached target timers.target. Dec 13 13:59:53.728827 systemd[1]: Listening on iscsid.socket. 
Dec 13 13:59:53.728835 systemd[1]: Listening on iscsiuio.socket.
Dec 13 13:59:53.728842 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 13:59:53.728850 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 13:59:53.728857 systemd[1]: Listening on systemd-journald.socket.
Dec 13 13:59:53.728864 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 13:59:53.728871 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 13:59:53.728878 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 13:59:53.728885 systemd[1]: Reached target sockets.target.
Dec 13 13:59:53.728892 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 13:59:53.728901 systemd[1]: Finished network-cleanup.service.
Dec 13 13:59:53.728908 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:59:53.728915 systemd[1]: Starting systemd-journald.service...
Dec 13 13:59:53.728922 systemd[1]: Starting systemd-modules-load.service...
Dec 13 13:59:53.728929 systemd[1]: Starting systemd-resolved.service...
Dec 13 13:59:53.728936 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 13:59:53.728943 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 13:59:53.728950 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:59:53.728957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 13:59:53.728965 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 13:59:53.728976 systemd-journald[290]: Journal started
Dec 13 13:59:53.729017 systemd-journald[290]: Runtime Journal (/run/log/journal/7ac8b675721f4a9a859f42b596fbe9b8) is 6.0M, max 48.7M, 42.6M free.
Dec 13 13:59:53.717615 systemd-modules-load[291]: Inserted module 'overlay'
Dec 13 13:59:53.732820 kernel: audit: type=1130 audit(1734098393.728:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.732837 systemd[1]: Started systemd-journald.service.
Dec 13 13:59:53.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.733597 kernel: audit: type=1130 audit(1734098393.732:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.733732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 13:59:53.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.737195 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 13:59:53.740680 kernel: audit: type=1130 audit(1734098393.735:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.742005 systemd-resolved[292]: Positive Trust Anchors:
Dec 13 13:59:53.742018 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:59:53.742048 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 13:59:53.750656 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:59:53.749330 systemd-resolved[292]: Defaulting to hostname 'linux'.
Dec 13 13:59:53.754716 kernel: audit: type=1130 audit(1734098393.750:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.750110 systemd[1]: Started systemd-resolved.service.
Dec 13 13:59:53.756732 kernel: Bridge firewalling registered
Dec 13 13:59:53.754152 systemd[1]: Reached target nss-lookup.target.
Dec 13 13:59:53.756290 systemd-modules-load[291]: Inserted module 'br_netfilter'
Dec 13 13:59:53.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.756851 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 13:59:53.761183 systemd[1]: Starting dracut-cmdline.service...
Dec 13 13:59:53.762615 kernel: audit: type=1130 audit(1734098393.758:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.768602 kernel: SCSI subsystem initialized
Dec 13 13:59:53.770551 dracut-cmdline[308]: dracut-dracut-053
Dec 13 13:59:53.773146 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 13:59:53.780435 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:59:53.780484 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:59:53.780500 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 13:59:53.782737 systemd-modules-load[291]: Inserted module 'dm_multipath'
Dec 13 13:59:53.783530 systemd[1]: Finished systemd-modules-load.service.
Dec 13 13:59:53.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.787598 kernel: audit: type=1130 audit(1734098393.784:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.787897 systemd[1]: Starting systemd-sysctl.service...
Dec 13 13:59:53.795180 systemd[1]: Finished systemd-sysctl.service.
Dec 13 13:59:53.798659 kernel: audit: type=1130 audit(1734098393.795:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.837589 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:59:53.849588 kernel: iscsi: registered transport (tcp)
Dec 13 13:59:53.864590 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:59:53.864605 kernel: QLogic iSCSI HBA Driver
Dec 13 13:59:53.897990 systemd[1]: Finished dracut-cmdline.service.
Dec 13 13:59:53.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.899355 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 13:59:53.902708 kernel: audit: type=1130 audit(1734098393.897:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:53.942599 kernel: raid6: neonx8 gen() 13788 MB/s
Dec 13 13:59:53.959585 kernel: raid6: neonx8 xor() 10822 MB/s
Dec 13 13:59:53.976589 kernel: raid6: neonx4 gen() 13560 MB/s
Dec 13 13:59:53.993594 kernel: raid6: neonx4 xor() 11296 MB/s
Dec 13 13:59:54.010586 kernel: raid6: neonx2 gen() 12950 MB/s
Dec 13 13:59:54.027610 kernel: raid6: neonx2 xor() 10464 MB/s
Dec 13 13:59:54.044595 kernel: raid6: neonx1 gen() 10536 MB/s
Dec 13 13:59:54.061594 kernel: raid6: neonx1 xor() 8759 MB/s
Dec 13 13:59:54.078590 kernel: raid6: int64x8 gen() 6262 MB/s
Dec 13 13:59:54.095601 kernel: raid6: int64x8 xor() 3540 MB/s
Dec 13 13:59:54.112589 kernel: raid6: int64x4 gen() 7214 MB/s
Dec 13 13:59:54.129593 kernel: raid6: int64x4 xor() 3847 MB/s
Dec 13 13:59:54.146589 kernel: raid6: int64x2 gen() 6131 MB/s
Dec 13 13:59:54.163588 kernel: raid6: int64x2 xor() 3313 MB/s
Dec 13 13:59:54.180604 kernel: raid6: int64x1 gen() 5018 MB/s
Dec 13 13:59:54.197678 kernel: raid6: int64x1 xor() 2642 MB/s
Dec 13 13:59:54.197701 kernel: raid6: using algorithm neonx8 gen() 13788 MB/s
Dec 13 13:59:54.197710 kernel: raid6: .... xor() 10822 MB/s, rmw enabled
Dec 13 13:59:54.198763 kernel: raid6: using neon recovery algorithm
Dec 13 13:59:54.209590 kernel: xor: measuring software checksum speed
Dec 13 13:59:54.209605 kernel: 8regs : 17202 MB/sec
Dec 13 13:59:54.210762 kernel: 32regs : 17944 MB/sec
Dec 13 13:59:54.210777 kernel: arm64_neon : 27860 MB/sec
Dec 13 13:59:54.210785 kernel: xor: using function: arm64_neon (27860 MB/sec)
Dec 13 13:59:54.265595 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 13:59:54.276539 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 13:59:54.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:54.279000 audit: BPF prog-id=7 op=LOAD
Dec 13 13:59:54.279000 audit: BPF prog-id=8 op=LOAD
Dec 13 13:59:54.280596 kernel: audit: type=1130 audit(1734098394.276:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:54.280792 systemd[1]: Starting systemd-udevd.service...
Dec 13 13:59:54.293560 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Dec 13 13:59:54.296956 systemd[1]: Started systemd-udevd.service.
Dec 13 13:59:54.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:54.298440 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 13:59:54.311676 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Dec 13 13:59:54.340181 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 13:59:54.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:54.341639 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 13:59:54.374955 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 13:59:54.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:54.411720 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:59:54.419046 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:59:54.419060 kernel: GPT:9289727 != 19775487
Dec 13 13:59:54.419068 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:59:54.419077 kernel: GPT:9289727 != 19775487 Dec 13 13:59:54.419084 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:59:54.419099 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:59:54.431288 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 13:59:54.433309 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (539) Dec 13 13:59:54.432134 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 13:59:54.440127 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 13:59:54.443290 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 13:59:54.448625 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 13:59:54.450069 systemd[1]: Starting disk-uuid.service... Dec 13 13:59:54.455943 disk-uuid[561]: Primary Header is updated. Dec 13 13:59:54.455943 disk-uuid[561]: Secondary Entries is updated. Dec 13 13:59:54.455943 disk-uuid[561]: Secondary Header is updated. Dec 13 13:59:54.459594 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:59:55.474128 disk-uuid[562]: The operation has completed successfully. Dec 13 13:59:55.475079 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:59:55.506754 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:59:55.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.506844 systemd[1]: Finished disk-uuid.service. Dec 13 13:59:55.508220 systemd[1]: Starting verity-setup.service... 
Dec 13 13:59:55.527873 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 13:59:55.559860 systemd[1]: Found device dev-mapper-usr.device. Dec 13 13:59:55.561436 systemd[1]: Mounting sysusr-usr.mount... Dec 13 13:59:55.562298 systemd[1]: Finished verity-setup.service. Dec 13 13:59:55.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.609609 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 13:59:55.609620 systemd[1]: Mounted sysusr-usr.mount. Dec 13 13:59:55.610245 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 13:59:55.610964 systemd[1]: Starting ignition-setup.service... Dec 13 13:59:55.612630 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 13:59:55.621686 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:59:55.621728 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:59:55.621738 kernel: BTRFS info (device vda6): has skinny extents Dec 13 13:59:55.630014 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:59:55.635957 systemd[1]: Finished ignition-setup.service. Dec 13 13:59:55.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.637320 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 13:59:55.691122 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 13:59:55.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 13:59:55.692000 audit: BPF prog-id=9 op=LOAD Dec 13 13:59:55.693269 systemd[1]: Starting systemd-networkd.service... Dec 13 13:59:55.721735 systemd-networkd[737]: lo: Link UP Dec 13 13:59:55.721746 systemd-networkd[737]: lo: Gained carrier Dec 13 13:59:55.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.722110 systemd-networkd[737]: Enumeration completed Dec 13 13:59:55.722275 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:59:55.722667 systemd[1]: Started systemd-networkd.service. Dec 13 13:59:55.723777 systemd[1]: Reached target network.target. Dec 13 13:59:55.723953 systemd-networkd[737]: eth0: Link UP Dec 13 13:59:55.723957 systemd-networkd[737]: eth0: Gained carrier Dec 13 13:59:55.725913 systemd[1]: Starting iscsiuio.service... Dec 13 13:59:55.736946 systemd[1]: Started iscsiuio.service. Dec 13 13:59:55.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.738408 systemd[1]: Starting iscsid.service... Dec 13 13:59:55.740712 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:59:55.742129 iscsid[742]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 13:59:55.742129 iscsid[742]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 13:59:55.742129 iscsid[742]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 13:59:55.742129 iscsid[742]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 13:59:55.742129 iscsid[742]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 13:59:55.742129 iscsid[742]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 13:59:55.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.745443 ignition[651]: Ignition 2.14.0 Dec 13 13:59:55.745636 systemd[1]: Started iscsid.service. Dec 13 13:59:55.745450 ignition[651]: Stage: fetch-offline Dec 13 13:59:55.749295 systemd[1]: Starting dracut-initqueue.service... Dec 13 13:59:55.745488 ignition[651]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:59:55.745496 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:59:55.745661 ignition[651]: parsed url from cmdline: "" Dec 13 13:59:55.745664 ignition[651]: no config URL provided Dec 13 13:59:55.745669 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:59:55.745677 ignition[651]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:59:55.745695 ignition[651]: op(1): [started] loading QEMU firmware config module Dec 13 13:59:55.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.759961 systemd[1]: Finished dracut-initqueue.service. 
Dec 13 13:59:55.745699 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 13:59:55.760965 systemd[1]: Reached target remote-fs-pre.target. Dec 13 13:59:55.753880 ignition[651]: op(1): [finished] loading QEMU firmware config module Dec 13 13:59:55.762324 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 13:59:55.763625 systemd[1]: Reached target remote-fs.target. Dec 13 13:59:55.765608 systemd[1]: Starting dracut-pre-mount.service... Dec 13 13:59:55.769757 ignition[651]: parsing config with SHA512: d9af7dadbd895e669e163050d9065d0b6731e190777c5060150e67e6e99630ac8396e301b2a45af34898d50d8f9202bc3a6d646f07b947a89c6b300982ec3f35 Dec 13 13:59:55.773051 systemd[1]: Finished dracut-pre-mount.service. Dec 13 13:59:55.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.784945 unknown[651]: fetched base config from "system" Dec 13 13:59:55.784957 unknown[651]: fetched user config from "qemu" Dec 13 13:59:55.785480 ignition[651]: fetch-offline: fetch-offline passed Dec 13 13:59:55.787004 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 13:59:55.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.785561 ignition[651]: Ignition finished successfully Dec 13 13:59:55.788152 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 13:59:55.788838 systemd[1]: Starting ignition-kargs.service... 
Dec 13 13:59:55.801690 ignition[758]: Ignition 2.14.0 Dec 13 13:59:55.801699 ignition[758]: Stage: kargs Dec 13 13:59:55.801785 ignition[758]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:59:55.801794 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:59:55.802499 ignition[758]: kargs: kargs passed Dec 13 13:59:55.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.804344 systemd[1]: Finished ignition-kargs.service. Dec 13 13:59:55.802550 ignition[758]: Ignition finished successfully Dec 13 13:59:55.806387 systemd[1]: Starting ignition-disks.service... Dec 13 13:59:55.813013 ignition[764]: Ignition 2.14.0 Dec 13 13:59:55.813023 ignition[764]: Stage: disks Dec 13 13:59:55.813111 ignition[764]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:59:55.814876 systemd[1]: Finished ignition-disks.service. Dec 13 13:59:55.813121 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:59:55.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.816089 systemd[1]: Reached target initrd-root-device.target. Dec 13 13:59:55.813873 ignition[764]: disks: disks passed Dec 13 13:59:55.817131 systemd[1]: Reached target local-fs-pre.target. Dec 13 13:59:55.813913 ignition[764]: Ignition finished successfully Dec 13 13:59:55.818329 systemd[1]: Reached target local-fs.target. Dec 13 13:59:55.819370 systemd[1]: Reached target sysinit.target. Dec 13 13:59:55.820384 systemd[1]: Reached target basic.target. Dec 13 13:59:55.822138 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 13:59:55.832748 systemd-fsck[773]: ROOT: clean, 621/553520 files, 56020/553472 blocks Dec 13 13:59:55.836216 systemd[1]: Finished systemd-fsck-root.service. Dec 13 13:59:55.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.837922 systemd[1]: Mounting sysroot.mount... Dec 13 13:59:55.844606 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 13:59:55.844713 systemd[1]: Mounted sysroot.mount. Dec 13 13:59:55.845282 systemd[1]: Reached target initrd-root-fs.target. Dec 13 13:59:55.847093 systemd[1]: Mounting sysroot-usr.mount... Dec 13 13:59:55.847846 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 13:59:55.847881 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:59:55.847903 systemd[1]: Reached target ignition-diskful.target. Dec 13 13:59:55.849666 systemd[1]: Mounted sysroot-usr.mount. Dec 13 13:59:55.851930 systemd[1]: Starting initrd-setup-root.service... Dec 13 13:59:55.856107 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:59:55.860296 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:59:55.864379 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:59:55.868110 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:59:55.894659 systemd[1]: Finished initrd-setup-root.service. Dec 13 13:59:55.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 13:59:55.896010 systemd[1]: Starting ignition-mount.service... Dec 13 13:59:55.897324 systemd[1]: Starting sysroot-boot.service... Dec 13 13:59:55.901025 bash[824]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 13:59:55.908410 ignition[825]: INFO : Ignition 2.14.0 Dec 13 13:59:55.908410 ignition[825]: INFO : Stage: mount Dec 13 13:59:55.910633 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:59:55.910633 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:59:55.910633 ignition[825]: INFO : mount: mount passed Dec 13 13:59:55.910633 ignition[825]: INFO : Ignition finished successfully Dec 13 13:59:55.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:55.912336 systemd[1]: Finished ignition-mount.service. Dec 13 13:59:55.917432 systemd[1]: Finished sysroot-boot.service. Dec 13 13:59:55.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:56.570277 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 13:59:56.581595 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835) Dec 13 13:59:56.583949 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:59:56.583966 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:59:56.583975 kernel: BTRFS info (device vda6): has skinny extents Dec 13 13:59:56.587669 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 13:59:56.589043 systemd[1]: Starting ignition-files.service... 
Dec 13 13:59:56.602849 ignition[855]: INFO : Ignition 2.14.0 Dec 13 13:59:56.602849 ignition[855]: INFO : Stage: files Dec 13 13:59:56.604200 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:59:56.604200 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:59:56.604200 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:59:56.606790 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:59:56.606790 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:59:56.610091 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:59:56.611192 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:59:56.611192 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:59:56.610852 unknown[855]: wrote ssh authorized keys file for user: core Dec 13 13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 
13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 13:59:56.614130 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 13:59:56.876853 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Dec 13 13:59:57.145896 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 13:59:57.145896 ignition[855]: INFO : files: op(8): [started] processing unit "containerd.service" Dec 13 13:59:57.148765 ignition[855]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 13:59:57.148765 ignition[855]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 13:59:57.148765 ignition[855]: INFO : files: op(8): [finished] processing unit "containerd.service" Dec 13 13:59:57.148765 ignition[855]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Dec 13 13:59:57.148765 ignition[855]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:59:57.148765 ignition[855]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:59:57.148765 ignition[855]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Dec 13 13:59:57.148765 ignition[855]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 13:59:57.148765 ignition[855]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:59:57.195333 ignition[855]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:59:57.196471 ignition[855]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 13:59:57.196471 ignition[855]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:59:57.196471 ignition[855]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:59:57.196471 ignition[855]: INFO : files: files passed Dec 13 13:59:57.196471 ignition[855]: INFO : Ignition finished successfully Dec 13 13:59:57.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.197597 systemd[1]: Finished ignition-files.service. Dec 13 13:59:57.199421 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 13:59:57.200277 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Dec 13 13:59:57.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.206807 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 13:59:57.201230 systemd[1]: Starting ignition-quench.service... Dec 13 13:59:57.208472 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:59:57.204758 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:59:57.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.204844 systemd[1]: Finished ignition-quench.service. Dec 13 13:59:57.208674 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 13:59:57.210394 systemd[1]: Reached target ignition-complete.target. Dec 13 13:59:57.212213 systemd[1]: Starting initrd-parse-etc.service... Dec 13 13:59:57.226813 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:59:57.226927 systemd[1]: Finished initrd-parse-etc.service. Dec 13 13:59:57.228471 systemd[1]: Reached target initrd-fs.target. Dec 13 13:59:57.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 13:59:57.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.229602 systemd[1]: Reached target initrd.target. Dec 13 13:59:57.230750 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 13:59:57.231553 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 13:59:57.240684 systemd-networkd[737]: eth0: Gained IPv6LL Dec 13 13:59:57.243043 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 13:59:57.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.244643 systemd[1]: Starting initrd-cleanup.service... Dec 13 13:59:57.253485 systemd[1]: Stopped target nss-lookup.target. Dec 13 13:59:57.254237 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 13:59:57.255470 systemd[1]: Stopped target timers.target. Dec 13 13:59:57.256483 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:59:57.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.256622 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 13:59:57.257564 systemd[1]: Stopped target initrd.target. Dec 13 13:59:57.258768 systemd[1]: Stopped target basic.target. Dec 13 13:59:57.259744 systemd[1]: Stopped target ignition-complete.target. Dec 13 13:59:57.260861 systemd[1]: Stopped target ignition-diskful.target. Dec 13 13:59:57.261829 systemd[1]: Stopped target initrd-root-device.target. Dec 13 13:59:57.262921 systemd[1]: Stopped target remote-fs.target. Dec 13 13:59:57.263935 systemd[1]: Stopped target remote-fs-pre.target. 
Dec 13 13:59:57.265189 systemd[1]: Stopped target sysinit.target. Dec 13 13:59:57.266176 systemd[1]: Stopped target local-fs.target. Dec 13 13:59:57.267305 systemd[1]: Stopped target local-fs-pre.target. Dec 13 13:59:57.268256 systemd[1]: Stopped target swap.target. Dec 13 13:59:57.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.269201 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:59:57.269326 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 13:59:57.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.270314 systemd[1]: Stopped target cryptsetup.target. Dec 13 13:59:57.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 13:59:57.271243 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:59:57.271346 systemd[1]: Stopped dracut-initqueue.service. Dec 13 13:59:57.272464 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:59:57.272568 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 13:59:57.273549 systemd[1]: Stopped target paths.target. Dec 13 13:59:57.274645 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:59:57.276627 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 13:59:57.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Dec 13 13:59:57.277342 systemd[1]: Stopped target slices.target.
Dec 13 13:59:57.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.278510 systemd[1]: Stopped target sockets.target.
Dec 13 13:59:57.279530 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:59:57.285752 iscsid[742]: iscsid shutting down.
Dec 13 13:59:57.279661 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 13:59:57.280885 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:59:57.280977 systemd[1]: Stopped ignition-files.service.
Dec 13 13:59:57.282958 systemd[1]: Stopping ignition-mount.service...
Dec 13 13:59:57.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.283663 systemd[1]: Stopping iscsid.service...
Dec 13 13:59:57.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.291558 ignition[896]: INFO : Ignition 2.14.0
Dec 13 13:59:57.291558 ignition[896]: INFO : Stage: umount
Dec 13 13:59:57.291558 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:59:57.291558 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:59:57.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.287649 systemd[1]: Stopping sysroot-boot.service...
Dec 13 13:59:57.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.296146 ignition[896]: INFO : umount: umount passed
Dec 13 13:59:57.296146 ignition[896]: INFO : Ignition finished successfully
Dec 13 13:59:57.288730 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:59:57.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.288909 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 13:59:57.289941 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:59:57.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.290081 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 13:59:57.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.293349 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 13:59:57.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.293468 systemd[1]: Stopped iscsid.service.
Dec 13 13:59:57.294672 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:59:57.294755 systemd[1]: Stopped ignition-mount.service.
Dec 13 13:59:57.297361 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:59:57.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.297454 systemd[1]: Finished initrd-cleanup.service.
Dec 13 13:59:57.298448 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:59:57.298488 systemd[1]: Closed iscsid.socket.
Dec 13 13:59:57.300733 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:59:57.300783 systemd[1]: Stopped ignition-disks.service.
Dec 13 13:59:57.302056 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:59:57.302097 systemd[1]: Stopped ignition-kargs.service.
Dec 13 13:59:57.303165 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:59:57.303201 systemd[1]: Stopped ignition-setup.service.
Dec 13 13:59:57.304271 systemd[1]: Stopping iscsiuio.service...
Dec 13 13:59:57.306358 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:59:57.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.307030 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 13:59:57.307121 systemd[1]: Stopped iscsiuio.service.
Dec 13 13:59:57.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.307979 systemd[1]: Stopped target network.target.
Dec 13 13:59:57.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.309125 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:59:57.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.309157 systemd[1]: Closed iscsiuio.socket.
Dec 13 13:59:57.310429 systemd[1]: Stopping systemd-networkd.service...
Dec 13 13:59:57.311515 systemd[1]: Stopping systemd-resolved.service...
Dec 13 13:59:57.315652 systemd-networkd[737]: eth0: DHCPv6 lease lost
Dec 13 13:59:57.328000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 13:59:57.316814 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:59:57.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.316960 systemd[1]: Stopped systemd-networkd.service.
Dec 13 13:59:57.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.317945 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:59:57.317973 systemd[1]: Closed systemd-networkd.socket.
Dec 13 13:59:57.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.321295 systemd[1]: Stopping network-cleanup.service...
Dec 13 13:59:57.334000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 13:59:57.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.322078 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:59:57.322136 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 13:59:57.322943 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:59:57.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.322983 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 13:59:57.324372 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:59:57.324411 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 13:59:57.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.325259 systemd[1]: Stopping systemd-udevd.service...
Dec 13 13:59:57.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.328878 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 13:59:57.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.329332 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:59:57.329430 systemd[1]: Stopped systemd-resolved.service.
Dec 13 13:59:57.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.330771 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:59:57.330852 systemd[1]: Stopped sysroot-boot.service.
Dec 13 13:59:57.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.332341 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:59:57.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.332388 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 13:59:57.333856 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:59:57.333941 systemd[1]: Stopped network-cleanup.service.
Dec 13 13:59:57.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:57.336679 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:59:57.336796 systemd[1]: Stopped systemd-udevd.service.
Dec 13 13:59:57.337918 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:59:57.337953 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 13:59:57.338852 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:59:57.338880 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 13:59:57.339843 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:59:57.339885 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 13:59:57.341039 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:59:57.341077 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 13:59:57.342146 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:59:57.342185 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 13:59:57.343925 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 13:59:57.344512 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:59:57.344658 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 13:59:57.361000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 13:59:57.361000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 13:59:57.361000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 13:59:57.361000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 13:59:57.361000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 13:59:57.346328 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:59:57.346375 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 13:59:57.347600 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:59:57.347637 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 13:59:57.349377 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 13:59:57.349798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:59:57.349879 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 13:59:57.351387 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 13:59:57.353004 systemd[1]: Starting initrd-switch-root.service...
Dec 13 13:59:57.358449 systemd[1]: Switching root.
Dec 13 13:59:57.371996 systemd-journald[290]: Journal stopped
Dec 13 13:59:59.398816 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:59:59.398879 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 13:59:59.398895 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 13:59:59.398908 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 13:59:59.398918 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:59:59.398927 kernel: SELinux: policy capability open_perms=1
Dec 13 13:59:59.398937 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:59:59.398947 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:59:59.398956 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:59:59.398966 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:59:59.398976 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:59:59.398988 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:59:59.398999 systemd[1]: Successfully loaded SELinux policy in 37.704ms.
Dec 13 13:59:59.399015 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.325ms.
Dec 13 13:59:59.399027 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 13:59:59.399038 systemd[1]: Detected virtualization kvm.
Dec 13 13:59:59.399047 systemd[1]: Detected architecture arm64.
Dec 13 13:59:59.399058 systemd[1]: Detected first boot.
Dec 13 13:59:59.399068 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:59:59.399079 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 13:59:59.399089 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:59:59.399100 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 13:59:59.399111 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 13:59:59.399123 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:59:59.399134 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:59:59.399144 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 13:59:59.399154 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 13:59:59.399166 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 13:59:59.399249 systemd[1]: Created slice system-getty.slice.
Dec 13 13:59:59.399263 systemd[1]: Created slice system-modprobe.slice.
Dec 13 13:59:59.399273 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 13:59:59.399284 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 13:59:59.399299 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 13:59:59.399310 systemd[1]: Created slice user.slice.
Dec 13 13:59:59.399324 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 13:59:59.399334 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 13:59:59.399344 systemd[1]: Set up automount boot.automount.
Dec 13 13:59:59.399354 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 13:59:59.399364 systemd[1]: Reached target integritysetup.target.
Dec 13 13:59:59.399375 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 13:59:59.399386 systemd[1]: Reached target remote-fs.target.
Dec 13 13:59:59.399397 systemd[1]: Reached target slices.target.
Dec 13 13:59:59.399408 systemd[1]: Reached target swap.target.
Dec 13 13:59:59.399418 systemd[1]: Reached target torcx.target.
Dec 13 13:59:59.399428 systemd[1]: Reached target veritysetup.target.
Dec 13 13:59:59.399439 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 13:59:59.399449 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 13:59:59.399460 kernel: kauditd_printk_skb: 81 callbacks suppressed
Dec 13 13:59:59.399470 kernel: audit: type=1400 audit(1734098399.306:85): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 13:59:59.399481 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 13:59:59.399492 kernel: audit: type=1335 audit(1734098399.306:86): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 13:59:59.399503 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 13:59:59.399522 systemd[1]: Listening on systemd-journald.socket.
Dec 13 13:59:59.399535 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 13:59:59.399545 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 13:59:59.399597 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 13:59:59.399611 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 13:59:59.399622 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 13:59:59.399635 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 13:59:59.399646 systemd[1]: Mounting media.mount...
Dec 13 13:59:59.399656 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 13:59:59.399666 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 13:59:59.399676 systemd[1]: Mounting tmp.mount...
Dec 13 13:59:59.399687 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 13:59:59.399702 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 13:59:59.399712 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 13:59:59.399722 systemd[1]: Starting modprobe@configfs.service...
Dec 13 13:59:59.399733 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 13:59:59.399744 systemd[1]: Starting modprobe@drm.service...
Dec 13 13:59:59.399755 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 13:59:59.399766 systemd[1]: Starting modprobe@fuse.service...
Dec 13 13:59:59.399776 systemd[1]: Starting modprobe@loop.service...
Dec 13 13:59:59.399788 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:59:59.399799 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 13:59:59.399809 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 13:59:59.399819 systemd[1]: Starting systemd-journald.service...
Dec 13 13:59:59.399831 systemd[1]: Starting systemd-modules-load.service...
Dec 13 13:59:59.399842 systemd[1]: Starting systemd-network-generator.service...
Dec 13 13:59:59.399852 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 13:59:59.399862 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 13:59:59.399872 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 13:59:59.399882 kernel: fuse: init (API version 7.34)
Dec 13 13:59:59.399892 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 13:59:59.399902 systemd[1]: Mounted media.mount.
Dec 13 13:59:59.399916 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 13:59:59.399926 kernel: audit: type=1305 audit(1734098399.394:87): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 13:59:59.399938 kernel: audit: type=1300 audit(1734098399.394:87): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdfa77d50 a2=4000 a3=1 items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 13:59:59.399953 systemd-journald[1023]: Journal started
Dec 13 13:59:59.400000 systemd-journald[1023]: Runtime Journal (/run/log/journal/7ac8b675721f4a9a859f42b596fbe9b8) is 6.0M, max 48.7M, 42.6M free.
Dec 13 13:59:59.306000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 13:59:59.394000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 13:59:59.394000 audit[1023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdfa77d50 a2=4000 a3=1 items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 13:59:59.402330 kernel: audit: type=1327 audit(1734098399.394:87): proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 13:59:59.394000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 13:59:59.403709 systemd[1]: Started systemd-journald.service.
Dec 13 13:59:59.406669 kernel: audit: type=1130 audit(1734098399.403:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.405161 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 13:59:59.407764 systemd[1]: Mounted tmp.mount.
Dec 13 13:59:59.409692 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 13:59:59.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.410502 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:59:59.410809 systemd[1]: Finished modprobe@configfs.service.
Dec 13 13:59:59.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.413725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:59:59.416282 kernel: audit: type=1130 audit(1734098399.409:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.416334 kernel: audit: type=1130 audit(1734098399.412:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.416351 kernel: loop: module loaded
Dec 13 13:59:59.416365 kernel: audit: type=1131 audit(1734098399.412:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.414036 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 13:59:59.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.419600 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:59:59.419800 systemd[1]: Finished modprobe@drm.service.
Dec 13 13:59:59.421793 kernel: audit: type=1130 audit(1734098399.418:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.422738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:59:59.422933 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 13:59:59.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.423769 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:59:59.423969 systemd[1]: Finished modprobe@fuse.service.
Dec 13 13:59:59.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.424919 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:59:59.425127 systemd[1]: Finished modprobe@loop.service.
Dec 13 13:59:59.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.426088 systemd[1]: Finished systemd-modules-load.service.
Dec 13 13:59:59.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.427497 systemd[1]: Finished systemd-network-generator.service.
Dec 13 13:59:59.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.428643 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 13:59:59.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.430040 systemd[1]: Reached target network-pre.target.
Dec 13 13:59:59.433555 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 13:59:59.435330 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 13:59:59.435904 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:59:59.437491 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 13:59:59.439155 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 13:59:59.439929 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:59:59.440917 systemd[1]: Starting systemd-random-seed.service...
Dec 13 13:59:59.441698 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 13:59:59.443150 systemd[1]: Starting systemd-sysctl.service...
Dec 13 13:59:59.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.445481 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 13:59:59.446423 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 13:59:59.447322 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 13:59:59.449336 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 13:59:59.453077 systemd-journald[1023]: Time spent on flushing to /var/log/journal/7ac8b675721f4a9a859f42b596fbe9b8 is 20.292ms for 922 entries.
Dec 13 13:59:59.453077 systemd-journald[1023]: System Journal (/var/log/journal/7ac8b675721f4a9a859f42b596fbe9b8) is 8.0M, max 195.6M, 187.6M free.
Dec 13 13:59:59.485523 systemd-journald[1023]: Received client request to flush runtime journal.
Dec 13 13:59:59.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.485923 udevadm[1070]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 13:59:59.459926 systemd[1]: Finished systemd-random-seed.service.
Dec 13 13:59:59.460653 systemd[1]: Reached target first-boot-complete.target.
Dec 13 13:59:59.463654 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 13:59:59.464528 systemd[1]: Finished systemd-sysctl.service.
Dec 13 13:59:59.466192 systemd[1]: Starting systemd-sysusers.service...
Dec 13 13:59:59.482351 systemd[1]: Finished systemd-sysusers.service.
Dec 13 13:59:59.484162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 13:59:59.486794 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 13:59:59.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.500181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 13:59:59.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.809472 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 13:59:59.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.811475 systemd[1]: Starting systemd-udevd.service...
Dec 13 13:59:59.829562 systemd-udevd[1088]: Using default interface naming scheme 'v252'.
Dec 13 13:59:59.844433 systemd[1]: Started systemd-udevd.service.
Dec 13 13:59:59.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.849247 systemd[1]: Starting systemd-networkd.service...
Dec 13 13:59:59.853822 systemd[1]: Starting systemd-userdbd.service...
Dec 13 13:59:59.873335 systemd[1]: Found device dev-ttyAMA0.device.
Dec 13 13:59:59.912672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 13:59:59.913487 systemd[1]: Started systemd-userdbd.service.
Dec 13 13:59:59.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.948990 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 13:59:59.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.950914 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 13:59:59.974913 lvm[1123]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:59:59.979221 systemd-networkd[1103]: lo: Link UP
Dec 13 13:59:59.979233 systemd-networkd[1103]: lo: Gained carrier
Dec 13 13:59:59.979622 systemd-networkd[1103]: Enumeration completed
Dec 13 13:59:59.979732 systemd-networkd[1103]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:59:59.979740 systemd[1]: Started systemd-networkd.service.
Dec 13 13:59:59.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 13:59:59.980931 systemd-networkd[1103]: eth0: Link UP
Dec 13 13:59:59.980942 systemd-networkd[1103]: eth0: Gained carrier
Dec 13 13:59:59.999683 systemd-networkd[1103]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:00:00.012498 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:00:00.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:00:00.013310 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:00:00.015112 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:00:00.018633 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:00:00.042490 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:00:00.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:00:00.043249 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:00:00.043898 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:00:00.043928 systemd[1]: Reached target local-fs.target.
Dec 13 14:00:00.044588 systemd[1]: Reached target machines.target.
Dec 13 14:00:00.046313 systemd[1]: Starting ldconfig.service...
Dec 13 14:00:00.047153 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:00:00.047224 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:00:00.048268 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:00:00.050001 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:00:00.051988 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:00:00.053871 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:00:00.062075 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1128 (bootctl)
Dec 13 14:00:00.063200 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:00:00.065878 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:00:00.067075 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:00:00.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:00:00.072641 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:00:00.072894 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:00:00.124605 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 14:00:00.130652 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:00:00.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:00:00.142997 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:00:00.142997 systemd-fsck[1140]: /dev/vda1: 236 files, 117175/258078 clusters
Dec 13 14:00:00.146399 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:00:00.146662 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:00:00.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.168713 kernel: loop1: detected capacity change from 0 to 194512 Dec 13 14:00:00.172417 (sd-sysext)[1146]: Using extensions 'kubernetes'. Dec 13 14:00:00.172754 (sd-sysext)[1146]: Merged extensions into '/usr'. Dec 13 14:00:00.187190 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.188489 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:00:00.190342 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:00:00.192183 systemd[1]: Starting modprobe@loop.service... Dec 13 14:00:00.192843 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.192963 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:00:00.193698 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:00:00.193854 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:00:00.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.195367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 14:00:00.195508 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:00:00.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.196757 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:00:00.196903 systemd[1]: Finished modprobe@loop.service. Dec 13 14:00:00.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.197984 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:00:00.198081 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.231051 ldconfig[1127]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:00:00.234849 systemd[1]: Finished ldconfig.service. Dec 13 14:00:00.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.390271 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 13 14:00:00.392190 systemd[1]: Mounting boot.mount... Dec 13 14:00:00.393961 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:00:00.400350 systemd[1]: Mounted boot.mount. Dec 13 14:00:00.401135 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:00:00.403023 systemd[1]: Finished systemd-sysext.service. Dec 13 14:00:00.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.404918 systemd[1]: Starting ensure-sysext.service... Dec 13 14:00:00.406922 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:00:00.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.410061 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:00:00.412507 systemd[1]: Reloading. Dec 13 14:00:00.415886 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:00:00.416601 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:00:00.417986 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 14:00:00.451438 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2024-12-13T14:00:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:00:00.451473 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2024-12-13T14:00:00Z" level=info msg="torcx already run" Dec 13 14:00:00.514987 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:00:00.515159 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:00:00.533461 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:00:00.577876 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:00:00.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.582799 systemd[1]: Starting audit-rules.service... Dec 13 14:00:00.585114 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:00:00.587287 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:00:00.590227 systemd[1]: Starting systemd-resolved.service... Dec 13 14:00:00.592540 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:00:00.594327 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:00:00.595869 systemd[1]: Finished clean-ca-certificates.service. 
Dec 13 14:00:00.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.601492 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:00:00.601000 audit[1240]: SYSTEM_BOOT pid=1240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.603640 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.604925 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:00:00.607072 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:00:00.608951 systemd[1]: Starting modprobe@loop.service... Dec 13 14:00:00.609666 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.609842 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:00:00.609982 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:00:00.610960 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:00:00.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:00:00.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.612278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:00:00.612415 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:00:00.613693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:00:00.613822 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:00:00.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.616304 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:00:00.616452 systemd[1]: Finished modprobe@loop.service. Dec 13 14:00:00.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:00:00.619992 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.622585 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:00:00.626007 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:00:00.628375 systemd[1]: Starting modprobe@loop.service... Dec 13 14:00:00.629445 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.629644 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:00:00.629817 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:00:00.631559 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:00:00.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.633092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:00:00.633246 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:00:00.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.634591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:00:00.634753 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 14:00:00.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.636432 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:00:00.636625 systemd[1]: Finished modprobe@loop.service. Dec 13 14:00:00.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.637968 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:00:00.638068 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.639741 systemd[1]: Starting systemd-update-done.service... Dec 13 14:00:00.646323 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.647842 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:00:00.649770 systemd[1]: Starting modprobe@drm.service... Dec 13 14:00:00.652033 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:00:00.654021 systemd[1]: Starting modprobe@loop.service... Dec 13 14:00:00.654850 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 14:00:00.655004 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:00:00.656475 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:00:00.658337 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:00:00.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.659766 systemd[1]: Finished systemd-update-done.service. Dec 13 14:00:00.660815 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:00:00.660962 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:00:00.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.662015 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 14:00:00.662153 systemd[1]: Finished modprobe@drm.service. Dec 13 14:00:00.663166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:00:00.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.663302 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:00:00.664341 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:00:00.664532 systemd[1]: Finished modprobe@loop.service. Dec 13 14:00:00.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:00:00.665811 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:00:00.665897 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.667482 systemd[1]: Finished ensure-sysext.service. Dec 13 14:00:00.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:00:00.678983 systemd-resolved[1235]: Positive Trust Anchors: Dec 13 14:00:00.679039 systemd-resolved[1235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:00:00.679066 systemd-resolved[1235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:00:00.681000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:00:00.681000 audit[1281]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff663a0b0 a2=420 a3=0 items=0 ppid=1230 pid=1281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:00:00.681000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:00:00.682281 augenrules[1281]: No rules Dec 13 14:00:00.684924 systemd[1]: Finished audit-rules.service. Dec 13 14:00:00.693106 systemd-resolved[1235]: Defaulting to hostname 'linux'. Dec 13 14:00:00.694768 systemd[1]: Started systemd-resolved.service. Dec 13 14:00:00.695728 systemd[1]: Reached target network.target. Dec 13 14:00:00.696331 systemd[1]: Reached target nss-lookup.target. Dec 13 14:00:00.700663 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:00:00.701675 systemd-timesyncd[1236]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:00:00.701885 systemd[1]: Reached target sysinit.target. 
Dec 13 14:00:00.702046 systemd-timesyncd[1236]: Initial clock synchronization to Fri 2024-12-13 14:00:00.302701 UTC. Dec 13 14:00:00.702533 systemd[1]: Started motdgen.path. Dec 13 14:00:00.703123 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:00:00.704162 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:00:00.704842 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:00:00.704875 systemd[1]: Reached target paths.target. Dec 13 14:00:00.705526 systemd[1]: Reached target time-set.target. Dec 13 14:00:00.706304 systemd[1]: Started logrotate.timer. Dec 13 14:00:00.706976 systemd[1]: Started mdadm.timer. Dec 13 14:00:00.707624 systemd[1]: Reached target timers.target. Dec 13 14:00:00.708558 systemd[1]: Listening on dbus.socket. Dec 13 14:00:00.710353 systemd[1]: Starting docker.socket... Dec 13 14:00:00.713353 systemd[1]: Listening on sshd.socket. Dec 13 14:00:00.714230 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:00:00.714733 systemd[1]: Listening on docker.socket. Dec 13 14:00:00.715438 systemd[1]: Reached target sockets.target. Dec 13 14:00:00.716192 systemd[1]: Reached target basic.target. Dec 13 14:00:00.716930 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:00:00.716983 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.717007 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:00:00.718162 systemd[1]: Starting containerd.service... Dec 13 14:00:00.720057 systemd[1]: Starting dbus.service... Dec 13 14:00:00.721906 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:00:00.724382 systemd[1]: Starting extend-filesystems.service... 
Dec 13 14:00:00.725186 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:00:00.727451 systemd[1]: Starting motdgen.service... Dec 13 14:00:00.729995 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:00:00.732346 systemd[1]: Starting sshd-keygen.service... Dec 13 14:00:00.738109 systemd[1]: Starting systemd-logind.service... Dec 13 14:00:00.738849 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:00:00.739003 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:00:00.740826 systemd[1]: Starting update-engine.service... Dec 13 14:00:00.742774 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:00:00.746876 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:00:00.748372 jq[1293]: false Dec 13 14:00:00.747132 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:00:00.748774 jq[1302]: true Dec 13 14:00:00.755901 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:00:00.756219 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Dec 13 14:00:00.772621 jq[1313]: true Dec 13 14:00:00.780425 extend-filesystems[1294]: Found loop1 Dec 13 14:00:00.780425 extend-filesystems[1294]: Found vda Dec 13 14:00:00.782905 extend-filesystems[1294]: Found vda1 Dec 13 14:00:00.782905 extend-filesystems[1294]: Found vda2 Dec 13 14:00:00.782905 extend-filesystems[1294]: Found vda3 Dec 13 14:00:00.782905 extend-filesystems[1294]: Found usr Dec 13 14:00:00.782905 extend-filesystems[1294]: Found vda4 Dec 13 14:00:00.782905 extend-filesystems[1294]: Found vda6 Dec 13 14:00:00.782905 extend-filesystems[1294]: Found vda7 Dec 13 14:00:00.782905 extend-filesystems[1294]: Found vda9 Dec 13 14:00:00.782905 extend-filesystems[1294]: Checking size of /dev/vda9 Dec 13 14:00:00.783011 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:00:00.783256 systemd[1]: Finished motdgen.service. Dec 13 14:00:00.827601 extend-filesystems[1294]: Resized partition /dev/vda9 Dec 13 14:00:00.840203 dbus-daemon[1292]: [system] SELinux support is enabled Dec 13 14:00:00.841234 extend-filesystems[1342]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:00:00.840397 systemd[1]: Started dbus.service. Dec 13 14:00:00.844472 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:00:00.844516 systemd[1]: Reached target system-config.target. Dec 13 14:00:00.845228 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:00:00.845243 systemd[1]: Reached target user-config.target. 
Dec 13 14:00:00.849352 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 14:00:00.886782 update_engine[1300]: I1213 14:00:00.885355 1300 main.cc:92] Flatcar Update Engine starting
Dec 13 14:00:00.888866 systemd-logind[1299]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 14:00:00.890869 systemd-logind[1299]: New seat seat0.
Dec 13 14:00:00.892600 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 14:00:00.893449 systemd[1]: Started update-engine.service.
Dec 13 14:00:00.893592 update_engine[1300]: I1213 14:00:00.893460 1300 update_check_scheduler.cc:74] Next update check in 6m33s
Dec 13 14:00:00.898481 systemd[1]: Started locksmithd.service.
Dec 13 14:00:00.901111 systemd[1]: Started systemd-logind.service.
Dec 13 14:00:00.903917 extend-filesystems[1342]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:00:00.903917 extend-filesystems[1342]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:00:00.903917 extend-filesystems[1342]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 14:00:00.908688 extend-filesystems[1294]: Resized filesystem in /dev/vda9
Dec 13 14:00:00.909351 bash[1346]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:00:00.904582 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:00:00.904818 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:00:00.906391 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:00:00.911981 env[1308]: time="2024-12-13T14:00:00.911938200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:00:00.929043 env[1308]: time="2024-12-13T14:00:00.928996560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:00:00.929319 env[1308]: time="2024-12-13T14:00:00.929299880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:00:00.930541 env[1308]: time="2024-12-13T14:00:00.930500440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:00:00.930648 env[1308]: time="2024-12-13T14:00:00.930631040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:00:00.930960 env[1308]: time="2024-12-13T14:00:00.930936520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:00:00.931037 env[1308]: time="2024-12-13T14:00:00.931022040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:00:00.931112 env[1308]: time="2024-12-13T14:00:00.931096000Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:00:00.931164 env[1308]: time="2024-12-13T14:00:00.931151200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:00:00.931296 env[1308]: time="2024-12-13T14:00:00.931278320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:00:00.931724 env[1308]: time="2024-12-13T14:00:00.931702080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:00:00.931986 env[1308]: time="2024-12-13T14:00:00.931965960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:00:00.932210 env[1308]: time="2024-12-13T14:00:00.932192280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:00:00.932432 env[1308]: time="2024-12-13T14:00:00.932412880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:00:00.932525 env[1308]: time="2024-12-13T14:00:00.932501360Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:00:00.935989 env[1308]: time="2024-12-13T14:00:00.935964680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:00:00.936177 env[1308]: time="2024-12-13T14:00:00.936159440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:00:00.936269 env[1308]: time="2024-12-13T14:00:00.936254160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:00:00.936360 env[1308]: time="2024-12-13T14:00:00.936344160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:00:00.936609 env[1308]: time="2024-12-13T14:00:00.936590760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:00:00.936686 env[1308]: time="2024-12-13T14:00:00.936671880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:00:00.936743 env[1308]: time="2024-12-13T14:00:00.936729800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:00:00.937120 env[1308]: time="2024-12-13T14:00:00.937093160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:00:00.937210 env[1308]: time="2024-12-13T14:00:00.937193200Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:00:00.937275 env[1308]: time="2024-12-13T14:00:00.937261120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:00:00.937335 env[1308]: time="2024-12-13T14:00:00.937321280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:00:00.937410 env[1308]: time="2024-12-13T14:00:00.937395440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:00:00.937604 env[1308]: time="2024-12-13T14:00:00.937568800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:00:00.937758 env[1308]: time="2024-12-13T14:00:00.937739680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:00:00.938189 env[1308]: time="2024-12-13T14:00:00.938164440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:00:00.938239 env[1308]: time="2024-12-13T14:00:00.938204440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938239 env[1308]: time="2024-12-13T14:00:00.938220200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:00:00.938375 env[1308]: time="2024-12-13T14:00:00.938362640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938407 env[1308]: time="2024-12-13T14:00:00.938379080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938407 env[1308]: time="2024-12-13T14:00:00.938392720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938407 env[1308]: time="2024-12-13T14:00:00.938403960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938475 env[1308]: time="2024-12-13T14:00:00.938416320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938475 env[1308]: time="2024-12-13T14:00:00.938428080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938475 env[1308]: time="2024-12-13T14:00:00.938439440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938475 env[1308]: time="2024-12-13T14:00:00.938450720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938475 env[1308]: time="2024-12-13T14:00:00.938462760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:00:00.938762 env[1308]: time="2024-12-13T14:00:00.938724160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938799 env[1308]: time="2024-12-13T14:00:00.938763160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938799 env[1308]: time="2024-12-13T14:00:00.938777280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.938799 env[1308]: time="2024-12-13T14:00:00.938788640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:00:00.938922 env[1308]: time="2024-12-13T14:00:00.938902360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:00:00.938948 env[1308]: time="2024-12-13T14:00:00.938922200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:00:00.938948 env[1308]: time="2024-12-13T14:00:00.938939800Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:00:00.938986 env[1308]: time="2024-12-13T14:00:00.938972160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:00:00.939285 env[1308]: time="2024-12-13T14:00:00.939230760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:00:00.940006 env[1308]: time="2024-12-13T14:00:00.939296160Z" level=info msg="Connect containerd service"
Dec 13 14:00:00.940006 env[1308]: time="2024-12-13T14:00:00.939385200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:00:00.940192 env[1308]: time="2024-12-13T14:00:00.940152880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:00:00.940505 env[1308]: time="2024-12-13T14:00:00.940437960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:00:00.940505 env[1308]: time="2024-12-13T14:00:00.940478400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:00:00.940639 systemd[1]: Started containerd.service.
Dec 13 14:00:00.940834 env[1308]: time="2024-12-13T14:00:00.940804920Z" level=info msg="Start subscribing containerd event"
Dec 13 14:00:00.941023 env[1308]: time="2024-12-13T14:00:00.941000680Z" level=info msg="Start recovering state"
Dec 13 14:00:00.941154 env[1308]: time="2024-12-13T14:00:00.941139000Z" level=info msg="Start event monitor"
Dec 13 14:00:00.941344 env[1308]: time="2024-12-13T14:00:00.941228480Z" level=info msg="Start snapshots syncer"
Dec 13 14:00:00.942767 env[1308]: time="2024-12-13T14:00:00.942741960Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:00:00.942858 env[1308]: time="2024-12-13T14:00:00.942843600Z" level=info msg="Start streaming server"
Dec 13 14:00:00.943022 env[1308]: time="2024-12-13T14:00:00.941541440Z" level=info msg="containerd successfully booted in 0.031813s"
Dec 13 14:00:00.953284 locksmithd[1347]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:00:01.848764 systemd-networkd[1103]: eth0: Gained IPv6LL
Dec 13 14:00:01.850503 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:00:01.851534 systemd[1]: Reached target network-online.target.
Dec 13 14:00:01.853827 systemd[1]: Starting kubelet.service...
Dec 13 14:00:01.947993 sshd_keygen[1318]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:00:01.965026 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:00:01.967243 systemd[1]: Starting issuegen.service...
Dec 13 14:00:01.972014 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:00:01.972239 systemd[1]: Finished issuegen.service.
Dec 13 14:00:01.974347 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:00:01.980212 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:00:01.982406 systemd[1]: Started getty@tty1.service.
Dec 13 14:00:01.984272 systemd[1]: Started serial-getty@ttyAMA0.service.
Dec 13 14:00:01.985284 systemd[1]: Reached target getty.target.
Dec 13 14:00:02.387905 systemd[1]: Started kubelet.service.
Dec 13 14:00:02.388955 systemd[1]: Reached target multi-user.target.
Dec 13 14:00:02.390866 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:00:02.397021 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:00:02.397236 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:00:02.398081 systemd[1]: Startup finished in 4.444s (kernel) + 4.968s (userspace) = 9.412s.
Dec 13 14:00:02.871865 kubelet[1386]: E1213 14:00:02.871754 1386 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:00:02.873970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:00:02.874111 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:00:06.289105 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:00:06.290264 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:34956.service.
Dec 13 14:00:06.334154 sshd[1397]: Accepted publickey for core from 10.0.0.1 port 34956 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:00:06.335989 sshd[1397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:00:06.344329 systemd-logind[1299]: New session 1 of user core.
Dec 13 14:00:06.345131 systemd[1]: Created slice user-500.slice.
Dec 13 14:00:06.346074 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:00:06.354342 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:00:06.355442 systemd[1]: Starting user@500.service...
Dec 13 14:00:06.358821 (systemd)[1402]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:00:06.414367 systemd[1402]: Queued start job for default target default.target.
Dec 13 14:00:06.414555 systemd[1402]: Reached target paths.target.
Dec 13 14:00:06.414588 systemd[1402]: Reached target sockets.target.
Dec 13 14:00:06.414601 systemd[1402]: Reached target timers.target.
Dec 13 14:00:06.414626 systemd[1402]: Reached target basic.target.
Dec 13 14:00:06.414722 systemd[1]: Started user@500.service.
Dec 13 14:00:06.415434 systemd[1]: Started session-1.scope.
Dec 13 14:00:06.415630 systemd[1402]: Reached target default.target.
Dec 13 14:00:06.415676 systemd[1402]: Startup finished in 51ms.
Dec 13 14:00:06.463146 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:34970.service.
Dec 13 14:00:06.503993 sshd[1411]: Accepted publickey for core from 10.0.0.1 port 34970 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:00:06.505165 sshd[1411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:00:06.508602 systemd-logind[1299]: New session 2 of user core.
Dec 13 14:00:06.509377 systemd[1]: Started session-2.scope.
Dec 13 14:00:06.561610 sshd[1411]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:06.563805 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:34982.service.
Dec 13 14:00:06.564651 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:34970.service: Deactivated successfully.
Dec 13 14:00:06.565307 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:00:06.566115 systemd-logind[1299]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:00:06.566830 systemd-logind[1299]: Removed session 2.
Dec 13 14:00:06.600142 sshd[1416]: Accepted publickey for core from 10.0.0.1 port 34982 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:00:06.601495 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:00:06.605414 systemd[1]: Started session-3.scope.
Dec 13 14:00:06.605755 systemd-logind[1299]: New session 3 of user core.
Dec 13 14:00:06.655351 sshd[1416]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:06.658047 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:34992.service.
Dec 13 14:00:06.658958 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:34982.service: Deactivated successfully.
Dec 13 14:00:06.659647 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:00:06.660200 systemd-logind[1299]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:00:06.660934 systemd-logind[1299]: Removed session 3.
Dec 13 14:00:06.694457 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 34992 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:00:06.695617 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:00:06.698701 systemd-logind[1299]: New session 4 of user core.
Dec 13 14:00:06.699472 systemd[1]: Started session-4.scope.
Dec 13 14:00:06.751095 sshd[1423]: pam_unix(sshd:session): session closed for user core
Dec 13 14:00:06.753773 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:35008.service.
Dec 13 14:00:06.754639 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:34992.service: Deactivated successfully.
Dec 13 14:00:06.755614 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:00:06.755896 systemd-logind[1299]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:00:06.756603 systemd-logind[1299]: Removed session 4.
Dec 13 14:00:06.790050 sshd[1430]: Accepted publickey for core from 10.0.0.1 port 35008 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:00:06.791468 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:00:06.796175 systemd-logind[1299]: New session 5 of user core.
Dec 13 14:00:06.796263 systemd[1]: Started session-5.scope.
Dec 13 14:00:06.855854 sudo[1436]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:00:06.856071 sudo[1436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:00:06.867801 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:00:06.874315 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 14:00:06.874533 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:00:07.329707 systemd[1]: Stopped kubelet.service.
Dec 13 14:00:07.332795 systemd[1]: Starting kubelet.service...
Dec 13 14:00:07.348940 systemd[1]: Reloading.
Dec 13 14:00:07.403694 /usr/lib/systemd/system-generators/torcx-generator[1507]: time="2024-12-13T14:00:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:00:07.403996 /usr/lib/systemd/system-generators/torcx-generator[1507]: time="2024-12-13T14:00:07Z" level=info msg="torcx already run"
Dec 13 14:00:07.465748 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:00:07.465766 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:00:07.482288 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:00:07.536380 systemd[1]: Started kubelet.service.
Dec 13 14:00:07.539134 systemd[1]: Stopping kubelet.service...
Dec 13 14:00:07.539738 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:00:07.539959 systemd[1]: Stopped kubelet.service.
Dec 13 14:00:07.541968 systemd[1]: Starting kubelet.service...
Dec 13 14:00:07.619203 systemd[1]: Started kubelet.service.
Dec 13 14:00:07.655558 kubelet[1564]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:00:07.655558 kubelet[1564]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:00:07.655558 kubelet[1564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:00:07.656446 kubelet[1564]: I1213 14:00:07.656390 1564 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:00:09.055226 kubelet[1564]: I1213 14:00:09.055184 1564 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:00:09.055591 kubelet[1564]: I1213 14:00:09.055562 1564 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:00:09.056039 kubelet[1564]: I1213 14:00:09.056012 1564 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:00:09.085580 kubelet[1564]: I1213 14:00:09.085545 1564 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:00:09.093069 kubelet[1564]: I1213 14:00:09.093039 1564 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:00:09.094295 kubelet[1564]: I1213 14:00:09.094263 1564 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:00:09.094483 kubelet[1564]: I1213 14:00:09.094459 1564 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:00:09.094483 kubelet[1564]: I1213 14:00:09.094484 1564 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:00:09.094607 kubelet[1564]: I1213 14:00:09.094493 1564 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:00:09.096151 kubelet[1564]: I1213 14:00:09.096123 1564 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:00:09.101275 kubelet[1564]: I1213 14:00:09.101243 1564 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:00:09.101275 kubelet[1564]: I1213 14:00:09.101275 1564 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:00:09.101391 kubelet[1564]: I1213 14:00:09.101296 1564 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:00:09.101391 kubelet[1564]: I1213 14:00:09.101307 1564 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:00:09.101742 kubelet[1564]: E1213 14:00:09.101709 1564 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:09.101867 kubelet[1564]: E1213 14:00:09.101848 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:09.102168 kubelet[1564]: I1213 14:00:09.102101 1564 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:00:09.102699 kubelet[1564]: I1213 14:00:09.102673 1564 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:00:09.103340 kubelet[1564]: W1213 14:00:09.103307 1564 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:00:09.104176 kubelet[1564]: I1213 14:00:09.104154 1564 server.go:1256] "Started kubelet"
Dec 13 14:00:09.104251 kubelet[1564]: I1213 14:00:09.104232 1564 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:00:09.105076 kubelet[1564]: I1213 14:00:09.104644 1564 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:00:09.105156 kubelet[1564]: I1213 14:00:09.104987 1564 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:00:09.105312 kubelet[1564]: I1213 14:00:09.105285 1564 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:00:09.107169 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:00:09.111688 kubelet[1564]: I1213 14:00:09.111658 1564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:00:09.113536 kubelet[1564]: W1213 14:00:09.113503 1564 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:00:09.113536 kubelet[1564]: E1213 14:00:09.113534 1564 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:00:09.115339 kubelet[1564]: I1213 14:00:09.115309 1564 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:00:09.115840 kubelet[1564]: E1213 14:00:09.115821 1564 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.43.1810c150c038b3a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.43,UID:10.0.0.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.43,},FirstTimestamp:2024-12-13 14:00:09.104126884 +0000 UTC m=+1.481514693,LastTimestamp:2024-12-13 14:00:09.104126884 +0000 UTC m=+1.481514693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.43,}"
Dec 13 14:00:09.116538 kubelet[1564]: I1213 14:00:09.116324 1564 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:00:09.116616 kubelet[1564]: I1213 14:00:09.116425 1564 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:00:09.116702 kubelet[1564]: W1213 14:00:09.116679 1564 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.43" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:00:09.116738 kubelet[1564]: E1213 14:00:09.116717 1564 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.43" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:00:09.117134 kubelet[1564]: E1213 14:00:09.117117 1564 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:00:09.117451 kubelet[1564]: I1213 14:00:09.117438 1564 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:00:09.118136 kubelet[1564]: I1213 14:00:09.118108 1564 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:00:09.119451 kubelet[1564]: I1213 14:00:09.119410 1564 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:00:09.127906 kubelet[1564]: E1213 14:00:09.127869 1564 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.43.1810c150c0feaf52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.43,UID:10.0.0.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.43,},FirstTimestamp:2024-12-13 14:00:09.117101906 +0000 UTC m=+1.494489754,LastTimestamp:2024-12-13 14:00:09.117101906 +0000 UTC m=+1.494489754,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.43,}"
Dec 13 14:00:09.128106 kubelet[1564]: E1213 14:00:09.127946 1564 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.43\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 14:00:09.128106 kubelet[1564]: W1213 14:00:09.128049 1564 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:00:09.128106 kubelet[1564]: E1213 14:00:09.128069 1564 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:00:09.135228 kubelet[1564]: I1213 14:00:09.135204 1564 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:00:09.135228 kubelet[1564]: I1213 14:00:09.135226 1564 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:00:09.135333 kubelet[1564]: I1213 14:00:09.135246 1564 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:00:09.213545 kubelet[1564]: I1213 14:00:09.210730 1564 policy_none.go:49] "None policy: Start"
Dec 13 14:00:09.213545 kubelet[1564]: I1213 14:00:09.211320 1564 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:00:09.213545 kubelet[1564]: I1213 14:00:09.211364 1564 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:00:09.215568 kubelet[1564]: I1213 14:00:09.215520 1564 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:00:09.215766 kubelet[1564]: I1213 14:00:09.215745 1564 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:00:09.217250 kubelet[1564]: I1213 14:00:09.217221 1564 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.43"
Dec 13 14:00:09.217441 kubelet[1564]: E1213 14:00:09.217427 1564 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.43\" not found"
Dec 13 14:00:09.221212 kubelet[1564]: I1213 14:00:09.221181 1564 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.43"
Dec 13 14:00:09.228073 kubelet[1564]: E1213 14:00:09.228049 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found"
Dec 13 14:00:09.281450 kubelet[1564]: I1213 14:00:09.281398 1564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:00:09.283059 kubelet[1564]: I1213 14:00:09.283024 1564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:00:09.283059 kubelet[1564]: I1213 14:00:09.283064 1564 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:00:09.283157 kubelet[1564]: I1213 14:00:09.283082 1564 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:00:09.283157 kubelet[1564]: E1213 14:00:09.283142 1564 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 14:00:09.329103 kubelet[1564]: E1213 14:00:09.329000 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found"
Dec 13 14:00:09.429504 kubelet[1564]: E1213 14:00:09.429463 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found"
Dec 13 14:00:09.529938 kubelet[1564]: E1213 14:00:09.529907 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found"
Dec 13 14:00:09.630462 kubelet[1564]: E1213 14:00:09.630367 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found"
Dec 13 14:00:09.731015 kubelet[1564]: E1213 14:00:09.730960 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found"
Dec 13 14:00:09.831464 kubelet[1564]: E1213 14:00:09.831431 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found"
Dec 13 14:00:09.931954 kubelet[1564]: E1213 14:00:09.931864 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found"
Dec 13 14:00:10.032390 kubelet[1564]: E1213 14:00:10.032352 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.43\" not found"
Dec 13 14:00:10.059578 kubelet[1564]: I1213 14:00:10.059543 1564 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:00:10.059826 kubelet[1564]: W1213 14:00:10.059704 1564 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:00:10.102794 kubelet[1564]: E1213 14:00:10.102757 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:10.134232 kubelet[1564]: I1213 14:00:10.133928 1564 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:00:10.134308 env[1308]: time="2024-12-13T14:00:10.134187968Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:00:10.136553 kubelet[1564]: I1213 14:00:10.136524 1564 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:00:10.140282 sudo[1436]: pam_unix(sudo:session): session closed for user root Dec 13 14:00:10.142850 sshd[1430]: pam_unix(sshd:session): session closed for user core Dec 13 14:00:10.145136 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:35008.service: Deactivated successfully. Dec 13 14:00:10.146046 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:00:10.146201 systemd-logind[1299]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:00:10.147083 systemd-logind[1299]: Removed session 5. 
Dec 13 14:00:11.102950 kubelet[1564]: E1213 14:00:11.102872 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:11.102950 kubelet[1564]: I1213 14:00:11.102944 1564 apiserver.go:52] "Watching apiserver" Dec 13 14:00:11.106207 kubelet[1564]: I1213 14:00:11.106187 1564 topology_manager.go:215] "Topology Admit Handler" podUID="c89c7975-b510-4a63-9c28-0517ba07bce2" podNamespace="kube-system" podName="cilium-rqfws" Dec 13 14:00:11.106295 kubelet[1564]: I1213 14:00:11.106283 1564 topology_manager.go:215] "Topology Admit Handler" podUID="8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6" podNamespace="kube-system" podName="kube-proxy-5vfps" Dec 13 14:00:11.116997 kubelet[1564]: I1213 14:00:11.116956 1564 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:00:11.125719 kubelet[1564]: I1213 14:00:11.125687 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-bpf-maps\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125787 kubelet[1564]: I1213 14:00:11.125726 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-hostproc\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125787 kubelet[1564]: I1213 14:00:11.125749 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c89c7975-b510-4a63-9c28-0517ba07bce2-clustermesh-secrets\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125787 
kubelet[1564]: I1213 14:00:11.125767 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp2sq\" (UniqueName: \"kubernetes.io/projected/c89c7975-b510-4a63-9c28-0517ba07bce2-kube-api-access-zp2sq\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125787 kubelet[1564]: I1213 14:00:11.125786 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-cgroup\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125882 kubelet[1564]: I1213 14:00:11.125803 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c89c7975-b510-4a63-9c28-0517ba07bce2-hubble-tls\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125882 kubelet[1564]: I1213 14:00:11.125822 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgxk5\" (UniqueName: \"kubernetes.io/projected/8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6-kube-api-access-rgxk5\") pod \"kube-proxy-5vfps\" (UID: \"8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6\") " pod="kube-system/kube-proxy-5vfps" Dec 13 14:00:11.125882 kubelet[1564]: I1213 14:00:11.125839 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-run\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125882 kubelet[1564]: I1213 14:00:11.125857 1564 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-etc-cni-netd\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125882 kubelet[1564]: I1213 14:00:11.125875 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-lib-modules\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125982 kubelet[1564]: I1213 14:00:11.125893 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-host-proc-sys-kernel\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125982 kubelet[1564]: I1213 14:00:11.125911 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6-lib-modules\") pod \"kube-proxy-5vfps\" (UID: \"8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6\") " pod="kube-system/kube-proxy-5vfps" Dec 13 14:00:11.125982 kubelet[1564]: I1213 14:00:11.125928 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cni-path\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125982 kubelet[1564]: I1213 14:00:11.125945 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-xtables-lock\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125982 kubelet[1564]: I1213 14:00:11.125962 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-config-path\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.125982 kubelet[1564]: I1213 14:00:11.125979 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-host-proc-sys-net\") pod \"cilium-rqfws\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") " pod="kube-system/cilium-rqfws" Dec 13 14:00:11.126095 kubelet[1564]: I1213 14:00:11.125997 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6-kube-proxy\") pod \"kube-proxy-5vfps\" (UID: \"8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6\") " pod="kube-system/kube-proxy-5vfps" Dec 13 14:00:11.126095 kubelet[1564]: I1213 14:00:11.126016 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6-xtables-lock\") pod \"kube-proxy-5vfps\" (UID: \"8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6\") " pod="kube-system/kube-proxy-5vfps" Dec 13 14:00:11.414255 kubelet[1564]: E1213 14:00:11.414150 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:11.414669 kubelet[1564]: E1213 14:00:11.414640 
1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:11.416086 env[1308]: time="2024-12-13T14:00:11.415982372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5vfps,Uid:8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6,Namespace:kube-system,Attempt:0,}" Dec 13 14:00:11.416398 env[1308]: time="2024-12-13T14:00:11.415979254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqfws,Uid:c89c7975-b510-4a63-9c28-0517ba07bce2,Namespace:kube-system,Attempt:0,}" Dec 13 14:00:11.959872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186598159.mount: Deactivated successfully. Dec 13 14:00:11.963021 env[1308]: time="2024-12-13T14:00:11.962976413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:11.964545 env[1308]: time="2024-12-13T14:00:11.964498698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:11.965850 env[1308]: time="2024-12-13T14:00:11.965815042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:11.968199 env[1308]: time="2024-12-13T14:00:11.968163625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:11.969852 env[1308]: time="2024-12-13T14:00:11.969819495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:00:11.970577 env[1308]: time="2024-12-13T14:00:11.970529221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:11.972755 env[1308]: time="2024-12-13T14:00:11.972730285Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:11.974232 env[1308]: time="2024-12-13T14:00:11.974197266Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:12.001497 env[1308]: time="2024-12-13T14:00:12.001430886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:00:12.001645 env[1308]: time="2024-12-13T14:00:12.001473946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:00:12.001645 env[1308]: time="2024-12-13T14:00:12.001484148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:00:12.002184 env[1308]: time="2024-12-13T14:00:12.002139577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:00:12.002277 env[1308]: time="2024-12-13T14:00:12.002173741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:00:12.002277 env[1308]: time="2024-12-13T14:00:12.002183982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:00:12.002367 env[1308]: time="2024-12-13T14:00:12.002309801Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd pid=1626 runtime=io.containerd.runc.v2 Dec 13 14:00:12.002427 env[1308]: time="2024-12-13T14:00:12.002385126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a970e6c892584b744dda052faa4614cf5c51d80dccff09c1538ccaf0bc9a3c01 pid=1627 runtime=io.containerd.runc.v2 Dec 13 14:00:12.058402 env[1308]: time="2024-12-13T14:00:12.058353193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5vfps,Uid:8f213ac2-af13-4ac9-b05a-b28d3c3ac0d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a970e6c892584b744dda052faa4614cf5c51d80dccff09c1538ccaf0bc9a3c01\"" Dec 13 14:00:12.059068 env[1308]: time="2024-12-13T14:00:12.059036934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqfws,Uid:c89c7975-b510-4a63-9c28-0517ba07bce2,Namespace:kube-system,Attempt:0,} returns sandbox id \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\"" Dec 13 14:00:12.059853 kubelet[1564]: E1213 14:00:12.059680 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:12.059853 kubelet[1564]: E1213 14:00:12.059713 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:12.061063 env[1308]: 
time="2024-12-13T14:00:12.061032958Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:00:12.103970 kubelet[1564]: E1213 14:00:12.103937 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:13.104232 kubelet[1564]: E1213 14:00:13.104198 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:14.104748 kubelet[1564]: E1213 14:00:14.104720 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:15.105875 kubelet[1564]: E1213 14:00:15.105832 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:16.106566 kubelet[1564]: E1213 14:00:16.106506 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:17.107439 kubelet[1564]: E1213 14:00:17.107402 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:17.121038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270453109.mount: Deactivated successfully. 
Dec 13 14:00:18.107800 kubelet[1564]: E1213 14:00:18.107731 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:19.108593 kubelet[1564]: E1213 14:00:19.108540 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:19.374490 env[1308]: time="2024-12-13T14:00:19.373565793Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:19.377257 env[1308]: time="2024-12-13T14:00:19.377225503Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:19.380420 env[1308]: time="2024-12-13T14:00:19.380384440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:19.381123 env[1308]: time="2024-12-13T14:00:19.381089408Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:00:19.382214 env[1308]: time="2024-12-13T14:00:19.382182658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:00:19.383515 env[1308]: time="2024-12-13T14:00:19.383470268Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:00:19.395374 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3592567026.mount: Deactivated successfully. Dec 13 14:00:19.397395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522926898.mount: Deactivated successfully. Dec 13 14:00:19.405253 env[1308]: time="2024-12-13T14:00:19.405208882Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\"" Dec 13 14:00:19.408010 env[1308]: time="2024-12-13T14:00:19.407975555Z" level=info msg="StartContainer for \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\"" Dec 13 14:00:19.475083 env[1308]: time="2024-12-13T14:00:19.474939203Z" level=info msg="StartContainer for \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\" returns successfully" Dec 13 14:00:19.671008 env[1308]: time="2024-12-13T14:00:19.670893417Z" level=info msg="shim disconnected" id=774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529 Dec 13 14:00:19.671008 env[1308]: time="2024-12-13T14:00:19.670939648Z" level=warning msg="cleaning up after shim disconnected" id=774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529 namespace=k8s.io Dec 13 14:00:19.671008 env[1308]: time="2024-12-13T14:00:19.670950280Z" level=info msg="cleaning up dead shim" Dec 13 14:00:19.680422 env[1308]: time="2024-12-13T14:00:19.680378472Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1746 runtime=io.containerd.runc.v2\n" Dec 13 14:00:20.108985 kubelet[1564]: E1213 14:00:20.108841 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:20.303813 kubelet[1564]: E1213 14:00:20.303779 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:20.306108 env[1308]: time="2024-12-13T14:00:20.306060644Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:00:20.348707 env[1308]: time="2024-12-13T14:00:20.348648457Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\"" Dec 13 14:00:20.349627 env[1308]: time="2024-12-13T14:00:20.349590008Z" level=info msg="StartContainer for \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\"" Dec 13 14:00:20.394135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529-rootfs.mount: Deactivated successfully. Dec 13 14:00:20.412146 env[1308]: time="2024-12-13T14:00:20.412093246Z" level=info msg="StartContainer for \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\" returns successfully" Dec 13 14:00:20.419829 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:00:20.420071 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:00:20.420250 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:00:20.421846 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:00:20.425075 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:00:20.431092 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:00:20.447998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9-rootfs.mount: Deactivated successfully. 
Dec 13 14:00:20.462805 env[1308]: time="2024-12-13T14:00:20.462754931Z" level=info msg="shim disconnected" id=9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9 Dec 13 14:00:20.462989 env[1308]: time="2024-12-13T14:00:20.462969680Z" level=warning msg="cleaning up after shim disconnected" id=9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9 namespace=k8s.io Dec 13 14:00:20.463047 env[1308]: time="2024-12-13T14:00:20.463033946Z" level=info msg="cleaning up dead shim" Dec 13 14:00:20.470408 env[1308]: time="2024-12-13T14:00:20.470369341Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1812 runtime=io.containerd.runc.v2\n" Dec 13 14:00:20.671861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3840313256.mount: Deactivated successfully. Dec 13 14:00:21.089192 env[1308]: time="2024-12-13T14:00:21.088990481Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:21.091298 env[1308]: time="2024-12-13T14:00:21.091255637Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:21.093127 env[1308]: time="2024-12-13T14:00:21.093097977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:21.094890 env[1308]: time="2024-12-13T14:00:21.094850907Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:21.096292 env[1308]: 
time="2024-12-13T14:00:21.095403712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:00:21.098005 env[1308]: time="2024-12-13T14:00:21.097385769Z" level=info msg="CreateContainer within sandbox \"a970e6c892584b744dda052faa4614cf5c51d80dccff09c1538ccaf0bc9a3c01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:00:21.109371 kubelet[1564]: E1213 14:00:21.109335 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:21.109673 env[1308]: time="2024-12-13T14:00:21.109337940Z" level=info msg="CreateContainer within sandbox \"a970e6c892584b744dda052faa4614cf5c51d80dccff09c1538ccaf0bc9a3c01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7303ccbe70788e6e5f093983ae3e299d56027db2451297eaaa67d026fb4a576a\"" Dec 13 14:00:21.110395 env[1308]: time="2024-12-13T14:00:21.110172052Z" level=info msg="StartContainer for \"7303ccbe70788e6e5f093983ae3e299d56027db2451297eaaa67d026fb4a576a\"" Dec 13 14:00:21.170275 env[1308]: time="2024-12-13T14:00:21.170222864Z" level=info msg="StartContainer for \"7303ccbe70788e6e5f093983ae3e299d56027db2451297eaaa67d026fb4a576a\" returns successfully" Dec 13 14:00:21.307698 kubelet[1564]: E1213 14:00:21.307457 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:21.310511 env[1308]: time="2024-12-13T14:00:21.310460322Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:00:21.311603 kubelet[1564]: E1213 14:00:21.311562 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:21.325515 env[1308]: time="2024-12-13T14:00:21.325459581Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\"" Dec 13 14:00:21.326065 env[1308]: time="2024-12-13T14:00:21.326020001Z" level=info msg="StartContainer for \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\"" Dec 13 14:00:21.390787 env[1308]: time="2024-12-13T14:00:21.390306505Z" level=info msg="StartContainer for \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\" returns successfully" Dec 13 14:00:21.418329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8-rootfs.mount: Deactivated successfully. Dec 13 14:00:21.524442 env[1308]: time="2024-12-13T14:00:21.524287827Z" level=info msg="shim disconnected" id=2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8 Dec 13 14:00:21.524442 env[1308]: time="2024-12-13T14:00:21.524330679Z" level=warning msg="cleaning up after shim disconnected" id=2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8 namespace=k8s.io Dec 13 14:00:21.524442 env[1308]: time="2024-12-13T14:00:21.524341760Z" level=info msg="cleaning up dead shim" Dec 13 14:00:21.532318 env[1308]: time="2024-12-13T14:00:21.532097981Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1982 runtime=io.containerd.runc.v2\n" Dec 13 14:00:22.110399 kubelet[1564]: E1213 14:00:22.110357 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:22.314122 kubelet[1564]: E1213 14:00:22.314095 1564 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:22.314658 kubelet[1564]: E1213 14:00:22.314632 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:22.316408 env[1308]: time="2024-12-13T14:00:22.316370131Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:00:22.328839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498552679.mount: Deactivated successfully. Dec 13 14:00:22.329751 kubelet[1564]: I1213 14:00:22.329178 1564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5vfps" podStartSLOduration=4.293627476 podStartE2EDuration="13.329142487s" podCreationTimestamp="2024-12-13 14:00:09 +0000 UTC" firstStartedPulling="2024-12-13 14:00:12.060505167 +0000 UTC m=+4.437892936" lastFinishedPulling="2024-12-13 14:00:21.096020178 +0000 UTC m=+13.473407947" observedRunningTime="2024-12-13 14:00:21.33929236 +0000 UTC m=+13.716680169" watchObservedRunningTime="2024-12-13 14:00:22.329142487 +0000 UTC m=+14.706530296" Dec 13 14:00:22.333673 env[1308]: time="2024-12-13T14:00:22.333632050Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\"" Dec 13 14:00:22.334191 env[1308]: time="2024-12-13T14:00:22.334158096Z" level=info msg="StartContainer for \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\"" Dec 13 14:00:22.378611 env[1308]: time="2024-12-13T14:00:22.378229976Z" level=info msg="StartContainer for 
\"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\" returns successfully" Dec 13 14:00:22.396317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf-rootfs.mount: Deactivated successfully. Dec 13 14:00:22.402293 env[1308]: time="2024-12-13T14:00:22.402253895Z" level=info msg="shim disconnected" id=dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf Dec 13 14:00:22.402445 env[1308]: time="2024-12-13T14:00:22.402298560Z" level=warning msg="cleaning up after shim disconnected" id=dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf namespace=k8s.io Dec 13 14:00:22.402445 env[1308]: time="2024-12-13T14:00:22.402309686Z" level=info msg="cleaning up dead shim" Dec 13 14:00:22.408324 env[1308]: time="2024-12-13T14:00:22.408290293Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:00:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2085 runtime=io.containerd.runc.v2\n" Dec 13 14:00:23.111381 kubelet[1564]: E1213 14:00:23.111339 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:23.317112 kubelet[1564]: E1213 14:00:23.317088 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:23.320171 env[1308]: time="2024-12-13T14:00:23.320122955Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:00:23.330083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117720047.mount: Deactivated successfully. 
Dec 13 14:00:23.332825 env[1308]: time="2024-12-13T14:00:23.332783176Z" level=info msg="CreateContainer within sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\"" Dec 13 14:00:23.333379 env[1308]: time="2024-12-13T14:00:23.333335074Z" level=info msg="StartContainer for \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\"" Dec 13 14:00:23.379320 env[1308]: time="2024-12-13T14:00:23.379039479Z" level=info msg="StartContainer for \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\" returns successfully" Dec 13 14:00:23.547593 kubelet[1564]: I1213 14:00:23.547201 1564 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:00:23.568834 kubelet[1564]: I1213 14:00:23.568722 1564 topology_manager.go:215] "Topology Admit Handler" podUID="f193741c-94e3-44f8-bd49-651205f68015" podNamespace="kube-system" podName="coredns-76f75df574-dcql6" Dec 13 14:00:23.568973 kubelet[1564]: I1213 14:00:23.568885 1564 topology_manager.go:215] "Topology Admit Handler" podUID="b93535fc-7449-4038-9af4-a1041440df00" podNamespace="kube-system" podName="coredns-76f75df574-4mzbc" Dec 13 14:00:23.599146 kubelet[1564]: I1213 14:00:23.599120 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4pwq\" (UniqueName: \"kubernetes.io/projected/b93535fc-7449-4038-9af4-a1041440df00-kube-api-access-t4pwq\") pod \"coredns-76f75df574-4mzbc\" (UID: \"b93535fc-7449-4038-9af4-a1041440df00\") " pod="kube-system/coredns-76f75df574-4mzbc" Dec 13 14:00:23.599331 kubelet[1564]: I1213 14:00:23.599317 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f193741c-94e3-44f8-bd49-651205f68015-config-volume\") pod 
\"coredns-76f75df574-dcql6\" (UID: \"f193741c-94e3-44f8-bd49-651205f68015\") " pod="kube-system/coredns-76f75df574-dcql6" Dec 13 14:00:23.599424 kubelet[1564]: I1213 14:00:23.599413 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55px\" (UniqueName: \"kubernetes.io/projected/f193741c-94e3-44f8-bd49-651205f68015-kube-api-access-s55px\") pod \"coredns-76f75df574-dcql6\" (UID: \"f193741c-94e3-44f8-bd49-651205f68015\") " pod="kube-system/coredns-76f75df574-dcql6" Dec 13 14:00:23.599507 kubelet[1564]: I1213 14:00:23.599497 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b93535fc-7449-4038-9af4-a1041440df00-config-volume\") pod \"coredns-76f75df574-4mzbc\" (UID: \"b93535fc-7449-4038-9af4-a1041440df00\") " pod="kube-system/coredns-76f75df574-4mzbc" Dec 13 14:00:23.654592 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 14:00:23.872593 kubelet[1564]: E1213 14:00:23.872552 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:23.873354 env[1308]: time="2024-12-13T14:00:23.873315332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4mzbc,Uid:b93535fc-7449-4038-9af4-a1041440df00,Namespace:kube-system,Attempt:0,}" Dec 13 14:00:23.873498 kubelet[1564]: E1213 14:00:23.873483 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:23.874052 env[1308]: time="2024-12-13T14:00:23.873892163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dcql6,Uid:f193741c-94e3-44f8-bd49-651205f68015,Namespace:kube-system,Attempt:0,}" Dec 13 14:00:23.889591 kernel: Initializing XFRM netlink socket Dec 13 14:00:23.891625 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 14:00:24.112266 kubelet[1564]: E1213 14:00:24.112217 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:24.321444 kubelet[1564]: E1213 14:00:24.321143 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:24.335608 kubelet[1564]: I1213 14:00:24.335560 1564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rqfws" podStartSLOduration=8.014444331 podStartE2EDuration="15.33551978s" podCreationTimestamp="2024-12-13 14:00:09 +0000 UTC" firstStartedPulling="2024-12-13 14:00:12.060475669 +0000 UTC m=+4.437863478" lastFinishedPulling="2024-12-13 14:00:19.381551078 +0000 UTC m=+11.758938927" observedRunningTime="2024-12-13 14:00:24.334860508 +0000 UTC m=+16.712248317" watchObservedRunningTime="2024-12-13 14:00:24.33551978 +0000 UTC m=+16.712907589" Dec 13 14:00:25.112717 kubelet[1564]: E1213 14:00:25.112667 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:25.322879 kubelet[1564]: E1213 14:00:25.322842 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:25.556664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:00:25.554642 systemd-networkd[1103]: cilium_host: Link UP Dec 13 14:00:25.554854 systemd-networkd[1103]: cilium_net: Link UP Dec 13 14:00:25.554858 systemd-networkd[1103]: cilium_net: Gained carrier Dec 13 14:00:25.555084 systemd-networkd[1103]: cilium_host: Gained carrier Dec 13 14:00:25.556414 systemd-networkd[1103]: cilium_host: Gained IPv6LL Dec 13 14:00:25.635598 systemd-networkd[1103]: cilium_vxlan: Link UP Dec 13 14:00:25.635604 
systemd-networkd[1103]: cilium_vxlan: Gained carrier Dec 13 14:00:25.931630 kernel: NET: Registered PF_ALG protocol family Dec 13 14:00:26.024744 systemd-networkd[1103]: cilium_net: Gained IPv6LL Dec 13 14:00:26.113837 kubelet[1564]: E1213 14:00:26.113801 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:26.323907 kubelet[1564]: E1213 14:00:26.323798 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:26.507750 systemd-networkd[1103]: lxc_health: Link UP Dec 13 14:00:26.520106 systemd-networkd[1103]: lxc_health: Gained carrier Dec 13 14:00:26.520586 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:00:26.929214 systemd-networkd[1103]: lxc0c55e404d11f: Link UP Dec 13 14:00:26.937595 kernel: eth0: renamed from tmp106ce Dec 13 14:00:26.948911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:00:26.949009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0c55e404d11f: link becomes ready Dec 13 14:00:26.948949 systemd-networkd[1103]: lxc0c55e404d11f: Gained carrier Dec 13 14:00:26.949446 systemd-networkd[1103]: lxc2e14e43e198a: Link UP Dec 13 14:00:26.960593 kernel: eth0: renamed from tmp83b59 Dec 13 14:00:26.968040 systemd-networkd[1103]: lxc2e14e43e198a: Gained carrier Dec 13 14:00:26.970401 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2e14e43e198a: link becomes ready Dec 13 14:00:27.114437 kubelet[1564]: E1213 14:00:27.114379 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:27.256739 systemd-networkd[1103]: cilium_vxlan: Gained IPv6LL Dec 13 14:00:27.415455 kubelet[1564]: E1213 14:00:27.415413 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:27.832760 systemd-networkd[1103]: lxc_health: Gained IPv6LL Dec 13 14:00:28.114856 kubelet[1564]: E1213 14:00:28.114742 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:28.152790 systemd-networkd[1103]: lxc2e14e43e198a: Gained IPv6LL Dec 13 14:00:28.326848 kubelet[1564]: E1213 14:00:28.326806 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:28.920711 systemd-networkd[1103]: lxc0c55e404d11f: Gained IPv6LL Dec 13 14:00:29.102053 kubelet[1564]: E1213 14:00:29.102008 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:29.115227 kubelet[1564]: E1213 14:00:29.115203 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:30.115952 kubelet[1564]: E1213 14:00:30.115898 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:30.432379 env[1308]: time="2024-12-13T14:00:30.432146806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:00:30.432726 env[1308]: time="2024-12-13T14:00:30.432355749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:00:30.432726 env[1308]: time="2024-12-13T14:00:30.432386557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:00:30.432726 env[1308]: time="2024-12-13T14:00:30.432396667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:00:30.432850 env[1308]: time="2024-12-13T14:00:30.432814673Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/106ce651cf48d8d3a63cc303b6c10f50c3f36ed2f194d9af3920c056b4085826 pid=2656 runtime=io.containerd.runc.v2 Dec 13 14:00:30.432964 env[1308]: time="2024-12-13T14:00:30.432937905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:00:30.433073 env[1308]: time="2024-12-13T14:00:30.433052506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:00:30.433417 env[1308]: time="2024-12-13T14:00:30.433347160Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83b595db925a0e2fecd87ef050827fbd70d6a9e14131cc6afb1426eee8c273c8 pid=2657 runtime=io.containerd.runc.v2 Dec 13 14:00:30.495965 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:00:30.514351 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:00:30.520633 env[1308]: time="2024-12-13T14:00:30.520596181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4mzbc,Uid:b93535fc-7449-4038-9af4-a1041440df00,Namespace:kube-system,Attempt:0,} returns sandbox id \"83b595db925a0e2fecd87ef050827fbd70d6a9e14131cc6afb1426eee8c273c8\"" Dec 13 14:00:30.521314 kubelet[1564]: E1213 14:00:30.521292 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:30.522502 env[1308]: time="2024-12-13T14:00:30.522463402Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:00:30.538155 env[1308]: time="2024-12-13T14:00:30.538117383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dcql6,Uid:f193741c-94e3-44f8-bd49-651205f68015,Namespace:kube-system,Attempt:0,} returns sandbox id \"106ce651cf48d8d3a63cc303b6c10f50c3f36ed2f194d9af3920c056b4085826\"" Dec 13 14:00:30.538700 kubelet[1564]: E1213 14:00:30.538680 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:31.116436 kubelet[1564]: E1213 14:00:31.116390 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:31.164971 kubelet[1564]: I1213 14:00:31.164771 1564 topology_manager.go:215] "Topology Admit Handler" podUID="622a2ff3-87b4-4b30-a892-2c04d878751d" podNamespace="default" podName="nginx-deployment-6d5f899847-2d2t9" Dec 13 14:00:31.236429 kubelet[1564]: I1213 14:00:31.236377 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgnd\" (UniqueName: \"kubernetes.io/projected/622a2ff3-87b4-4b30-a892-2c04d878751d-kube-api-access-vpgnd\") pod \"nginx-deployment-6d5f899847-2d2t9\" (UID: \"622a2ff3-87b4-4b30-a892-2c04d878751d\") " pod="default/nginx-deployment-6d5f899847-2d2t9" Dec 13 14:00:31.468474 env[1308]: time="2024-12-13T14:00:31.468214653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2d2t9,Uid:622a2ff3-87b4-4b30-a892-2c04d878751d,Namespace:default,Attempt:0,}" Dec 13 14:00:31.507990 systemd-networkd[1103]: lxca8b86ed0ab19: Link UP Dec 13 14:00:31.517594 kernel: eth0: renamed from tmpc5f4f Dec 13 14:00:31.524241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:00:31.524331 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca8b86ed0ab19: link becomes ready Dec 13 
14:00:31.524457 systemd-networkd[1103]: lxca8b86ed0ab19: Gained carrier Dec 13 14:00:31.692806 env[1308]: time="2024-12-13T14:00:31.692722808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:00:31.692806 env[1308]: time="2024-12-13T14:00:31.692762172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:00:31.692806 env[1308]: time="2024-12-13T14:00:31.692772603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:00:31.692994 env[1308]: time="2024-12-13T14:00:31.692888417Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5f4fafc34dec54741eecfd0c556147afa9094e06513b7501064936cacff7426 pid=2761 runtime=io.containerd.runc.v2 Dec 13 14:00:31.723209 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:00:31.740302 env[1308]: time="2024-12-13T14:00:31.740263769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2d2t9,Uid:622a2ff3-87b4-4b30-a892-2c04d878751d,Namespace:default,Attempt:0,} returns sandbox id \"c5f4fafc34dec54741eecfd0c556147afa9094e06513b7501064936cacff7426\"" Dec 13 14:00:31.907488 env[1308]: time="2024-12-13T14:00:31.907446654Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:31.911629 env[1308]: time="2024-12-13T14:00:31.911589329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:31.914187 env[1308]: 
time="2024-12-13T14:00:31.914152400Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:31.915808 env[1308]: time="2024-12-13T14:00:31.915774846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:31.916653 env[1308]: time="2024-12-13T14:00:31.916619838Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:00:31.917753 env[1308]: time="2024-12-13T14:00:31.917730989Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:00:31.920078 env[1308]: time="2024-12-13T14:00:31.920026503Z" level=info msg="CreateContainer within sandbox \"83b595db925a0e2fecd87ef050827fbd70d6a9e14131cc6afb1426eee8c273c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:00:31.930233 env[1308]: time="2024-12-13T14:00:31.930184393Z" level=info msg="CreateContainer within sandbox \"83b595db925a0e2fecd87ef050827fbd70d6a9e14131cc6afb1426eee8c273c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce54248ffce9ddaebd0c0cb16d7923a95f4f915f5fbc3e8c216d07395bf79f27\"" Dec 13 14:00:31.930739 env[1308]: time="2024-12-13T14:00:31.930714191Z" level=info msg="StartContainer for \"ce54248ffce9ddaebd0c0cb16d7923a95f4f915f5fbc3e8c216d07395bf79f27\"" Dec 13 14:00:31.987425 env[1308]: time="2024-12-13T14:00:31.987329946Z" level=info msg="StartContainer for \"ce54248ffce9ddaebd0c0cb16d7923a95f4f915f5fbc3e8c216d07395bf79f27\" returns successfully" Dec 13 14:00:32.085687 env[1308]: time="2024-12-13T14:00:32.085626095Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:32.087159 env[1308]: time="2024-12-13T14:00:32.087126981Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:32.088902 env[1308]: time="2024-12-13T14:00:32.088879868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:32.090681 env[1308]: time="2024-12-13T14:00:32.090656216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:32.091472 env[1308]: time="2024-12-13T14:00:32.091448226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:00:32.092008 env[1308]: time="2024-12-13T14:00:32.091985919Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:00:32.093709 env[1308]: time="2024-12-13T14:00:32.093679053Z" level=info msg="CreateContainer within sandbox \"106ce651cf48d8d3a63cc303b6c10f50c3f36ed2f194d9af3920c056b4085826\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:00:32.102680 env[1308]: time="2024-12-13T14:00:32.102636451Z" level=info msg="CreateContainer within sandbox \"106ce651cf48d8d3a63cc303b6c10f50c3f36ed2f194d9af3920c056b4085826\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09c296890d456a1d083da376b02c6b8ddd8c4a84daa7d20f76af1d1dd95070b2\"" Dec 13 14:00:32.103315 env[1308]: time="2024-12-13T14:00:32.103289212Z" 
level=info msg="StartContainer for \"09c296890d456a1d083da376b02c6b8ddd8c4a84daa7d20f76af1d1dd95070b2\"" Dec 13 14:00:32.117591 kubelet[1564]: E1213 14:00:32.116667 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:32.158596 env[1308]: time="2024-12-13T14:00:32.158535252Z" level=info msg="StartContainer for \"09c296890d456a1d083da376b02c6b8ddd8c4a84daa7d20f76af1d1dd95070b2\" returns successfully" Dec 13 14:00:32.336073 kubelet[1564]: E1213 14:00:32.336042 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:32.340774 kubelet[1564]: E1213 14:00:32.340704 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:32.347457 kubelet[1564]: I1213 14:00:32.347289 1564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4mzbc" podStartSLOduration=34.952483136 podStartE2EDuration="36.34725458s" podCreationTimestamp="2024-12-13 13:59:56 +0000 UTC" firstStartedPulling="2024-12-13 14:00:30.522208027 +0000 UTC m=+22.899595836" lastFinishedPulling="2024-12-13 14:00:31.916979471 +0000 UTC m=+24.294367280" observedRunningTime="2024-12-13 14:00:32.346849982 +0000 UTC m=+24.724237791" watchObservedRunningTime="2024-12-13 14:00:32.34725458 +0000 UTC m=+24.724642389" Dec 13 14:00:32.365914 kubelet[1564]: I1213 14:00:32.365874 1564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dcql6" podStartSLOduration=34.813370674 podStartE2EDuration="36.365835048s" podCreationTimestamp="2024-12-13 13:59:56 +0000 UTC" firstStartedPulling="2024-12-13 14:00:30.539276419 +0000 UTC m=+22.916664188" lastFinishedPulling="2024-12-13 14:00:32.091740753 
+0000 UTC m=+24.469128562" observedRunningTime="2024-12-13 14:00:32.365514783 +0000 UTC m=+24.742902592" watchObservedRunningTime="2024-12-13 14:00:32.365835048 +0000 UTC m=+24.743222857" Dec 13 14:00:33.117462 kubelet[1564]: E1213 14:00:33.117419 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:33.342217 kubelet[1564]: E1213 14:00:33.342191 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:33.342606 kubelet[1564]: E1213 14:00:33.342245 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:33.464733 systemd-networkd[1103]: lxca8b86ed0ab19: Gained IPv6LL Dec 13 14:00:34.118116 kubelet[1564]: E1213 14:00:34.118078 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:34.343203 kubelet[1564]: E1213 14:00:34.343168 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:00:35.118711 kubelet[1564]: E1213 14:00:35.118519 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:36.083452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597878780.mount: Deactivated successfully. 
Dec 13 14:00:36.119114 kubelet[1564]: E1213 14:00:36.119062 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:37.120024 kubelet[1564]: E1213 14:00:37.119967 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:37.517503 env[1308]: time="2024-12-13T14:00:37.517442459Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:37.518721 env[1308]: time="2024-12-13T14:00:37.518688635Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:37.520412 env[1308]: time="2024-12-13T14:00:37.520385107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:37.522243 env[1308]: time="2024-12-13T14:00:37.522214019Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:00:37.523055 env[1308]: time="2024-12-13T14:00:37.523027545Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 14:00:37.525153 env[1308]: time="2024-12-13T14:00:37.525122777Z" level=info msg="CreateContainer within sandbox \"c5f4fafc34dec54741eecfd0c556147afa9094e06513b7501064936cacff7426\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:00:37.534410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670349699.mount: 
Deactivated successfully. Dec 13 14:00:37.537156 env[1308]: time="2024-12-13T14:00:37.537121238Z" level=info msg="CreateContainer within sandbox \"c5f4fafc34dec54741eecfd0c556147afa9094e06513b7501064936cacff7426\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fb5c444350f8acdd6af4eae33971d52ca735ad2e2b6333b69c5f5939e081089e\"" Dec 13 14:00:37.537967 env[1308]: time="2024-12-13T14:00:37.537934844Z" level=info msg="StartContainer for \"fb5c444350f8acdd6af4eae33971d52ca735ad2e2b6333b69c5f5939e081089e\"" Dec 13 14:00:37.588921 env[1308]: time="2024-12-13T14:00:37.588867895Z" level=info msg="StartContainer for \"fb5c444350f8acdd6af4eae33971d52ca735ad2e2b6333b69c5f5939e081089e\" returns successfully" Dec 13 14:00:38.120735 kubelet[1564]: E1213 14:00:38.120687 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:39.121765 kubelet[1564]: E1213 14:00:39.121688 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:40.122111 kubelet[1564]: E1213 14:00:40.122070 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:41.122648 kubelet[1564]: E1213 14:00:41.122601 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:42.123404 kubelet[1564]: E1213 14:00:42.123362 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:43.124413 kubelet[1564]: E1213 14:00:43.124368 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:00:43.259604 kubelet[1564]: I1213 14:00:43.259523 1564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-2d2t9" 
podStartSLOduration=6.477693383 podStartE2EDuration="12.259481916s" podCreationTimestamp="2024-12-13 14:00:31 +0000 UTC" firstStartedPulling="2024-12-13 14:00:31.741496409 +0000 UTC m=+24.118884178" lastFinishedPulling="2024-12-13 14:00:37.523284902 +0000 UTC m=+29.900672711" observedRunningTime="2024-12-13 14:00:38.363150118 +0000 UTC m=+30.740537927" watchObservedRunningTime="2024-12-13 14:00:43.259481916 +0000 UTC m=+35.636869685"
Dec 13 14:00:43.259788 kubelet[1564]: I1213 14:00:43.259663 1564 topology_manager.go:215] "Topology Admit Handler" podUID="c43632a2-33b1-4ecf-a955-c9cb46cc14a9" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 14:00:43.302047 kubelet[1564]: I1213 14:00:43.301991 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6kn9\" (UniqueName: \"kubernetes.io/projected/c43632a2-33b1-4ecf-a955-c9cb46cc14a9-kube-api-access-w6kn9\") pod \"nfs-server-provisioner-0\" (UID: \"c43632a2-33b1-4ecf-a955-c9cb46cc14a9\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:00:43.302190 kubelet[1564]: I1213 14:00:43.302073 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c43632a2-33b1-4ecf-a955-c9cb46cc14a9-data\") pod \"nfs-server-provisioner-0\" (UID: \"c43632a2-33b1-4ecf-a955-c9cb46cc14a9\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:00:43.564268 env[1308]: time="2024-12-13T14:00:43.564211834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c43632a2-33b1-4ecf-a955-c9cb46cc14a9,Namespace:default,Attempt:0,}"
Dec 13 14:00:43.588674 systemd-networkd[1103]: lxca7bfe5fe34f6: Link UP
Dec 13 14:00:43.605616 kernel: eth0: renamed from tmp51d9d
Dec 13 14:00:43.615784 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:00:43.615886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca7bfe5fe34f6: link becomes ready
Dec 13 14:00:43.615938 systemd-networkd[1103]: lxca7bfe5fe34f6: Gained carrier
Dec 13 14:00:43.794199 env[1308]: time="2024-12-13T14:00:43.794110968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:00:43.794199 env[1308]: time="2024-12-13T14:00:43.794164700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:00:43.794383 env[1308]: time="2024-12-13T14:00:43.794175702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:00:43.794383 env[1308]: time="2024-12-13T14:00:43.794314893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51d9d1c6726d8572b4e5c23654ac5509ec589b6f7eedeec56d50f4abc0dff0f5 pid=2975 runtime=io.containerd.runc.v2
Dec 13 14:00:43.827152 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:00:43.846030 env[1308]: time="2024-12-13T14:00:43.845983098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c43632a2-33b1-4ecf-a955-c9cb46cc14a9,Namespace:default,Attempt:0,} returns sandbox id \"51d9d1c6726d8572b4e5c23654ac5509ec589b6f7eedeec56d50f4abc0dff0f5\""
Dec 13 14:00:43.847512 env[1308]: time="2024-12-13T14:00:43.847474424Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 14:00:44.125302 kubelet[1564]: E1213 14:00:44.125164 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:45.112788 systemd-networkd[1103]: lxca7bfe5fe34f6: Gained IPv6LL
Dec 13 14:00:45.125592 kubelet[1564]: E1213 14:00:45.125542 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:46.011116 update_engine[1300]: I1213 14:00:46.010695 1300 update_attempter.cc:509] Updating boot flags...
Dec 13 14:00:46.018495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781017609.mount: Deactivated successfully.
Dec 13 14:00:46.126015 kubelet[1564]: E1213 14:00:46.125967 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:47.126850 kubelet[1564]: E1213 14:00:47.126802 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:47.798361 env[1308]: time="2024-12-13T14:00:47.798316843Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:00:47.800176 env[1308]: time="2024-12-13T14:00:47.800143089Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:00:47.802224 env[1308]: time="2024-12-13T14:00:47.802196775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:00:47.804373 env[1308]: time="2024-12-13T14:00:47.804328915Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:00:47.805196 env[1308]: time="2024-12-13T14:00:47.805161904Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Dec 13 14:00:47.807507 env[1308]: time="2024-12-13T14:00:47.807475316Z" level=info msg="CreateContainer within sandbox \"51d9d1c6726d8572b4e5c23654ac5509ec589b6f7eedeec56d50f4abc0dff0f5\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:00:47.816935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1823462149.mount: Deactivated successfully.
Dec 13 14:00:47.820693 env[1308]: time="2024-12-13T14:00:47.820641784Z" level=info msg="CreateContainer within sandbox \"51d9d1c6726d8572b4e5c23654ac5509ec589b6f7eedeec56d50f4abc0dff0f5\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ac8651d90edab8d34be9900cc3103c2a768553a8285967e42aa095f7af9b8956\""
Dec 13 14:00:47.821115 env[1308]: time="2024-12-13T14:00:47.821086943Z" level=info msg="StartContainer for \"ac8651d90edab8d34be9900cc3103c2a768553a8285967e42aa095f7af9b8956\""
Dec 13 14:00:48.068920 env[1308]: time="2024-12-13T14:00:48.068778444Z" level=info msg="StartContainer for \"ac8651d90edab8d34be9900cc3103c2a768553a8285967e42aa095f7af9b8956\" returns successfully"
Dec 13 14:00:48.127235 kubelet[1564]: E1213 14:00:48.127194 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:48.383474 kubelet[1564]: I1213 14:00:48.383359 1564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.424978955 podStartE2EDuration="5.383315086s" podCreationTimestamp="2024-12-13 14:00:43 +0000 UTC" firstStartedPulling="2024-12-13 14:00:43.847097101 +0000 UTC m=+36.224484910" lastFinishedPulling="2024-12-13 14:00:47.805433232 +0000 UTC m=+40.182821041" observedRunningTime="2024-12-13 14:00:48.38298119 +0000 UTC m=+40.760368999" watchObservedRunningTime="2024-12-13 14:00:48.383315086 +0000 UTC m=+40.760702895"
Dec 13 14:00:49.101690 kubelet[1564]: E1213 14:00:49.101626 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:49.128080 kubelet[1564]: E1213 14:00:49.128027 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:50.128907 kubelet[1564]: E1213 14:00:50.128850 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:51.129464 kubelet[1564]: E1213 14:00:51.129417 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:52.129634 kubelet[1564]: E1213 14:00:52.129581 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:53.130414 kubelet[1564]: E1213 14:00:53.130361 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:54.131432 kubelet[1564]: E1213 14:00:54.131382 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:55.131591 kubelet[1564]: E1213 14:00:55.131508 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:56.131726 kubelet[1564]: E1213 14:00:56.131662 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:57.132688 kubelet[1564]: E1213 14:00:57.132636 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:58.007606 kubelet[1564]: I1213 14:00:58.007546 1564 topology_manager.go:215] "Topology Admit Handler" podUID="6c5c1c38-782c-4ab0-ba8f-96f6a24cad68" podNamespace="default" podName="test-pod-1"
Dec 13 14:00:58.083310 kubelet[1564]: I1213 14:00:58.083256 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d85x\" (UniqueName: \"kubernetes.io/projected/6c5c1c38-782c-4ab0-ba8f-96f6a24cad68-kube-api-access-2d85x\") pod \"test-pod-1\" (UID: \"6c5c1c38-782c-4ab0-ba8f-96f6a24cad68\") " pod="default/test-pod-1"
Dec 13 14:00:58.083461 kubelet[1564]: I1213 14:00:58.083358 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8aaa23ff-9e55-4fd8-9749-b3862e1896bd\" (UniqueName: \"kubernetes.io/nfs/6c5c1c38-782c-4ab0-ba8f-96f6a24cad68-pvc-8aaa23ff-9e55-4fd8-9749-b3862e1896bd\") pod \"test-pod-1\" (UID: \"6c5c1c38-782c-4ab0-ba8f-96f6a24cad68\") " pod="default/test-pod-1"
Dec 13 14:00:58.133657 kubelet[1564]: E1213 14:00:58.133610 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:58.207601 kernel: FS-Cache: Loaded
Dec 13 14:00:58.237974 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:00:58.238123 kernel: RPC: Registered udp transport module.
Dec 13 14:00:58.238162 kernel: RPC: Registered tcp transport module.
Dec 13 14:00:58.239201 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:00:58.280595 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:00:58.409865 kernel: NFS: Registering the id_resolver key type
Dec 13 14:00:58.409988 kernel: Key type id_resolver registered
Dec 13 14:00:58.410009 kernel: Key type id_legacy registered
Dec 13 14:00:58.436208 nfsidmap[3099]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:00:58.439366 nfsidmap[3102]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:00:58.611564 env[1308]: time="2024-12-13T14:00:58.611210424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6c5c1c38-782c-4ab0-ba8f-96f6a24cad68,Namespace:default,Attempt:0,}"
Dec 13 14:00:58.638120 systemd-networkd[1103]: lxcc518670fb75a: Link UP
Dec 13 14:00:58.653588 kernel: eth0: renamed from tmpc2c39
Dec 13 14:00:58.660602 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:00:58.660683 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc518670fb75a: link becomes ready
Dec 13 14:00:58.660776 systemd-networkd[1103]: lxcc518670fb75a: Gained carrier
Dec 13 14:00:58.796105 env[1308]: time="2024-12-13T14:00:58.796036578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:00:58.796105 env[1308]: time="2024-12-13T14:00:58.796079822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:00:58.796105 env[1308]: time="2024-12-13T14:00:58.796090263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:00:58.798149 env[1308]: time="2024-12-13T14:00:58.798116964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2c39f2cab47ed078aafa542310d3178279f592f16c1052c300d6b6153e2e5e8 pid=3134 runtime=io.containerd.runc.v2
Dec 13 14:00:58.847785 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:00:58.866769 env[1308]: time="2024-12-13T14:00:58.865559678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6c5c1c38-782c-4ab0-ba8f-96f6a24cad68,Namespace:default,Attempt:0,} returns sandbox id \"c2c39f2cab47ed078aafa542310d3178279f592f16c1052c300d6b6153e2e5e8\""
Dec 13 14:00:58.867318 env[1308]: time="2024-12-13T14:00:58.867262784Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:00:59.134558 kubelet[1564]: E1213 14:00:59.134214 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:00:59.647076 env[1308]: time="2024-12-13T14:00:59.647026468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:00:59.649348 env[1308]: time="2024-12-13T14:00:59.649310507Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:00:59.650913 env[1308]: time="2024-12-13T14:00:59.650878752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:00:59.655767 env[1308]: time="2024-12-13T14:00:59.655733301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:00:59.656425 env[1308]: time="2024-12-13T14:00:59.656394530Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 14:00:59.658196 env[1308]: time="2024-12-13T14:00:59.658168436Z" level=info msg="CreateContainer within sandbox \"c2c39f2cab47ed078aafa542310d3178279f592f16c1052c300d6b6153e2e5e8\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:00:59.668498 env[1308]: time="2024-12-13T14:00:59.668460475Z" level=info msg="CreateContainer within sandbox \"c2c39f2cab47ed078aafa542310d3178279f592f16c1052c300d6b6153e2e5e8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"37f81beeb0a85fa9ad93c90e8ffbb9a27431eb20c75257f159f218e85fb9a41d\""
Dec 13 14:00:59.669447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657131179.mount: Deactivated successfully.
Dec 13 14:00:59.670531 env[1308]: time="2024-12-13T14:00:59.670484928Z" level=info msg="StartContainer for \"37f81beeb0a85fa9ad93c90e8ffbb9a27431eb20c75257f159f218e85fb9a41d\""
Dec 13 14:00:59.728854 env[1308]: time="2024-12-13T14:00:59.728803003Z" level=info msg="StartContainer for \"37f81beeb0a85fa9ad93c90e8ffbb9a27431eb20c75257f159f218e85fb9a41d\" returns successfully"
Dec 13 14:01:00.134476 kubelet[1564]: E1213 14:01:00.134401 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:00.404420 kubelet[1564]: I1213 14:01:00.404115 1564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.61394069 podStartE2EDuration="17.404076944s" podCreationTimestamp="2024-12-13 14:00:43 +0000 UTC" firstStartedPulling="2024-12-13 14:00:58.866561708 +0000 UTC m=+51.243949517" lastFinishedPulling="2024-12-13 14:00:59.656697962 +0000 UTC m=+52.034085771" observedRunningTime="2024-12-13 14:01:00.403801396 +0000 UTC m=+52.781189165" watchObservedRunningTime="2024-12-13 14:01:00.404076944 +0000 UTC m=+52.781464753"
Dec 13 14:01:00.666622 systemd-networkd[1103]: lxcc518670fb75a: Gained IPv6LL
Dec 13 14:01:01.134660 kubelet[1564]: E1213 14:01:01.134596 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:02.135717 kubelet[1564]: E1213 14:01:02.135661 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:03.136024 kubelet[1564]: E1213 14:01:03.135971 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:04.136839 kubelet[1564]: E1213 14:01:04.136791 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:05.137568 kubelet[1564]: E1213 14:01:05.137526 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:06.139242 kubelet[1564]: E1213 14:01:06.139204 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:06.143199 env[1308]: time="2024-12-13T14:01:06.143135199Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:01:06.148666 env[1308]: time="2024-12-13T14:01:06.148629849Z" level=info msg="StopContainer for \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\" with timeout 2 (s)"
Dec 13 14:01:06.149084 env[1308]: time="2024-12-13T14:01:06.149054204Z" level=info msg="Stop container \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\" with signal terminated"
Dec 13 14:01:06.155719 systemd-networkd[1103]: lxc_health: Link DOWN
Dec 13 14:01:06.155724 systemd-networkd[1103]: lxc_health: Lost carrier
Dec 13 14:01:06.211291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a-rootfs.mount: Deactivated successfully.
Dec 13 14:01:06.219335 env[1308]: time="2024-12-13T14:01:06.219285245Z" level=info msg="shim disconnected" id=b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a
Dec 13 14:01:06.219335 env[1308]: time="2024-12-13T14:01:06.219334289Z" level=warning msg="cleaning up after shim disconnected" id=b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a namespace=k8s.io
Dec 13 14:01:06.219527 env[1308]: time="2024-12-13T14:01:06.219345890Z" level=info msg="cleaning up dead shim"
Dec 13 14:01:06.226049 env[1308]: time="2024-12-13T14:01:06.226014637Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3267 runtime=io.containerd.runc.v2\n"
Dec 13 14:01:06.228107 env[1308]: time="2024-12-13T14:01:06.228072046Z" level=info msg="StopContainer for \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\" returns successfully"
Dec 13 14:01:06.228695 env[1308]: time="2024-12-13T14:01:06.228664654Z" level=info msg="StopPodSandbox for \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\""
Dec 13 14:01:06.228836 env[1308]: time="2024-12-13T14:01:06.228814546Z" level=info msg="Container to stop \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:01:06.228904 env[1308]: time="2024-12-13T14:01:06.228887472Z" level=info msg="Container to stop \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:01:06.228977 env[1308]: time="2024-12-13T14:01:06.228958478Z" level=info msg="Container to stop \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:01:06.229041 env[1308]: time="2024-12-13T14:01:06.229024764Z" level=info msg="Container to stop \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:01:06.229111 env[1308]: time="2024-12-13T14:01:06.229094809Z" level=info msg="Container to stop \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:01:06.231069 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd-shm.mount: Deactivated successfully.
Dec 13 14:01:06.256137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd-rootfs.mount: Deactivated successfully.
Dec 13 14:01:06.261539 env[1308]: time="2024-12-13T14:01:06.261497787Z" level=info msg="shim disconnected" id=accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd
Dec 13 14:01:06.261736 env[1308]: time="2024-12-13T14:01:06.261716965Z" level=warning msg="cleaning up after shim disconnected" id=accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd namespace=k8s.io
Dec 13 14:01:06.261797 env[1308]: time="2024-12-13T14:01:06.261783771Z" level=info msg="cleaning up dead shim"
Dec 13 14:01:06.268341 env[1308]: time="2024-12-13T14:01:06.268305906Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3300 runtime=io.containerd.runc.v2\n"
Dec 13 14:01:06.268765 env[1308]: time="2024-12-13T14:01:06.268736541Z" level=info msg="TearDown network for sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" successfully"
Dec 13 14:01:06.268860 env[1308]: time="2024-12-13T14:01:06.268842270Z" level=info msg="StopPodSandbox for \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" returns successfully"
Dec 13 14:01:06.408956 kubelet[1564]: I1213 14:01:06.407764 1564 scope.go:117] "RemoveContainer" containerID="b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a"
Dec 13 14:01:06.409127 env[1308]: time="2024-12-13T14:01:06.409069132Z" level=info msg="RemoveContainer for \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\""
Dec 13 14:01:06.412525 env[1308]: time="2024-12-13T14:01:06.412486732Z" level=info msg="RemoveContainer for \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\" returns successfully"
Dec 13 14:01:06.412758 kubelet[1564]: I1213 14:01:06.412737 1564 scope.go:117] "RemoveContainer" containerID="dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf"
Dec 13 14:01:06.413657 env[1308]: time="2024-12-13T14:01:06.413617145Z" level=info msg="RemoveContainer for \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\""
Dec 13 14:01:06.415969 env[1308]: time="2024-12-13T14:01:06.415934175Z" level=info msg="RemoveContainer for \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\" returns successfully"
Dec 13 14:01:06.416118 kubelet[1564]: I1213 14:01:06.416093 1564 scope.go:117] "RemoveContainer" containerID="2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8"
Dec 13 14:01:06.417063 env[1308]: time="2024-12-13T14:01:06.416997702Z" level=info msg="RemoveContainer for \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\""
Dec 13 14:01:06.419327 env[1308]: time="2024-12-13T14:01:06.419296051Z" level=info msg="RemoveContainer for \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\" returns successfully"
Dec 13 14:01:06.419462 kubelet[1564]: I1213 14:01:06.419436 1564 scope.go:117] "RemoveContainer" containerID="9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9"
Dec 13 14:01:06.420348 env[1308]: time="2024-12-13T14:01:06.420325815Z" level=info msg="RemoveContainer for \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\""
Dec 13 14:01:06.422450 env[1308]: time="2024-12-13T14:01:06.422416907Z" level=info msg="RemoveContainer for \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\" returns successfully"
Dec 13 14:01:06.422585 kubelet[1564]: I1213 14:01:06.422548 1564 scope.go:117] "RemoveContainer" containerID="774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529"
Dec 13 14:01:06.423429 kubelet[1564]: I1213 14:01:06.423413 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-lib-modules\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423506 kubelet[1564]: I1213 14:01:06.423448 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp2sq\" (UniqueName: \"kubernetes.io/projected/c89c7975-b510-4a63-9c28-0517ba07bce2-kube-api-access-zp2sq\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423506 kubelet[1564]: I1213 14:01:06.423471 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-etc-cni-netd\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423506 kubelet[1564]: I1213 14:01:06.423492 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-config-path\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423603 kubelet[1564]: I1213 14:01:06.423511 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-host-proc-sys-net\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423603 kubelet[1564]: I1213 14:01:06.423529 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-hostproc\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423603 kubelet[1564]: I1213 14:01:06.423549 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c89c7975-b510-4a63-9c28-0517ba07bce2-hubble-tls\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423603 kubelet[1564]: I1213 14:01:06.423567 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-cgroup\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423603 kubelet[1564]: I1213 14:01:06.423597 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c89c7975-b510-4a63-9c28-0517ba07bce2-clustermesh-secrets\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423729 env[1308]: time="2024-12-13T14:01:06.423551520Z" level=info msg="RemoveContainer for \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\""
Dec 13 14:01:06.423756 kubelet[1564]: I1213 14:01:06.423615 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-bpf-maps\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423756 kubelet[1564]: I1213 14:01:06.423634 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-host-proc-sys-kernel\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423756 kubelet[1564]: I1213 14:01:06.423651 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-run\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423756 kubelet[1564]: I1213 14:01:06.423669 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cni-path\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423756 kubelet[1564]: I1213 14:01:06.423685 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-xtables-lock\") pod \"c89c7975-b510-4a63-9c28-0517ba07bce2\" (UID: \"c89c7975-b510-4a63-9c28-0517ba07bce2\") "
Dec 13 14:01:06.423756 kubelet[1564]: I1213 14:01:06.423720 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.423884 kubelet[1564]: I1213 14:01:06.423751 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.424192 kubelet[1564]: I1213 14:01:06.424163 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.424263 kubelet[1564]: I1213 14:01:06.424215 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-hostproc" (OuterVolumeSpecName: "hostproc") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.424263 kubelet[1564]: I1213 14:01:06.424236 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.424263 kubelet[1564]: I1213 14:01:06.424253 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.424346 kubelet[1564]: I1213 14:01:06.424271 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.424626 kubelet[1564]: I1213 14:01:06.424593 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.424685 kubelet[1564]: I1213 14:01:06.424637 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.424685 kubelet[1564]: I1213 14:01:06.424656 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cni-path" (OuterVolumeSpecName: "cni-path") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:01:06.425556 kubelet[1564]: I1213 14:01:06.425511 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:01:06.426793 env[1308]: time="2024-12-13T14:01:06.426764903Z" level=info msg="RemoveContainer for \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\" returns successfully"
Dec 13 14:01:06.428170 systemd[1]: var-lib-kubelet-pods-c89c7975\x2db510\x2d4a63\x2d9c28\x2d0517ba07bce2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzp2sq.mount: Deactivated successfully.
Dec 13 14:01:06.428316 systemd[1]: var-lib-kubelet-pods-c89c7975\x2db510\x2d4a63\x2d9c28\x2d0517ba07bce2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:01:06.428388 kubelet[1564]: I1213 14:01:06.428349 1564 scope.go:117] "RemoveContainer" containerID="b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a"
Dec 13 14:01:06.428935 kubelet[1564]: I1213 14:01:06.428893 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89c7975-b510-4a63-9c28-0517ba07bce2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:01:06.429013 env[1308]: time="2024-12-13T14:01:06.428932641Z" level=error msg="ContainerStatus for \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\": not found"
Dec 13 14:01:06.429152 kubelet[1564]: E1213 14:01:06.429121 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\": not found" containerID="b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a"
Dec 13 14:01:06.429222 kubelet[1564]: I1213 14:01:06.429208 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a"} err="failed to get container status \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b928022f56004a895f17b38842fa4dc803626e9a50be6431addb7d39890c900a\": not found"
Dec 13 14:01:06.429254 kubelet[1564]: I1213 14:01:06.429227 1564 scope.go:117] "RemoveContainer" containerID="dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf"
Dec 13 14:01:06.429446 env[1308]: time="2024-12-13T14:01:06.429387558Z" level=error msg="ContainerStatus for \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\": not found"
Dec 13 14:01:06.429565 kubelet[1564]: E1213 14:01:06.429549 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\": not found" containerID="dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf"
Dec 13 14:01:06.429694 kubelet[1564]: I1213 14:01:06.429670 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89c7975-b510-4a63-9c28-0517ba07bce2-kube-api-access-zp2sq" (OuterVolumeSpecName: "kube-api-access-zp2sq") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "kube-api-access-zp2sq".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:01:06.429738 kubelet[1564]: I1213 14:01:06.429726 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf"} err="failed to get container status \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd297e6e00a00b56102eb53bc905c8ed1d053a0e97c00d5c2b51dc9a4ca8f4cf\": not found" Dec 13 14:01:06.429765 kubelet[1564]: I1213 14:01:06.429743 1564 scope.go:117] "RemoveContainer" containerID="2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8" Dec 13 14:01:06.429930 kubelet[1564]: I1213 14:01:06.429906 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89c7975-b510-4a63-9c28-0517ba07bce2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c89c7975-b510-4a63-9c28-0517ba07bce2" (UID: "c89c7975-b510-4a63-9c28-0517ba07bce2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:01:06.429955 env[1308]: time="2024-12-13T14:01:06.429915202Z" level=error msg="ContainerStatus for \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\": not found" Dec 13 14:01:06.430075 kubelet[1564]: E1213 14:01:06.430059 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\": not found" containerID="2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8" Dec 13 14:01:06.430101 kubelet[1564]: I1213 14:01:06.430089 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8"} err="failed to get container status \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f894cb3d3f60ed612b27f068bd743aa1a173c2a898a6ac56e2fd9964724e5e8\": not found" Dec 13 14:01:06.430101 kubelet[1564]: I1213 14:01:06.430100 1564 scope.go:117] "RemoveContainer" containerID="9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9" Dec 13 14:01:06.430277 env[1308]: time="2024-12-13T14:01:06.430236028Z" level=error msg="ContainerStatus for \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\": not found" Dec 13 14:01:06.430371 kubelet[1564]: E1213 14:01:06.430357 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\": not found" containerID="9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9" Dec 13 14:01:06.430399 kubelet[1564]: I1213 14:01:06.430387 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9"} err="failed to get container status \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"9df984a227cda608529fcddd49f2648bd5f02f2c88342585a11a4dbb0caff0d9\": not found" Dec 13 14:01:06.430399 kubelet[1564]: I1213 14:01:06.430397 1564 scope.go:117] "RemoveContainer" containerID="774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529" Dec 13 14:01:06.430557 env[1308]: time="2024-12-13T14:01:06.430520131Z" level=error msg="ContainerStatus for \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\": not found" Dec 13 14:01:06.430736 kubelet[1564]: E1213 14:01:06.430718 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\": not found" containerID="774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529" Dec 13 14:01:06.430766 kubelet[1564]: I1213 14:01:06.430757 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529"} err="failed to get container status \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"774307cff18fa8e510f0238b656c8fe2ca05c73f4e4ddd5587fdda5e05c15529\": not found" Dec 13 14:01:06.524133 kubelet[1564]: I1213 14:01:06.524091 1564 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zp2sq\" (UniqueName: \"kubernetes.io/projected/c89c7975-b510-4a63-9c28-0517ba07bce2-kube-api-access-zp2sq\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524133 kubelet[1564]: I1213 14:01:06.524128 1564 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-lib-modules\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524227 kubelet[1564]: I1213 14:01:06.524139 1564 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-host-proc-sys-net\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524227 kubelet[1564]: I1213 14:01:06.524150 1564 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-hostproc\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524227 kubelet[1564]: I1213 14:01:06.524159 1564 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-etc-cni-netd\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524227 kubelet[1564]: I1213 14:01:06.524170 1564 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-config-path\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524227 kubelet[1564]: I1213 14:01:06.524181 1564 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-cgroup\") on node \"10.0.0.43\" DevicePath 
\"\"" Dec 13 14:01:06.524227 kubelet[1564]: I1213 14:01:06.524191 1564 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c89c7975-b510-4a63-9c28-0517ba07bce2-hubble-tls\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524227 kubelet[1564]: I1213 14:01:06.524200 1564 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-bpf-maps\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524381 kubelet[1564]: I1213 14:01:06.524210 1564 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c89c7975-b510-4a63-9c28-0517ba07bce2-clustermesh-secrets\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524381 kubelet[1564]: I1213 14:01:06.524245 1564 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cilium-run\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524381 kubelet[1564]: I1213 14:01:06.524254 1564 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-host-proc-sys-kernel\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524381 kubelet[1564]: I1213 14:01:06.524263 1564 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-cni-path\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:06.524381 kubelet[1564]: I1213 14:01:06.524274 1564 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c89c7975-b510-4a63-9c28-0517ba07bce2-xtables-lock\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:07.088234 systemd[1]: 
var-lib-kubelet-pods-c89c7975\x2db510\x2d4a63\x2d9c28\x2d0517ba07bce2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:01:07.139407 kubelet[1564]: E1213 14:01:07.139317 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:01:07.286253 kubelet[1564]: I1213 14:01:07.286224 1564 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c89c7975-b510-4a63-9c28-0517ba07bce2" path="/var/lib/kubelet/pods/c89c7975-b510-4a63-9c28-0517ba07bce2/volumes" Dec 13 14:01:08.140354 kubelet[1564]: E1213 14:01:08.140306 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:01:09.101748 kubelet[1564]: E1213 14:01:09.101710 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:01:09.116120 kubelet[1564]: I1213 14:01:09.115987 1564 topology_manager.go:215] "Topology Admit Handler" podUID="f350fd86-4bc6-4bdc-a293-6ff5b6c9ee81" podNamespace="kube-system" podName="cilium-operator-5cc964979-9swrw" Dec 13 14:01:09.116120 kubelet[1564]: E1213 14:01:09.116036 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c89c7975-b510-4a63-9c28-0517ba07bce2" containerName="mount-cgroup" Dec 13 14:01:09.116120 kubelet[1564]: E1213 14:01:09.116048 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c89c7975-b510-4a63-9c28-0517ba07bce2" containerName="mount-bpf-fs" Dec 13 14:01:09.116120 kubelet[1564]: E1213 14:01:09.116057 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c89c7975-b510-4a63-9c28-0517ba07bce2" containerName="clean-cilium-state" Dec 13 14:01:09.116120 kubelet[1564]: E1213 14:01:09.116064 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c89c7975-b510-4a63-9c28-0517ba07bce2" containerName="cilium-agent" Dec 13 14:01:09.116120 
kubelet[1564]: E1213 14:01:09.116071 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c89c7975-b510-4a63-9c28-0517ba07bce2" containerName="apply-sysctl-overwrites" Dec 13 14:01:09.116120 kubelet[1564]: I1213 14:01:09.116092 1564 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89c7975-b510-4a63-9c28-0517ba07bce2" containerName="cilium-agent" Dec 13 14:01:09.121814 env[1308]: time="2024-12-13T14:01:09.121488303Z" level=info msg="StopPodSandbox for \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\"" Dec 13 14:01:09.121814 env[1308]: time="2024-12-13T14:01:09.121598512Z" level=info msg="TearDown network for sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" successfully" Dec 13 14:01:09.121814 env[1308]: time="2024-12-13T14:01:09.121635954Z" level=info msg="StopPodSandbox for \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" returns successfully" Dec 13 14:01:09.122291 env[1308]: time="2024-12-13T14:01:09.122083108Z" level=info msg="RemovePodSandbox for \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\"" Dec 13 14:01:09.122341 env[1308]: time="2024-12-13T14:01:09.122270962Z" level=info msg="Forcibly stopping sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\"" Dec 13 14:01:09.122473 env[1308]: time="2024-12-13T14:01:09.122439215Z" level=info msg="TearDown network for sandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" successfully" Dec 13 14:01:09.126957 env[1308]: time="2024-12-13T14:01:09.126903109Z" level=info msg="RemovePodSandbox \"accfaab249bd702df9a6f19f65f104f4f2c104db1a84f5705236d3ee5ee521bd\" returns successfully" Dec 13 14:01:09.132552 kubelet[1564]: I1213 14:01:09.132508 1564 topology_manager.go:215] "Topology Admit Handler" podUID="6a9deafd-8453-41ca-be8d-38e747f073cd" podNamespace="kube-system" podName="cilium-gszg9" Dec 13 14:01:09.136644 kubelet[1564]: I1213 14:01:09.136598 1564 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-lib-modules\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136644 kubelet[1564]: I1213 14:01:09.136641 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-etc-cni-netd\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136797 kubelet[1564]: I1213 14:01:09.136664 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cni-path\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136797 kubelet[1564]: I1213 14:01:09.136683 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-bpf-maps\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136797 kubelet[1564]: I1213 14:01:09.136701 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-hostproc\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136797 kubelet[1564]: I1213 14:01:09.136720 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-xtables-lock\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136797 kubelet[1564]: I1213 14:01:09.136740 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56nv\" (UniqueName: \"kubernetes.io/projected/6a9deafd-8453-41ca-be8d-38e747f073cd-kube-api-access-m56nv\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136797 kubelet[1564]: I1213 14:01:09.136779 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-cgroup\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136939 kubelet[1564]: I1213 14:01:09.136813 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-config-path\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136939 kubelet[1564]: I1213 14:01:09.136921 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-host-proc-sys-kernel\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136985 kubelet[1564]: I1213 14:01:09.136951 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a9deafd-8453-41ca-be8d-38e747f073cd-hubble-tls\") pod \"cilium-gszg9\" 
(UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.136985 kubelet[1564]: I1213 14:01:09.136973 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f350fd86-4bc6-4bdc-a293-6ff5b6c9ee81-cilium-config-path\") pod \"cilium-operator-5cc964979-9swrw\" (UID: \"f350fd86-4bc6-4bdc-a293-6ff5b6c9ee81\") " pod="kube-system/cilium-operator-5cc964979-9swrw" Dec 13 14:01:09.137030 kubelet[1564]: I1213 14:01:09.136994 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsq9n\" (UniqueName: \"kubernetes.io/projected/f350fd86-4bc6-4bdc-a293-6ff5b6c9ee81-kube-api-access-fsq9n\") pod \"cilium-operator-5cc964979-9swrw\" (UID: \"f350fd86-4bc6-4bdc-a293-6ff5b6c9ee81\") " pod="kube-system/cilium-operator-5cc964979-9swrw" Dec 13 14:01:09.137030 kubelet[1564]: I1213 14:01:09.137019 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a9deafd-8453-41ca-be8d-38e747f073cd-clustermesh-secrets\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.137074 kubelet[1564]: I1213 14:01:09.137055 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-host-proc-sys-net\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.137097 kubelet[1564]: I1213 14:01:09.137075 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-ipsec-secrets\") pod \"cilium-gszg9\" 
(UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.137122 kubelet[1564]: I1213 14:01:09.137106 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-run\") pod \"cilium-gszg9\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " pod="kube-system/cilium-gszg9" Dec 13 14:01:09.141599 kubelet[1564]: E1213 14:01:09.140725 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:01:09.229857 kubelet[1564]: E1213 14:01:09.229828 1564 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:01:09.312616 kubelet[1564]: E1213 14:01:09.312559 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:09.313417 env[1308]: time="2024-12-13T14:01:09.313314248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gszg9,Uid:6a9deafd-8453-41ca-be8d-38e747f073cd,Namespace:kube-system,Attempt:0,}" Dec 13 14:01:09.326816 env[1308]: time="2024-12-13T14:01:09.326717573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:01:09.326816 env[1308]: time="2024-12-13T14:01:09.326769337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:01:09.326816 env[1308]: time="2024-12-13T14:01:09.326779297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:01:09.327025 env[1308]: time="2024-12-13T14:01:09.326938109Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a pid=3330 runtime=io.containerd.runc.v2 Dec 13 14:01:09.370131 env[1308]: time="2024-12-13T14:01:09.370015540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gszg9,Uid:6a9deafd-8453-41ca-be8d-38e747f073cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a\"" Dec 13 14:01:09.372334 kubelet[1564]: E1213 14:01:09.372298 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:09.375399 env[1308]: time="2024-12-13T14:01:09.375355980Z" level=info msg="CreateContainer within sandbox \"19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:01:09.390805 env[1308]: time="2024-12-13T14:01:09.390755655Z" level=info msg="CreateContainer within sandbox \"19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f238b9b3854637749f800d5b744e65c10f312d327c74ba0283d4fe9357a91b8b\"" Dec 13 14:01:09.391340 env[1308]: time="2024-12-13T14:01:09.391266173Z" level=info msg="StartContainer for \"f238b9b3854637749f800d5b744e65c10f312d327c74ba0283d4fe9357a91b8b\"" Dec 13 14:01:09.419759 kubelet[1564]: E1213 14:01:09.419472 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:09.420255 env[1308]: time="2024-12-13T14:01:09.420219264Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-5cc964979-9swrw,Uid:f350fd86-4bc6-4bdc-a293-6ff5b6c9ee81,Namespace:kube-system,Attempt:0,}" Dec 13 14:01:09.434134 env[1308]: time="2024-12-13T14:01:09.434067263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:01:09.434134 env[1308]: time="2024-12-13T14:01:09.434105826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:01:09.434134 env[1308]: time="2024-12-13T14:01:09.434116266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:01:09.434468 env[1308]: time="2024-12-13T14:01:09.434428530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/560cc25bdb086e189c191720a1dec9068979a76424c64457c0119417d36b2cf9 pid=3394 runtime=io.containerd.runc.v2 Dec 13 14:01:09.451834 env[1308]: time="2024-12-13T14:01:09.451786191Z" level=info msg="StartContainer for \"f238b9b3854637749f800d5b744e65c10f312d327c74ba0283d4fe9357a91b8b\" returns successfully" Dec 13 14:01:09.486798 env[1308]: time="2024-12-13T14:01:09.486750373Z" level=info msg="shim disconnected" id=f238b9b3854637749f800d5b744e65c10f312d327c74ba0283d4fe9357a91b8b Dec 13 14:01:09.486798 env[1308]: time="2024-12-13T14:01:09.486797777Z" level=warning msg="cleaning up after shim disconnected" id=f238b9b3854637749f800d5b744e65c10f312d327c74ba0283d4fe9357a91b8b namespace=k8s.io Dec 13 14:01:09.487006 env[1308]: time="2024-12-13T14:01:09.486809778Z" level=info msg="cleaning up dead shim" Dec 13 14:01:09.488132 env[1308]: time="2024-12-13T14:01:09.488081153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-9swrw,Uid:f350fd86-4bc6-4bdc-a293-6ff5b6c9ee81,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"560cc25bdb086e189c191720a1dec9068979a76424c64457c0119417d36b2cf9\"" Dec 13 14:01:09.488864 kubelet[1564]: E1213 14:01:09.488836 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:09.489971 env[1308]: time="2024-12-13T14:01:09.489939612Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:01:09.495958 env[1308]: time="2024-12-13T14:01:09.495918261Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3454 runtime=io.containerd.runc.v2\n" Dec 13 14:01:10.141770 kubelet[1564]: E1213 14:01:10.141728 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:01:10.422068 env[1308]: time="2024-12-13T14:01:10.419139391Z" level=info msg="StopPodSandbox for \"19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a\"" Dec 13 14:01:10.422068 env[1308]: time="2024-12-13T14:01:10.419207076Z" level=info msg="Container to stop \"f238b9b3854637749f800d5b744e65c10f312d327c74ba0283d4fe9357a91b8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:01:10.421199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a-shm.mount: Deactivated successfully. Dec 13 14:01:10.444829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a-rootfs.mount: Deactivated successfully. 
Dec 13 14:01:10.449395 env[1308]: time="2024-12-13T14:01:10.449344954Z" level=info msg="shim disconnected" id=19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a Dec 13 14:01:10.449557 env[1308]: time="2024-12-13T14:01:10.449537328Z" level=warning msg="cleaning up after shim disconnected" id=19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a namespace=k8s.io Dec 13 14:01:10.449949 env[1308]: time="2024-12-13T14:01:10.449927676Z" level=info msg="cleaning up dead shim" Dec 13 14:01:10.458065 env[1308]: time="2024-12-13T14:01:10.458030067Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3487 runtime=io.containerd.runc.v2\n" Dec 13 14:01:10.458354 env[1308]: time="2024-12-13T14:01:10.458328129Z" level=info msg="TearDown network for sandbox \"19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a\" successfully" Dec 13 14:01:10.458405 env[1308]: time="2024-12-13T14:01:10.458356131Z" level=info msg="StopPodSandbox for \"19293bf1ad849beea594baf0a917077f157cac299163f176f30c1313b462059a\" returns successfully" Dec 13 14:01:10.583104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133607039.mount: Deactivated successfully. 
Dec 13 14:01:10.644969 kubelet[1564]: I1213 14:01:10.644907 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cni-path\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.644969 kubelet[1564]: I1213 14:01:10.644962 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-etc-cni-netd\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645233 kubelet[1564]: I1213 14:01:10.644990 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-run\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645233 kubelet[1564]: I1213 14:01:10.645008 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-hostproc\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645233 kubelet[1564]: I1213 14:01:10.645047 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m56nv\" (UniqueName: \"kubernetes.io/projected/6a9deafd-8453-41ca-be8d-38e747f073cd-kube-api-access-m56nv\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645233 kubelet[1564]: I1213 14:01:10.645068 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a9deafd-8453-41ca-be8d-38e747f073cd-hubble-tls\") pod 
\"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645233 kubelet[1564]: I1213 14:01:10.645086 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-host-proc-sys-kernel\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645233 kubelet[1564]: I1213 14:01:10.645116 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-ipsec-secrets\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645403 kubelet[1564]: I1213 14:01:10.645137 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-lib-modules\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645403 kubelet[1564]: I1213 14:01:10.645155 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-host-proc-sys-net\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645403 kubelet[1564]: I1213 14:01:10.645186 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a9deafd-8453-41ca-be8d-38e747f073cd-clustermesh-secrets\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645403 kubelet[1564]: I1213 14:01:10.645211 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-config-path\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645403 kubelet[1564]: I1213 14:01:10.645232 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-xtables-lock\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645403 kubelet[1564]: I1213 14:01:10.645255 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-bpf-maps\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645535 kubelet[1564]: I1213 14:01:10.645280 1564 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-cgroup\") pod \"6a9deafd-8453-41ca-be8d-38e747f073cd\" (UID: \"6a9deafd-8453-41ca-be8d-38e747f073cd\") " Dec 13 14:01:10.645535 kubelet[1564]: I1213 14:01:10.645364 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.645535 kubelet[1564]: I1213 14:01:10.645392 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cni-path" (OuterVolumeSpecName: "cni-path") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.645535 kubelet[1564]: I1213 14:01:10.645416 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.645535 kubelet[1564]: I1213 14:01:10.645432 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.645686 kubelet[1564]: I1213 14:01:10.645447 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-hostproc" (OuterVolumeSpecName: "hostproc") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.646780 kubelet[1564]: I1213 14:01:10.645918 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.646780 kubelet[1564]: I1213 14:01:10.645952 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.646780 kubelet[1564]: I1213 14:01:10.646141 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.646780 kubelet[1564]: I1213 14:01:10.646247 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.646780 kubelet[1564]: I1213 14:01:10.646292 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:01:10.649107 kubelet[1564]: I1213 14:01:10.648758 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:01:10.649192 kubelet[1564]: I1213 14:01:10.649120 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9deafd-8453-41ca-be8d-38e747f073cd-kube-api-access-m56nv" (OuterVolumeSpecName: "kube-api-access-m56nv") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "kube-api-access-m56nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:01:10.649858 kubelet[1564]: I1213 14:01:10.649827 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:01:10.650004 kubelet[1564]: I1213 14:01:10.649945 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9deafd-8453-41ca-be8d-38e747f073cd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:01:10.651353 kubelet[1564]: I1213 14:01:10.650972 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a9deafd-8453-41ca-be8d-38e747f073cd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6a9deafd-8453-41ca-be8d-38e747f073cd" (UID: "6a9deafd-8453-41ca-be8d-38e747f073cd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:01:10.747240 kubelet[1564]: I1213 14:01:10.746664 1564 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-ipsec-secrets\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747240 kubelet[1564]: I1213 14:01:10.746704 1564 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-host-proc-sys-kernel\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747240 kubelet[1564]: I1213 14:01:10.746719 1564 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a9deafd-8453-41ca-be8d-38e747f073cd-clustermesh-secrets\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747240 kubelet[1564]: I1213 14:01:10.746854 1564 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-lib-modules\") on 
node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747240 kubelet[1564]: I1213 14:01:10.746874 1564 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-host-proc-sys-net\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747240 kubelet[1564]: I1213 14:01:10.746886 1564 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-config-path\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747240 kubelet[1564]: I1213 14:01:10.746896 1564 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-xtables-lock\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747240 kubelet[1564]: I1213 14:01:10.746908 1564 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-bpf-maps\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747984 kubelet[1564]: I1213 14:01:10.746917 1564 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-cgroup\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747984 kubelet[1564]: I1213 14:01:10.746928 1564 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cni-path\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747984 kubelet[1564]: I1213 14:01:10.746937 1564 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-etc-cni-netd\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747984 kubelet[1564]: I1213 14:01:10.746946 1564 
reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a9deafd-8453-41ca-be8d-38e747f073cd-hubble-tls\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747984 kubelet[1564]: I1213 14:01:10.746955 1564 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-cilium-run\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747984 kubelet[1564]: I1213 14:01:10.746963 1564 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a9deafd-8453-41ca-be8d-38e747f073cd-hostproc\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:10.747984 kubelet[1564]: I1213 14:01:10.746973 1564 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-m56nv\" (UniqueName: \"kubernetes.io/projected/6a9deafd-8453-41ca-be8d-38e747f073cd-kube-api-access-m56nv\") on node \"10.0.0.43\" DevicePath \"\"" Dec 13 14:01:11.006126 kubelet[1564]: I1213 14:01:11.006025 1564 setters.go:568] "Node became not ready" node="10.0.0.43" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:01:11Z","lastTransitionTime":"2024-12-13T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:01:11.142092 kubelet[1564]: E1213 14:01:11.142036 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:01:11.242159 systemd[1]: var-lib-kubelet-pods-6a9deafd\x2d8453\x2d41ca\x2dbe8d\x2d38e747f073cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm56nv.mount: Deactivated successfully. 
Dec 13 14:01:11.242292 systemd[1]: var-lib-kubelet-pods-6a9deafd\x2d8453\x2d41ca\x2dbe8d\x2d38e747f073cd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:01:11.242393 systemd[1]: var-lib-kubelet-pods-6a9deafd\x2d8453\x2d41ca\x2dbe8d\x2d38e747f073cd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:01:11.242473 systemd[1]: var-lib-kubelet-pods-6a9deafd\x2d8453\x2d41ca\x2dbe8d\x2d38e747f073cd-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:01:11.426689 kubelet[1564]: I1213 14:01:11.426660 1564 scope.go:117] "RemoveContainer" containerID="f238b9b3854637749f800d5b744e65c10f312d327c74ba0283d4fe9357a91b8b" Dec 13 14:01:11.428097 env[1308]: time="2024-12-13T14:01:11.428057144Z" level=info msg="RemoveContainer for \"f238b9b3854637749f800d5b744e65c10f312d327c74ba0283d4fe9357a91b8b\"" Dec 13 14:01:11.431472 env[1308]: time="2024-12-13T14:01:11.431417503Z" level=info msg="RemoveContainer for \"f238b9b3854637749f800d5b744e65c10f312d327c74ba0283d4fe9357a91b8b\" returns successfully" Dec 13 14:01:11.459697 kubelet[1564]: I1213 14:01:11.459653 1564 topology_manager.go:215] "Topology Admit Handler" podUID="a761766d-f885-41a8-803c-49f8da8145d9" podNamespace="kube-system" podName="cilium-l86xd" Dec 13 14:01:11.459904 kubelet[1564]: E1213 14:01:11.459889 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6a9deafd-8453-41ca-be8d-38e747f073cd" containerName="mount-cgroup" Dec 13 14:01:11.459999 kubelet[1564]: I1213 14:01:11.459987 1564 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9deafd-8453-41ca-be8d-38e747f073cd" containerName="mount-cgroup" Dec 13 14:01:11.551681 kubelet[1564]: I1213 14:01:11.551645 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/a761766d-f885-41a8-803c-49f8da8145d9-hubble-tls\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.551681 kubelet[1564]: I1213 14:01:11.551691 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a761766d-f885-41a8-803c-49f8da8145d9-cilium-config-path\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.551877 kubelet[1564]: I1213 14:01:11.551713 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-cilium-cgroup\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.551877 kubelet[1564]: I1213 14:01:11.551736 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-host-proc-sys-kernel\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.551877 kubelet[1564]: I1213 14:01:11.551756 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hrx9\" (UniqueName: \"kubernetes.io/projected/a761766d-f885-41a8-803c-49f8da8145d9-kube-api-access-9hrx9\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.551877 kubelet[1564]: I1213 14:01:11.551776 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-cilium-run\") pod \"cilium-l86xd\" 
(UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.551877 kubelet[1564]: I1213 14:01:11.551797 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-bpf-maps\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.551877 kubelet[1564]: I1213 14:01:11.551819 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-hostproc\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.552019 kubelet[1564]: I1213 14:01:11.551836 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-lib-modules\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.552019 kubelet[1564]: I1213 14:01:11.551856 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-etc-cni-netd\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.552019 kubelet[1564]: I1213 14:01:11.551874 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-xtables-lock\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.552019 kubelet[1564]: I1213 14:01:11.551893 1564 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a761766d-f885-41a8-803c-49f8da8145d9-clustermesh-secrets\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.552019 kubelet[1564]: I1213 14:01:11.551913 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a761766d-f885-41a8-803c-49f8da8145d9-cilium-ipsec-secrets\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.552019 kubelet[1564]: I1213 14:01:11.551933 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-host-proc-sys-net\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.552149 kubelet[1564]: I1213 14:01:11.551950 1564 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a761766d-f885-41a8-803c-49f8da8145d9-cni-path\") pod \"cilium-l86xd\" (UID: \"a761766d-f885-41a8-803c-49f8da8145d9\") " pod="kube-system/cilium-l86xd" Dec 13 14:01:11.764871 kubelet[1564]: E1213 14:01:11.764847 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:11.765862 env[1308]: time="2024-12-13T14:01:11.765817205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l86xd,Uid:a761766d-f885-41a8-803c-49f8da8145d9,Namespace:kube-system,Attempt:0,}" Dec 13 14:01:11.779654 env[1308]: time="2024-12-13T14:01:11.779538459Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:01:11.779654 env[1308]: time="2024-12-13T14:01:11.779588542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:01:11.779654 env[1308]: time="2024-12-13T14:01:11.779605904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:01:11.779924 env[1308]: time="2024-12-13T14:01:11.779889204Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac pid=3515 runtime=io.containerd.runc.v2 Dec 13 14:01:11.820130 env[1308]: time="2024-12-13T14:01:11.820084498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l86xd,Uid:a761766d-f885-41a8-803c-49f8da8145d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\"" Dec 13 14:01:11.820942 kubelet[1564]: E1213 14:01:11.820908 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:11.825445 env[1308]: time="2024-12-13T14:01:11.825394995Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:01:11.834600 env[1308]: time="2024-12-13T14:01:11.834526723Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"855c68b20e3f1794112525a7b35220032af3eef91c3a8d01184fa37b341bf802\"" Dec 13 14:01:11.835252 env[1308]: time="2024-12-13T14:01:11.835219332Z" 
level=info msg="StartContainer for \"855c68b20e3f1794112525a7b35220032af3eef91c3a8d01184fa37b341bf802\"" Dec 13 14:01:11.884477 env[1308]: time="2024-12-13T14:01:11.884430106Z" level=info msg="StartContainer for \"855c68b20e3f1794112525a7b35220032af3eef91c3a8d01184fa37b341bf802\" returns successfully" Dec 13 14:01:11.982807 env[1308]: time="2024-12-13T14:01:11.982754927Z" level=info msg="shim disconnected" id=855c68b20e3f1794112525a7b35220032af3eef91c3a8d01184fa37b341bf802 Dec 13 14:01:11.982807 env[1308]: time="2024-12-13T14:01:11.982804490Z" level=warning msg="cleaning up after shim disconnected" id=855c68b20e3f1794112525a7b35220032af3eef91c3a8d01184fa37b341bf802 namespace=k8s.io Dec 13 14:01:11.982807 env[1308]: time="2024-12-13T14:01:11.982816251Z" level=info msg="cleaning up dead shim" Dec 13 14:01:11.989526 env[1308]: time="2024-12-13T14:01:11.989473044Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3598 runtime=io.containerd.runc.v2\n" Dec 13 14:01:12.143401 kubelet[1564]: E1213 14:01:12.142637 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:01:12.430708 kubelet[1564]: E1213 14:01:12.430506 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:12.432752 env[1308]: time="2024-12-13T14:01:12.432710051Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:01:12.454839 env[1308]: time="2024-12-13T14:01:12.450931752Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"2ee7e04c72314a57cff2786ad94b9abdb3aab583ed6ab5786ea21b19a9b29da8\"" Dec 13 14:01:12.451174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020172121.mount: Deactivated successfully. Dec 13 14:01:12.455627 env[1308]: time="2024-12-13T14:01:12.455593754Z" level=info msg="StartContainer for \"2ee7e04c72314a57cff2786ad94b9abdb3aab583ed6ab5786ea21b19a9b29da8\"" Dec 13 14:01:12.500676 env[1308]: time="2024-12-13T14:01:12.500624710Z" level=info msg="StartContainer for \"2ee7e04c72314a57cff2786ad94b9abdb3aab583ed6ab5786ea21b19a9b29da8\" returns successfully" Dec 13 14:01:12.535783 env[1308]: time="2024-12-13T14:01:12.535721338Z" level=info msg="shim disconnected" id=2ee7e04c72314a57cff2786ad94b9abdb3aab583ed6ab5786ea21b19a9b29da8 Dec 13 14:01:12.535783 env[1308]: time="2024-12-13T14:01:12.535766101Z" level=warning msg="cleaning up after shim disconnected" id=2ee7e04c72314a57cff2786ad94b9abdb3aab583ed6ab5786ea21b19a9b29da8 namespace=k8s.io Dec 13 14:01:12.535783 env[1308]: time="2024-12-13T14:01:12.535776262Z" level=info msg="cleaning up dead shim" Dec 13 14:01:12.542174 env[1308]: time="2024-12-13T14:01:12.542138102Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3659 runtime=io.containerd.runc.v2\n" Dec 13 14:01:12.729697 env[1308]: time="2024-12-13T14:01:12.729294771Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:01:12.730721 env[1308]: time="2024-12-13T14:01:12.730677987Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:01:12.732545 env[1308]: time="2024-12-13T14:01:12.732502833Z" level=info msg="ImageUpdate 
event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:01:12.733074 env[1308]: time="2024-12-13T14:01:12.733040990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:01:12.735453 env[1308]: time="2024-12-13T14:01:12.735406834Z" level=info msg="CreateContainer within sandbox \"560cc25bdb086e189c191720a1dec9068979a76424c64457c0119417d36b2cf9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:01:12.743301 env[1308]: time="2024-12-13T14:01:12.743255137Z" level=info msg="CreateContainer within sandbox \"560cc25bdb086e189c191720a1dec9068979a76424c64457c0119417d36b2cf9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5fb44bac262f8e33fdac680ca3995ea054e3810d43a109b55007b885ae0503fa\"" Dec 13 14:01:12.743933 env[1308]: time="2024-12-13T14:01:12.743855379Z" level=info msg="StartContainer for \"5fb44bac262f8e33fdac680ca3995ea054e3810d43a109b55007b885ae0503fa\"" Dec 13 14:01:12.789106 env[1308]: time="2024-12-13T14:01:12.788995462Z" level=info msg="StartContainer for \"5fb44bac262f8e33fdac680ca3995ea054e3810d43a109b55007b885ae0503fa\" returns successfully" Dec 13 14:01:13.143144 kubelet[1564]: E1213 14:01:13.143068 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:01:13.243283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ee7e04c72314a57cff2786ad94b9abdb3aab583ed6ab5786ea21b19a9b29da8-rootfs.mount: Deactivated successfully. 
Dec 13 14:01:13.285864 kubelet[1564]: I1213 14:01:13.285833 1564 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6a9deafd-8453-41ca-be8d-38e747f073cd" path="/var/lib/kubelet/pods/6a9deafd-8453-41ca-be8d-38e747f073cd/volumes" Dec 13 14:01:13.434373 kubelet[1564]: E1213 14:01:13.434140 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:13.436019 kubelet[1564]: E1213 14:01:13.435971 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:01:13.438833 env[1308]: time="2024-12-13T14:01:13.438790517Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:01:13.444551 kubelet[1564]: I1213 14:01:13.444499 1564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-9swrw" podStartSLOduration=1.2008060010000001 podStartE2EDuration="4.44446614s" podCreationTimestamp="2024-12-13 14:01:09 +0000 UTC" firstStartedPulling="2024-12-13 14:01:09.489633429 +0000 UTC m=+61.867021238" lastFinishedPulling="2024-12-13 14:01:12.733293568 +0000 UTC m=+65.110681377" observedRunningTime="2024-12-13 14:01:13.444098835 +0000 UTC m=+65.821486604" watchObservedRunningTime="2024-12-13 14:01:13.44446614 +0000 UTC m=+65.821853949" Dec 13 14:01:13.453701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742011643.mount: Deactivated successfully. 
Dec 13 14:01:13.455728 env[1308]: time="2024-12-13T14:01:13.455665976Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"840d89f9638bf7b9828ad9c2d92dc0af2b01fe50547097acb8306a27161a0f3a\""
Dec 13 14:01:13.456304 env[1308]: time="2024-12-13T14:01:13.456274137Z" level=info msg="StartContainer for \"840d89f9638bf7b9828ad9c2d92dc0af2b01fe50547097acb8306a27161a0f3a\""
Dec 13 14:01:13.520643 env[1308]: time="2024-12-13T14:01:13.520588158Z" level=info msg="StartContainer for \"840d89f9638bf7b9828ad9c2d92dc0af2b01fe50547097acb8306a27161a0f3a\" returns successfully"
Dec 13 14:01:13.537709 env[1308]: time="2024-12-13T14:01:13.537649309Z" level=info msg="shim disconnected" id=840d89f9638bf7b9828ad9c2d92dc0af2b01fe50547097acb8306a27161a0f3a
Dec 13 14:01:13.537709 env[1308]: time="2024-12-13T14:01:13.537693872Z" level=warning msg="cleaning up after shim disconnected" id=840d89f9638bf7b9828ad9c2d92dc0af2b01fe50547097acb8306a27161a0f3a namespace=k8s.io
Dec 13 14:01:13.537709 env[1308]: time="2024-12-13T14:01:13.537705873Z" level=info msg="cleaning up dead shim"
Dec 13 14:01:13.544813 env[1308]: time="2024-12-13T14:01:13.544771230Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3756 runtime=io.containerd.runc.v2\n"
Dec 13 14:01:14.144210 kubelet[1564]: E1213 14:01:14.144150 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:14.231338 kubelet[1564]: E1213 14:01:14.231294 1564 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:01:14.242556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-840d89f9638bf7b9828ad9c2d92dc0af2b01fe50547097acb8306a27161a0f3a-rootfs.mount: Deactivated successfully.
Dec 13 14:01:14.440321 kubelet[1564]: E1213 14:01:14.439847 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:01:14.440321 kubelet[1564]: E1213 14:01:14.439889 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:01:14.441995 env[1308]: time="2024-12-13T14:01:14.441960734Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:01:14.456659 env[1308]: time="2024-12-13T14:01:14.456608059Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4626dc17e2463236067e19c5edfd4b67eb3fc2b6ee0da72ee00a3db1a86966d2\""
Dec 13 14:01:14.457411 env[1308]: time="2024-12-13T14:01:14.457357909Z" level=info msg="StartContainer for \"4626dc17e2463236067e19c5edfd4b67eb3fc2b6ee0da72ee00a3db1a86966d2\""
Dec 13 14:01:14.510434 env[1308]: time="2024-12-13T14:01:14.510379163Z" level=info msg="StartContainer for \"4626dc17e2463236067e19c5edfd4b67eb3fc2b6ee0da72ee00a3db1a86966d2\" returns successfully"
Dec 13 14:01:14.526610 env[1308]: time="2024-12-13T14:01:14.526547428Z" level=info msg="shim disconnected" id=4626dc17e2463236067e19c5edfd4b67eb3fc2b6ee0da72ee00a3db1a86966d2
Dec 13 14:01:14.526775 env[1308]: time="2024-12-13T14:01:14.526614153Z" level=warning msg="cleaning up after shim disconnected" id=4626dc17e2463236067e19c5edfd4b67eb3fc2b6ee0da72ee00a3db1a86966d2 namespace=k8s.io
Dec 13 14:01:14.526775 env[1308]: time="2024-12-13T14:01:14.526624074Z" level=info msg="cleaning up dead shim"
Dec 13 14:01:14.533500 env[1308]: time="2024-12-13T14:01:14.533462684Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:01:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3811 runtime=io.containerd.runc.v2\n"
Dec 13 14:01:15.144792 kubelet[1564]: E1213 14:01:15.144717 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:15.242671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4626dc17e2463236067e19c5edfd4b67eb3fc2b6ee0da72ee00a3db1a86966d2-rootfs.mount: Deactivated successfully.
Dec 13 14:01:15.443021 kubelet[1564]: E1213 14:01:15.442796 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:01:15.445354 env[1308]: time="2024-12-13T14:01:15.445303455Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:01:15.458107 env[1308]: time="2024-12-13T14:01:15.458040795Z" level=info msg="CreateContainer within sandbox \"fa5b60f943cdcfbbe3a6028ca8a385c8789939472d65a6f0d500435a9d934cac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2388654d851df8fcc36c7cfc47827132a0b24bfe6c0e2b998cb942f52f3208e4\""
Dec 13 14:01:15.458684 env[1308]: time="2024-12-13T14:01:15.458648354Z" level=info msg="StartContainer for \"2388654d851df8fcc36c7cfc47827132a0b24bfe6c0e2b998cb942f52f3208e4\""
Dec 13 14:01:15.515337 env[1308]: time="2024-12-13T14:01:15.515287802Z" level=info msg="StartContainer for \"2388654d851df8fcc36c7cfc47827132a0b24bfe6c0e2b998cb942f52f3208e4\" returns successfully"
Dec 13 14:01:15.854633 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Dec 13 14:01:16.145848 kubelet[1564]: E1213 14:01:16.145720 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:16.448154 kubelet[1564]: E1213 14:01:16.448038 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:01:16.462254 kubelet[1564]: I1213 14:01:16.462210 1564 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-l86xd" podStartSLOduration=5.462172067 podStartE2EDuration="5.462172067s" podCreationTimestamp="2024-12-13 14:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:01:16.461309452 +0000 UTC m=+68.838697261" watchObservedRunningTime="2024-12-13 14:01:16.462172067 +0000 UTC m=+68.839559876"
Dec 13 14:01:17.146381 kubelet[1564]: E1213 14:01:17.146318 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:17.678591 systemd[1]: run-containerd-runc-k8s.io-2388654d851df8fcc36c7cfc47827132a0b24bfe6c0e2b998cb942f52f3208e4-runc.DxAarB.mount: Deactivated successfully.
Dec 13 14:01:17.766102 kubelet[1564]: E1213 14:01:17.766064 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:01:18.147275 kubelet[1564]: E1213 14:01:18.147223 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:18.665483 systemd-networkd[1103]: lxc_health: Link UP
Dec 13 14:01:18.675489 systemd-networkd[1103]: lxc_health: Gained carrier
Dec 13 14:01:18.675666 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:01:19.147665 kubelet[1564]: E1213 14:01:19.147616 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:19.766848 kubelet[1564]: E1213 14:01:19.766804 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:01:19.832018 systemd[1]: run-containerd-runc-k8s.io-2388654d851df8fcc36c7cfc47827132a0b24bfe6c0e2b998cb942f52f3208e4-runc.SojrjN.mount: Deactivated successfully.
Dec 13 14:01:19.928786 systemd-networkd[1103]: lxc_health: Gained IPv6LL
Dec 13 14:01:20.148708 kubelet[1564]: E1213 14:01:20.148218 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:20.455558 kubelet[1564]: E1213 14:01:20.455250 1564 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:01:21.148625 kubelet[1564]: E1213 14:01:21.148564 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:21.976107 systemd[1]: run-containerd-runc-k8s.io-2388654d851df8fcc36c7cfc47827132a0b24bfe6c0e2b998cb942f52f3208e4-runc.9L8XK5.mount: Deactivated successfully.
Dec 13 14:01:22.149625 kubelet[1564]: E1213 14:01:22.149560 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:23.150388 kubelet[1564]: E1213 14:01:23.150333 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:24.100584 systemd[1]: run-containerd-runc-k8s.io-2388654d851df8fcc36c7cfc47827132a0b24bfe6c0e2b998cb942f52f3208e4-runc.rNPYni.mount: Deactivated successfully.
Dec 13 14:01:24.151106 kubelet[1564]: E1213 14:01:24.151046 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:01:25.151566 kubelet[1564]: E1213 14:01:25.151519 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"